Compare commits
No commits in common. "master" and "0.6.19" have entirely different histories.
.gitattributes (1 change, vendored)
@@ -1,5 +1,4 @@
constants.py ident export-subst
/test export-ignore
/library export-ignore
cps/static/css/libs/* linguist-vendored
cps/static/js/libs/* linguist-vendored
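These attributes only take effect in specific contexts: `export-ignore` is honored by `git archive`, and `linguist-vendored` only affects GitHub's language statistics. A minimal sketch of `export-ignore` at work, using a scratch repository with hypothetical file names (assumes `git` and `tar` are on PATH):

```shell
# Build a throwaway repo with the same /test export-ignore rule as above
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
mkdir test
echo data > test/file.txt
echo keep > kept.txt
printf '/test export-ignore\n' > .gitattributes
git add -A
git commit -qm init
# The archive lists kept.txt and .gitattributes, but nothing under /test
git archive HEAD | tar -tf -
```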
.github/ISSUE_TEMPLATE/bug_report.md (20 changes, vendored)
@@ -6,23 +6,12 @@ labels: ''
assignees: ''

---

<!-- Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md) -->

## Short Notice from the maintainer

After 6 years of more or less intensive programming on Calibre-Web, I need a break.
The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
I have turned off all notifications from GitHub/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.


Please also have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)

**Describe the bug/problem**

**Describe the bug/problem**
A clear and concise description of what the bug is. If you are asking for support, please check our [Wiki](https://github.com/janeczku/calibre-web/wiki) if your question is already answered there.

**To Reproduce**

Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
@@ -30,19 +19,15 @@ Steps to reproduce the behavior:
4. See error

**Logfile**

Add content of calibre-web.log file or the relevant error, try to reproduce your problem with "debug" log-level to get more output.

**Expected behavior**

A clear and concise description of what you expected to happen.

**Screenshots**

If applicable, add screenshots to help explain your problem.

**Environment (please complete the following information):**

- OS: [e.g. Windows 10/Raspberry Pi OS]
- Python version: [e.g. python2.7]
- Calibre-Web version: [e.g. 0.6.8 or 087c4c59 (git rev-parse --short HEAD)]:
@@ -52,4 +37,3 @@ If applicable, add screenshots to help explain your problem.

**Additional context**
Add any other context about the problem here. [e.g. access via reverse proxy, database background sync, special database location]
.github/ISSUE_TEMPLATE/feature_request.md (9 changes, vendored)
@@ -7,14 +7,7 @@ assignees: ''

---

# Short Notice from the maintainer

After 6 years of more or less intensive programming on Calibre-Web, I need a break.
The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
I have turned off all notifications from GitHub/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.

Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)
<!-- Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md) -->

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
.gitignore (2 changes, vendored)
@@ -28,10 +28,8 @@ cps/cache
.idea/
*.bak
*.log.*
.key

settings.yaml
gdrive_credentials
client_secrets.json
gmail.json
/.key
CONTRIBUTING.md
@@ -20,20 +20,20 @@ Some of the user languages in Calibre-Web having missing translations. We are ha

### **Documentation**

The Calibre-Web documentation is hosted in the GitHub [Wiki](https://github.com/janeczku/calibre-web/wiki). The Wiki is open to everybody, if you find a problem, feel free to correct it. If information is missing, you are welcome to add it. The content will be reviewed time by time. Please try to be consistent with the form with the other Wiki pages (e.g. the project name is Calibre-Web with 2 capital letters and a dash in between).
The Calibre-Web documentation is hosted in the Github [Wiki](https://github.com/janeczku/calibre-web/wiki). The Wiki is open to everybody, if you find a problem, feel free to correct it. If information is missing, you are welcome to add it. The content will be reviewed time by time. Please try to be consistent with the form with the other Wiki pages (e.g. the project name is Calibre-Web with 2 capital letters and a dash in between).

### **Reporting a bug**

Do not open up a GitHub issue if the bug is a **security vulnerability** in Calibre-Web. Instead, please write an email to "ozzie.fernandez.isaacs@googlemail.com".

Ensure the **bug was not already reported** by searching on GitHub under [Issues](https://github.com/janeczku/calibre-web/issues). Please also check if a solution for your problem can be found in the [wiki](https://github.com/janeczku/calibre-web/wiki).
Ensure the ***bug was not already reported** by searching on GitHub under [Issues](https://github.com/janeczku/calibre-web/issues). Please also check if a solution for your problem can be found in the [wiki](https://github.com/janeczku/calibre-web/wiki).

If you're unable to find an **open issue** addressing the problem, open a [new one](https://github.com/janeczku/calibre-web/issues/new/choose). Be sure to include a **title** and **clear description**, as much relevant information as possible, the **issue form** helps you provide the right information. Deleting the form and just pasting the stack trace doesn't speed up fixing the problem. If your issue could be resolved, consider closing the issue.
If you're unable to find an **open issue** addressing the problem, open a [new one](https://github.com/janeczku/calibre-web/issues/new?assignees=&labels=&template=bug_report.md&title=). Be sure to include a **title** and **clear description**, as much relevant information as possible, the **issue form** helps you providing the right information. Deleting the form and just pasting the stack trace doesn't speed up fixing the problem. If your issue could be resolved, consider closing the issue.

### **Feature Request**

If there is a feature missing in Calibre-Web and you can't find a feature request in the [Issues](https://github.com/janeczku/calibre-web/issues) section, you could create a [feature request](https://github.com/janeczku/calibre-web/issues/new?assignees=&labels=&template=feature_request.md&title=).
We will not extend Calibre-Web with any more login abilities or add further files storages, or file syncing ability. Calibre-Web is made for home usage for company in-house usage, so requests regarding any sorts of social interaction capability, payment routines, search engine or website analytics integration will not be implemented.
We will not extend Calibre-Web with any more login abilities or add further files storages, or file syncing ability. Furthermore Calibre-Web is made for home usage for company in-house usage, so requests regarding any sorts of social interaction capability, payment routines, search engine or web site analytics integration will not be implemented.

### **Contributing code to Calibre-Web**

@@ -42,5 +42,5 @@ Open a new GitHub pull request with the patch. Ensure the PR description clearly
In case your code enhances features of Calibre-Web: Create your pull request for the development branch if your enhancement consists of more than some lines of code in a local section of Calibre-Webs code. This makes it easier to test it and check all implication before it's made public.

Please check if your code runs with python 3, python 2 is no longer supported. If possible and the feature is related to operating system functions, try to check it on Windows and Linux.
Calibre-Web is automatically tested on Linux in combination with python 3.8. The code for testing is in a [separate repo](https://github.com/OzzieIsaacs/calibre-web-test) on GitHub. It uses unit tests and performs real system tests with selenium; it would be great if you could consider also writing some tests.
Calibre-Web is automatically tested on Linux in combination with python 3.8. The code for testing is in a [separate repo](https://github.com/OzzieIsaacs/calibre-web-test) on Github. It uses unit tests and performs real system tests with selenium; it would be great if you could consider also writing some tests.
A static code analysis is done by Codacy, but it's partly broken and doesn't run automatically. You could check your code with ESLint before contributing, a configuration file can be found in the projects root folder.
MANIFEST.in
@@ -1,3 +1 @@
graft src/calibreweb
global-exclude __pycache__
global-exclude *.pyc
README.md (215 changes)
@@ -1,186 +1,99 @@
# Calibre-Web
# About

Calibre-Web is a web app that offers a clean and intuitive interface for browsing, reading, and downloading eBooks using a valid [Calibre](https://calibre-ebook.com) database.
Calibre-Web is a web app providing a clean interface for browsing, reading and downloading eBooks using a valid [Calibre](https://calibre-ebook.com) database.

[](https://github.com/janeczku/calibre-web/blob/master/LICENSE)
[](https://github.com/janeczku/calibre-web/releases)
[](https://github.com/janeczku/calibre-web/blob/master/LICENSE)
[](https://github.com/janeczku/calibre-web/releases)
[](https://pypi.org/project/calibreweb/)
[](https://pypi.org/project/calibreweb/)
[](https://discord.gg/h2VsJ2NEfB)

<details>
<summary><strong>Table of Contents</strong> (click to expand)</summary>

1. [About](#calibre-web)
2. [Features](#features)
3. [Installation](#installation)
   - [Installation via pip (recommended)](#installation-via-pip-recommended)
   - [Quick start](#quick-start)
   - [Requirements](#requirements)
4. [Docker Images](#docker-images)
5. [Troubleshooting](#troubleshooting)
6. [Contributor Recognition](#contributor-recognition)
7. [Contact](#contact)
8. [Contributing to Calibre-Web](#contributing-to-calibre-web)

</details>

*This software is a fork of [library](https://github.com/mutschler/calibreserver) and licensed under the GPL v3 License.*

## Features

- Modern and responsive Bootstrap 3 HTML5 interface
- Full graphical setup
- Comprehensive user management with fine-grained per-user permissions
- Bootstrap 3 HTML5 interface
- full graphical setup
- User management with fine-grained per-user permissions
- Admin interface
- Multilingual user interface supporting 20+ languages ([supported languages](https://github.com/janeczku/calibre-web/wiki/Translation-Status))
- OPDS feed for eBook reader apps
- Advanced search and filtering options
- Custom book collection (shelves) creation
- eBook metadata editing and deletion support
- Metadata download from various sources (extensible via plugins)
- eBook conversion through Calibre binaries
- eBook download restriction to logged-in users
- Public user registration support
- Send eBooks to E-Readers with a single click
- Sync Kobo devices with your Calibre library
- In-browser eBook reading support for multiple formats
- Upload new books in various formats, including audio formats
- Calibre Custom Columns support
- Content hiding based on categories and Custom Column content per user
- User Interface in brazilian, czech, dutch, english, finnish, french, german, greek, hungarian, italian, japanese, khmer, korean, polish, russian, simplified and traditional chinese, spanish, swedish, turkish, ukrainian
- OPDS feed for eBook reader apps
- Filter and search by titles, authors, tags, series, book format and language
- Create a custom book collection (shelves)
- Support for editing eBook metadata and deleting eBooks from Calibre library
- Support for downloading eBook metadata from various sources, sources can be extended via external plugins
- Support for converting eBooks through Calibre binaries
- Restrict eBook download to logged-in users
- Support for public user registration
- Send eBooks to E-Readers with the click of a button
- Sync your Kobo devices through Calibre-Web with your Calibre library
- Support for reading eBooks directly in the browser (.txt, .epub, .pdf, .cbr, .cbt, .cbz, .djvu)
- Upload new books in many formats, including audio formats (.mp3, .m4a, .m4b)
- Support for Calibre Custom Columns
- Ability to hide content based on categories and Custom Column content per user
- Self-update capability
- "Magic Link" login for easy access on eReaders
- LDAP, Google/GitHub OAuth, and proxy authentication support
- "Magic Link" login to make it easy to log on eReaders
- Login via LDAP, google/github oauth and via proxy authentication
## Installation

### Installation via pip (recommended)
#### Installation via pip (recommended)
1. To avoid problems with already installed python dependencies, it's recommended to create a virtual environment for Calibre-Web
2. Install Calibre-Web via pip with the command `pip install calibreweb` (Depending on your OS and or distro the command could also be `pip3`).
3. Optional features can also be installed via pip, please refer to [this page](https://github.com/janeczku/calibre-web/wiki/Dependencies-in-Calibre-Web-Linux-and-Windows) for details
4. Calibre-Web can be started afterwards by typing `cps`

1. **Create a virtual environment**: It’s essential to isolate your Calibre-Web installation to avoid dependency conflicts. You can create a virtual environment by running:
   ```
   python3 -m venv calibre-web-env
   ```
2. **Activate the virtual environment**:
   ```
   source calibre-web-env/bin/activate
   ```
3. **Install Calibre-Web**: Use pip to install the application:
   ```
   pip install calibreweb
   ```
4. **Install optional features**: For additional functionality, you may need to install optional features. Refer to [this page](https://github.com/janeczku/calibre-web/wiki/Dependencies-in-Calibre-Web-Linux-and-Windows) for details on what can be installed.
5. **Start Calibre-Web**: After installation, you can start the application with:
   ```
   cps
   ```
In the Wiki there are also examples for: a [manual installation](https://github.com/janeczku/calibre-web/wiki/Manual-installation), [installation on Linux Mint](https://github.com/janeczku/calibre-web/wiki/How-To:Install-Calibre-Web-in-Linux-Mint-19-or-20), [installation on a Cloud Provider](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-on-a-Cloud-Provider).

*Note: Users of Raspberry Pi OS may encounter installation issues. If you do, try upgrading pip and/or installing cargo as follows:*
```
./venv/bin/python3 -m pip install --upgrade pip
sudo apt install cargo
```
## Quick start

### Important Links
- For additional installation examples, check the following:
  - [Manual installation](https://github.com/janeczku/calibre-web/wiki/Manual-installation)
  - [Linux Mint installation](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-in-Linux-Mint-19-or-20)
  - [Cloud Provider setup](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-on-a-Cloud-Provider)
Point your browser to `http://localhost:8083` or `http://localhost:8083/opds` for the OPDS catalog \
Login with default admin login \
Set `Location of Calibre database` to the path of the folder where your Calibre library (metadata.db) lives, push "submit" button \
Optionally a Google Drive can be used to host the calibre library [-> Using Google Drive integration](https://github.com/janeczku/calibre-web/wiki/G-Drive-Setup#using-google-drive-integration) \
Afterwards you can configure your Calibre-Web instance ([Basic Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#basic-configuration) and [UI Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#ui-configuration) on admin page)

## Quick Start
#### Default admin login:
*Username:* admin\
*Password:* admin123

1. **Access Calibre-Web**: Open your browser and navigate to:
   ```
   http://localhost:8083
   ```
   or for the OPDS catalog:
   ```
   http://localhost:8083/opds
   ```
2. **Log in**: Use the default admin credentials:
   - **Username:** admin
   - **Password:** admin123
3. **Database Setup**: If you do not have a Calibre database, download a sample from:
   ```
   https://github.com/janeczku/calibre-web/raw/master/library/metadata.db
   ```
   Move it out of the Calibre-Web folder to avoid overwriting during updates.
4. **Configure Calibre Database**: In the admin interface, set the `Location of Calibre database` to the path of the folder containing your Calibre library (where `metadata.db` is located) and click "Save".
5. **Google Drive Integration**: For hosting your Calibre library on Google Drive, refer to the [Google Drive integration guide](https://github.com/janeczku/calibre-web/wiki/G-Drive-Setup#using-google-drive-integration).
6. **Admin Configuration**: Configure your instance via the admin page, referring to the [Basic Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#basic-configuration) and [UI Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#ui-configuration) guides.
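Before pointing `Location of Calibre database` at a folder, you can sanity-check that its `metadata.db` is readable. The `books` table with a `title` column is part of Calibre's schema; the helper below is an illustration, not part of Calibre-Web, and the demo runs against a throwaway stand-in database:

```python
import os
import sqlite3
import tempfile

def list_titles(db_path):
    """Return all book titles from a Calibre metadata.db (books.title column)."""
    with sqlite3.connect(db_path) as con:
        return [row[0] for row in con.execute("SELECT title FROM books")]

# Illustration against a throwaway file shaped like Calibre's books table:
db = os.path.join(tempfile.mkdtemp(), "metadata.db")
con = sqlite3.connect(db)
con.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT)")
con.execute("INSERT INTO books (title) VALUES ('Example Book')")
con.commit()
con.close()
print(list_titles(db))  # ['Example Book']
```

Pointing `list_titles` at your real library's `metadata.db` should print your book titles; an exception here usually means the path or file permissions are wrong.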
## Requirements

- **Python Version**: Ensure you have Python 3.7 or newer.
- **Imagemagick**: Required for cover extraction from EPUBs. Windows users may also need to install [Ghostscript](https://ghostscript.com/releases/gsdnld.html) for PDF cover extraction.
- **Optional Tools**:
  - **Calibre desktop program**: Recommended for on-the-fly conversion and metadata editing. Set the path to Calibre’s converter tool on the setup page.
  - **Kepubify tool**: Needed for Kobo device support. Download the tool and place the binary in `/opt/kepubify` on Linux or `C:\Program Files\kepubify` on Windows.
python 3.5+

Optionally, to enable on-the-fly conversion from one ebook format to another when using the send-to-ereader feature, or during editing of ebooks metadata:

[Download and install](https://calibre-ebook.com/download) the Calibre desktop program for your platform and enter the folder including program name (normally /opt/calibre/ebook-convert, or C:\Program Files\calibre\ebook-convert.exe) in the field "calibre's converter tool" on the setup page.

[Download](https://github.com/pgaskin/kepubify/releases/latest) Kepubify tool for your platform and place the binary starting with `kepubify` in Linux: `/opt/kepubify` Windows: `C:\Program Files\kepubify`.
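Before entering the converter and Kepubify paths on the setup page, you can check whether the binaries are actually reachable. A small sketch (tool names from this section; install locations vary per platform, so this only reports what is on PATH):

```python
import shutil

# Check the optional helper binaries mentioned above before pointing
# Calibre-Web at them; shutil.which returns None when a tool is absent.
status = {tool: shutil.which(tool) for tool in ("ebook-convert", "kepubify")}
for tool, found in status.items():
    print(tool, "->", found or "not on PATH")
```

If a tool lives outside PATH (e.g. `/opt/kepubify`), enter its full path on the setup page instead.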
## Docker Images

Pre-built Docker images are available:
A pre-built Docker image is available in these Docker Hub repository (maintained by the LinuxServer team):

### **LinuxServer - x64, aarch64**
- **Docker Hub**: [linuxserver/calibre-web](https://hub.docker.com/r/linuxserver/calibre-web)
- **GitHub**: [linuxserver/docker-calibre-web](https://github.com/linuxserver/docker-calibre-web)
- **Optional Calibre layer**: [linuxserver/docker-mods](https://github.com/linuxserver/docker-mods/tree/universal-calibre)
#### **LinuxServer - x64, armhf, aarch64**
+ Docker Hub - [https://hub.docker.com/r/linuxserver/calibre-web](https://hub.docker.com/r/linuxserver/calibre-web)
+ Github - [https://github.com/linuxserver/docker-calibre-web](https://github.com/linuxserver/docker-calibre-web)
+ Github - (Optional Calibre layer) - [https://github.com/linuxserver/docker-calibre-web/tree/calibre](https://github.com/linuxserver/docker-calibre-web/tree/calibre)

To include the Calibre `ebook-convert` binary (x64 only), add the environment variable:
```
DOCKER_MODS=linuxserver/mods:universal-calibre
```
in your Docker run/compose file. Omit this variable for a lightweight image.
This image has the option to pull in an extra docker manifest layer to include the Calibre `ebook-convert` binary. Just include the environmental variable `DOCKER_MODS=linuxserver/calibre-web:calibre` in your docker run/docker compose file. **(x64 only)**

If you do not need this functionality then this can be omitted, keeping the image as lightweight as possible.

Both the Calibre-Web and Calibre-Mod images are rebuilt automatically on new releases of Calibre-Web and Calibre respectively, and on updates to any included base image packages on a weekly basis if required.
+ The "path to convertertool" should be set to `/usr/bin/ebook-convert`
+ The "path to unrar" should be set to `/usr/bin/unrar`

- **Paths Configuration**:
  - Set **Path to Calibre Binaries** to `/usr/bin`.
  - Set **Path to Unrar** to `/usr/bin/unrar`.
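As an illustration, the environment variable can be wired into a compose file like this (image and mod names as above; the host paths and service name are placeholders you would adapt):

```
services:
  calibre-web:
    image: linuxserver/calibre-web
    environment:
      # Optional Calibre layer for ebook-convert (x64 only); omit for a lighter image
      - DOCKER_MODS=linuxserver/mods:universal-calibre
    volumes:
      - /path/to/config:/config
      - /path/to/library:/books
    ports:
      - "8083:8083"
```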
# Contact

## Troubleshooting
Just reach us out on [Discord](https://discord.gg/h2VsJ2NEfB)

- **Common Issues**:
  - If you experience issues starting the application, check the log files located in the `logs` directory for error messages.
  - If eBooks fail to load, verify that the `Location of Calibre database` is correctly set and that the database file is accessible.
For further information, How To's and FAQ please check the [Wiki](https://github.com/janeczku/calibre-web/wiki)

- **Configuration Errors**: Ensure that your Calibre database is compatible and properly formatted. Refer to the Calibre documentation for guidance on maintaining the database.
# Contributing to Calibre-Web

- **Performance Problems**:
  - If the application is slow, consider increasing the allocated resources (CPU/RAM) to your server or optimizing the Calibre database by removing duplicates and unnecessary entries.
  - Regularly clear the cache in your web browser to improve loading times.

- **User Management Issues**: If users are unable to log in or register, check the user permission settings in the admin interface. Ensure that registration is enabled and that users are being assigned appropriate roles.

- **Support Resources**: For additional help, consider visiting the [FAQ section](https://github.com/janeczku/calibre-web/wiki/FAQ) of the wiki or posting your questions in the [Discord community](https://discord.gg/h2VsJ2NEfB).

## Contributor Recognition

We would like to thank all the [contributors](https://github.com/janeczku/calibre-web/graphs/contributors) and maintainers of Calibre-Web for their valuable input and dedication to the project. Your contributions are greatly appreciated.

## Contact

Join us on [Discord](https://discord.gg/h2VsJ2NEfB)

For more information, How To's, and FAQs, please visit the [Wiki](https://github.com/janeczku/calibre-web/wiki)

## Contributing to Calibre-Web

To contribute, please check our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md). We welcome issues, feature requests, and pull requests from the community.

### Reporting Bugs

If you encounter bugs or issues, please report them in the [issues section](https://github.com/janeczku/calibre-web/issues) of the repository. Be sure to include detailed information about your setup and the problem encountered.

### Feature Requests

We welcome suggestions for new features. Please create a new issue in the repository to discuss your ideas.

## Additional Resources

- **Documentation**: Comprehensive documentation is available on the [Calibre-Web wiki](https://github.com/janeczku/calibre-web/wiki).
- **Community Contributions**: Explore the [community contributions](https://github.com/janeczku/calibre-web/pulls) to see ongoing work and how you can get involved.

---

Thank you for using Calibre-Web! We hope you enjoy managing your eBook library with our tool.
Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)
68
SECURITY.md
@ -10,46 +10,34 @@ To receive fixes for security vulnerabilities it is required to always upgrade t
|
||||
|
||||
## History
|
||||
|
||||
| Fixed in | Description | CVE number |
|
||||
|---------------|----------------------------------------------------------------------------------------------------------------------------------|----------------|
|
||||
| 3rd July 2018 | Guest access acts as a backdoor | |
|
||||
| V 0.6.7 | Hardcoded secret key for sessions | CVE-2020-12627 |
|
||||
| V 0.6.13 | Calibre-Web Metadata cross site scripting | CVE-2021-25964 |
|
||||
| V 0.6.13 | Name of Shelves are only visible to users who can access the corresponding shelf Thanks to @ibarrionuevo | |
|
||||
| V 0.6.13 | JavaScript could get executed in the description field. Thanks to @ranjit-git and Hagai Wechsler (WhiteSource) | |
|
||||
| V 0.6.13 | JavaScript could get executed in a custom column of type "comment" field | |
|
||||
| V 0.6.13 | JavaScript could get executed after converting a book to another format with a title containing javascript code | |
|
||||
| V 0.6.13 | JavaScript could get executed after converting a book to another format with a username containing javascript code | |
|
||||
| V 0.6.13 | JavaScript could get executed in the description series, categories or publishers title | |
|
||||
| V 0.6.13 | JavaScript could get executed in the shelf title | |
|
||||
| V 0.6.13 | Login with the old session cookie after logout. Thanks to @ibarrionuevo | |
|
||||
| V 0.6.14 | CSRF was possible. Thanks to @mik317 and Hagai Wechsler (WhiteSource) | CVE-2021-25965 |
|
||||
| V 0.6.14 | Migrated some routes to POST-requests (CSRF protection). Thanks to @scara31 | CVE-2021-4164 |
|
||||
| V 0.6.15 | Fix for "javascript:" script links in identifier. Thanks to @scara31 | CVE-2021-4170 |
|
||||
| V 0.6.15 | Cross-Site Scripting vulnerability on uploaded cover file names. Thanks to @ibarrionuevo | |
|
||||
| V 0.6.15 | Creating public shelfs is now denied if user is missing the edit public shelf right. Thanks to @ibarrionuevo | |
|
||||
| V 0.6.15 | Changed error message in case of trying to delete a shelf unauthorized. Thanks to @ibarrionuevo | |
|
||||
| V 0.6.16 | JavaScript could get executed on authors page. Thanks to @alicaz | CVE-2022-0352 |
|
||||
| V 0.6.16 | Localhost can no longer be used to upload covers. Thanks to @scara31 | CVE-2022-0339 |
|
||||
| V 0.6.16 | Another case where public shelfs could be created without permission is prevented. Thanks to @nhiephon | CVE-2022-0273 |
|
||||
| V 0.6.16 | It's prevented to get the name of a private shelfs. Thanks to @nhiephon | CVE-2022-0405 |
|
||||
| V 0.6.17 | The SSRF Protection can no longer be bypassed via an HTTP redirect. Thanks to @416e6e61 | CVE-2022-0767 |
|
||||
| V 0.6.17 | The SSRF Protection can no longer be bypassed via 0.0.0.0 and it's ipv6 equivalent. Thanks to @r0hanSH | CVE-2022-0766 |
|
||||
| V 0.6.18 | Possible SQL Injection is prevented in user table Thanks to Iman Sharafaldin (Forward Security) | CVE-2022-30765 |
|
||||
| V 0.6.18 | The SSRF protection no longer can be bypassed by IPV6/IPV4 embedding. Thanks to @416e6e61 | CVE-2022-0939 |
|
||||
| V 0.6.18 | The SSRF protection no longer can be bypassed to connect to other servers in the local network. Thanks to @michaellrowley | CVE-2022-0990 |
|
||||
| V 0.6.20 | Credentials for emails are now stored encrypted | |
|
||||
| V 0.6.20 | Login is rate limited | |
|
||||
| V 0.6.20 | Passwordstrength can be forced | |
|
||||
| V 0.6.21 | SMTP server credentials are no longer returned to client | |
|
||||
| V 0.6.21 | Cross-site scripting (XSS) stored in href bypasses filter using data wrapper no longer possible | |
|
||||
| V 0.6.21 | Cross-site scripting (XSS) is no longer possible via pathchooser | |
|
||||
| V 0.6.21 | Error Handling at non existent rating, language, and user downloaded books was fixed | |
|
||||
| V 0.6.22 | Upload mimetype is checked to prevent malicious file content in the books library | |
|
||||
| Fixed in | Description | CVE number |
|----------|-------------|------------|
| 3rd July 2018 | Guest access acts as a backdoor | |
| V 0.6.7 | Hardcoded secret key for sessions | CVE-2020-12627 |
| V 0.6.13 | Calibre-Web metadata cross-site scripting | CVE-2021-25964 |
| V 0.6.13 | Names of shelves are only visible to users who can access the corresponding shelf. Thanks to @ibarrionuevo | |
| V 0.6.13 | JavaScript could get executed in the description field. Thanks to @ranjit-git and Hagai Wechsler (WhiteSource) | |
| V 0.6.13 | JavaScript could get executed in a custom column of type "comment" | |
| V 0.6.13 | JavaScript could get executed after converting a book to another format with a title containing JavaScript code | |
| V 0.6.13 | JavaScript could get executed after converting a book to another format with a username containing JavaScript code | |
| V 0.6.13 | JavaScript could get executed in the series, categories or publishers title | |
| V 0.6.13 | JavaScript could get executed in the shelf title | |
| V 0.6.13 | Login with the old session cookie was possible after logout. Thanks to @ibarrionuevo | |
| V 0.6.14 | CSRF was possible. Thanks to @mik317 and Hagai Wechsler (WhiteSource) | CVE-2021-25965 |
| V 0.6.14 | Migrated some routes to POST requests (CSRF protection). Thanks to @scara31 | CVE-2021-4164 |
| V 0.6.15 | Fix for "javascript:" script links in identifiers. Thanks to @scara31 | CVE-2021-4170 |
| V 0.6.15 | Cross-site scripting vulnerability in uploaded cover file names. Thanks to @ibarrionuevo | |
| V 0.6.15 | Creating public shelves is now denied if the user is missing the edit-public-shelf right. Thanks to @ibarrionuevo | |
| V 0.6.15 | Changed error message when trying to delete a shelf without authorization. Thanks to @ibarrionuevo | |
| V 0.6.16 | JavaScript could get executed on the authors page. Thanks to @alicaz | CVE-2022-0352 |
| V 0.6.16 | Localhost can no longer be used to upload covers. Thanks to @scara31 | CVE-2022-0339 |
| V 0.6.16 | Another case where public shelves could be created without permission is prevented. Thanks to @nhiephon | CVE-2022-0273 |
| V 0.6.16 | Retrieving the name of a private shelf is prevented. Thanks to @nhiephon | CVE-2022-0405 |
| V 0.6.17 | The SSRF protection can no longer be bypassed via an HTTP redirect. Thanks to @416e6e61 | CVE-2022-0767 |
| V 0.6.17 | The SSRF protection can no longer be bypassed via 0.0.0.0 and its IPv6 equivalent. Thanks to @r0hanSH | CVE-2022-0766 |
| V 0.6.18 | Possible SQL injection in the user table is prevented. Thanks to Iman Sharafaldin (Forward Security) | CVE-2022-30765 |
| V 0.6.18 | The SSRF protection can no longer be bypassed by IPv6/IPv4 embedding. Thanks to @416e6e61 | CVE-2022-0939 |
| V 0.6.18 | The SSRF protection can no longer be bypassed to connect to other servers in the local network. Thanks to @michaellrowley | CVE-2022-0990 |
| V 0.6.22 | Stored cross-site scripting (XSS) in the comments section is prevented better (switched from lxml to bleach for sanitizing strings) | |
| V 0.6.23 | Cookies are no longer stored for OPDS basic authentication and proxy authentication | |
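Several of the fixes above close stored-XSS holes by sanitizing user-supplied HTML against a tag allow-list before rendering it. As a rough, self-contained illustration of that approach (this is a stdlib sketch, not Calibre-Web's actual sanitizer, which relies on bleach or nh3):

```python
from html import escape
from html.parser import HTMLParser


class AllowListSanitizer(HTMLParser):
    """Escape everything except a small allow-list of harmless tags."""

    ALLOWED = {"p", "b", "i", "em", "strong", "br"}

    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag in self.ALLOWED:
            # Drop all attributes so on*-handlers and javascript: URLs cannot survive
            self.out.append("<%s>" % tag)
        else:
            self.out.append(escape(self.get_starttag_text()))

    def handle_endtag(self, tag):
        text = "</%s>" % tag
        self.out.append(text if tag in self.ALLOWED else escape(text))

    def handle_data(self, data):
        self.out.append(escape(data))


def clean(unsafe_html):
    sanitizer = AllowListSanitizer()
    sanitizer.feed(unsafe_html)
    sanitizer.close()
    return "".join(sanitizer.out)


print(clean('<p onclick="alert(1)">hi</p><script>alert(1)</script>'))
# → <p>hi</p>&lt;script&gt;alert(1)&lt;/script&gt;
```

The key design point, shared with the real fix: disallowed markup is escaped rather than trusted, and attributes are stripped even on allowed tags.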
## Statement regarding Log4j (CVE-2021-44228 and related)
@@ -1,4 +1,3 @@
[python: **.py]

# has to be executed with jinja2 >=2.9 to have autoescape enabled automatically
[jinja2: **/templates/**.*ml]
extensions=jinja2.ext.autoescape,jinja2.ext.with_
cps.py (18 changes)

@@ -21,29 +21,13 @@ import os
import sys


# Add local path to sys.path, so we can import cps
# Add local path to sys.path so we can import cps
path = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, path)

from cps.main import main


def hide_console_windows():
    import ctypes

    kernel32 = ctypes.WinDLL('kernel32')
    user32 = ctypes.WinDLL('user32')

    SW_HIDE = 0

    hWnd = kernel32.GetConsoleWindow()
    if hWnd:
        user32.ShowWindow(hWnd, SW_HIDE)


if __name__ == '__main__':
    if os.name == "nt":
        hide_console_windows()
    main()
@@ -20,18 +20,16 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from .cw_login import LoginManager

from flask_login import LoginManager
from flask import session


class MyLoginManager(LoginManager):
    def _session_protection_failed(self):
        sess = session._get_current_object()
        _session = session._get_current_object()
        ident = self._session_identifier_generator()
        if (sess and not (len(sess) == 1
                          and sess.get('csrf_token', None))) and ident != sess.get('_id', None):
        if (_session and not (len(_session) == 1
                              and _session.get('csrf_token', None))) and ident != _session.get('_id', None):
            return super()._session_protection_failed()
        return False
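The override above only escalates to Flask-Login's session-protection failure path when the session carries more than a bare CSRF token and its identifier no longer matches. Stripped of the Flask machinery, the decision logic can be sketched like this (hypothetical helper taking the session as a plain dict and the identifier precomputed):

```python
def session_protection_failed(sess, ident):
    """Trigger session protection only for a 'real' session whose identifier changed."""
    # A session holding nothing but a csrf_token is not worth protecting
    meaningful = bool(sess) and not (len(sess) == 1 and sess.get('csrf_token'))
    return bool(meaningful and ident != sess.get('_id'))


print(session_protection_failed({'csrf_token': 'abc'}, 'id-1'))          # → False
print(session_protection_failed({'_id': 'id-0', 'user_id': 7}, 'id-1'))  # → True
```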
cps/__init__.py (131 changes)

@@ -31,19 +31,16 @@ from flask_principal import Principal

from . import logger
from .cli import CliParameter
from .constants import CONFIG_DIR
from .reverseproxy import ReverseProxied
from .server import WebServer
from .dep_check import dependency_check
from .updater import Updater
from .babel import babel
from . import config_sql
from . import cache_buster
from . import ub, db

try:
    from flask_limiter import Limiter
    limiter_present = True
except ImportError:
    limiter_present = False
try:
    from flask_wtf.csrf import CSRFProtect
    wtf_present = True
@@ -54,31 +51,23 @@ except ImportError:
mimetypes.init()
mimetypes.add_type('application/xhtml+xml', '.xhtml')
mimetypes.add_type('application/epub+zip', '.epub')
mimetypes.add_type('application/epub+zip', '.kepub')
mimetypes.add_type('application/fb2+zip', '.fb2')
mimetypes.add_type('application/x-mobipocket-ebook', '.mobi')
mimetypes.add_type('application/octet-stream', '.prc')
mimetypes.add_type('application/x-mobipocket-ebook', '.azw')
mimetypes.add_type('application/x-mobipocket-ebook', '.azw3')
mimetypes.add_type('application/x-mobipocket-ebook', '.prc')
mimetypes.add_type('application/vnd.amazon.ebook', '.azw')
mimetypes.add_type('application/x-mobi8-ebook', '.azw3')
mimetypes.add_type('application/x-cbr', '.cbr')
mimetypes.add_type('application/x-cbz', '.cbz')
mimetypes.add_type('application/x-tar', '.cbt')
mimetypes.add_type('application/x-7z-compressed', '.cb7')
mimetypes.add_type('image/vnd.djvu', '.djv')
mimetypes.add_type('application/x-cbt', '.cbt')
mimetypes.add_type('image/vnd.djvu', '.djvu')
mimetypes.add_type('application/mpeg', '.mpeg')
mimetypes.add_type('audio/mpeg', '.mp3')
mimetypes.add_type('audio/x-m4a', '.m4a')
mimetypes.add_type('audio/x-m4a', '.m4b')
mimetypes.add_type('audio/x-hx-aac-adts', '.aac')
mimetypes.add_type('audio/vnd.dolby.dd-raw', '.ac3')
mimetypes.add_type('video/x-ms-asf', '.asf')
mimetypes.add_type('audio/ogg', '.ogg')
mimetypes.add_type('application/mpeg', '.mp3')
mimetypes.add_type('application/mp4', '.m4a')
mimetypes.add_type('application/mp4', '.m4b')
mimetypes.add_type('application/ogg', '.ogg')
mimetypes.add_type('application/ogg', '.oga')
mimetypes.add_type('text/css', '.css')
mimetypes.add_type('application/x-ms-reader', '.lit')
mimetypes.add_type('text/javascript', '.js')
mimetypes.add_type('text/rtf', '.rtf')
mimetypes.add_type('text/javascript; charset=UTF-8', '.js')

log = logger.create()

@@ -86,52 +75,53 @@ app = Flask(__name__)
app.config.update(
    SESSION_COOKIE_HTTPONLY=True,
    SESSION_COOKIE_SAMESITE='Lax',
    REMEMBER_COOKIE_SAMESITE='Strict',
    WTF_CSRF_SSL_STRICT=False,
    SESSION_COOKIE_NAME=os.environ.get('COOKIE_PREFIX', "") + "session",
    REMEMBER_COOKIE_NAME=os.environ.get('COOKIE_PREFIX', "") + "remember_token"
    REMEMBER_COOKIE_SAMESITE='Lax',  # will be available in flask-login 0.5.1 earliest
    WTF_CSRF_SSL_STRICT=False
)

lm = MyLoginManager()

cli_param = CliParameter()
config = config_sql._ConfigSQL()

config = config_sql.ConfigSQL()
cli_param = CliParameter()

if wtf_present:
    csrf = CSRFProtect()
else:
    csrf = None

calibre_db = db.CalibreDB(app)
calibre_db = db.CalibreDB()

web_server = WebServer()

updater_thread = Updater()

if limiter_present:
    limiter = Limiter(key_func=True, headers_enabled=True, auto_check=False, swallow_errors=False)
else:
    limiter = None


def create_app():
    lm.login_view = 'web.login'
    lm.anonymous_user = ub.Anonymous
    lm.session_protection = 'strong'

    if csrf:
        csrf.init_app(app)

    cli_param.init()

    ub.init_db(cli_param.settings_path)
    ub.init_db(cli_param.settings_path, cli_param.user_credentials)

    # pylint: disable=no-member
    encrypt_key, error = config_sql.get_encryption_key(os.path.dirname(cli_param.settings_path))
    config_sql.load_configuration(config, ub.session, cli_param)

    config_sql.load_configuration(ub.session, encrypt_key)
    config.init_config(ub.session, encrypt_key, cli_param)
    db.CalibreDB.update_config(config)
    db.CalibreDB.setup_db(config.config_calibre_dir, cli_param.settings_path)
    calibre_db.init_db()

    if error:
        log.error(error)

    ub.password_change(cli_param.user_credentials)
    updater_thread.init_updater(config, web_server)
    # Perform dry run of updater and exit afterwards
    if cli_param.dry_run:
        updater_thread.dry_run()
        sys.exit(0)
    updater_thread.start()

    if sys.version_info < (3, 0):
        log.info(
@@ -142,30 +132,15 @@ def create_app():
            'please update your installation to Python3 ***')
        web_server.stop(True)
        sys.exit(5)

    lm.login_view = 'web.login'
    lm.anonymous_user = ub.Anonymous
    lm.session_protection = 'strong' if config.config_session == 1 else "basic"

    db.CalibreDB.update_config(config, config.config_calibre_dir, cli_param.settings_path)

    updater_thread.init_updater(config, web_server)
    # Perform dry run of updater and exit afterward
    if cli_param.dry_run:
        updater_thread.dry_run()
        sys.exit(0)
    updater_thread.start()
    requirements = dependency_check()
    for res in requirements:
        if res['found'] == "not installed":
            message = ('Cannot import {name} module, it is needed to run calibre-web, '
                       'please install it using "pip install {name}"').format(name=res["name"])
            log.info(message)
            print("*** " + message + " ***")
            web_server.stop(True)
            sys.exit(8)
    for res in requirements + dependency_check(True):
        log.info('*** "{}" version does not meet the requirements. '
    if not wtf_present:
        log.info('*** "flask-WTF" is needed for calibre-web to run. '
                 'Please install it using pip: "pip install flask-WTF" ***')
        print('*** "flask-WTF" is needed for calibre-web to run. '
              'Please install it using pip: "pip install flask-WTF" ***')
        web_server.stop(True)
        sys.exit(7)
    for res in dependency_check() + dependency_check(True):
        log.info('*** "{}" version does not fit the requirements. '
                 'Should: {}, Found: {}, please consider installing required version ***'
                 .format(res['name'],
                         res['target'],
@@ -175,17 +150,14 @@ def create_app():
    if os.environ.get('FLASK_DEBUG'):
        cache_buster.init_cache_busting(app)
    log.info('Starting Calibre Web...')

    Principal(app)
    lm.init_app(app)
    app.secret_key = os.getenv('SECRET_KEY', config_sql.get_flask_session_key(ub.session))

    web_server.init_app(app, config)
    from .cw_babel import babel, get_locale
    if hasattr(babel, "localeselector"):
        babel.init_app(app)
        babel.localeselector(get_locale)
    else:
        babel.init_app(app, locale_selector=get_locale)

    babel.init_app(app)

    from . import services

@@ -193,22 +165,9 @@ def create_app():
        services.ldap.init_app(app, config)
    if services.goodreads_support:
        services.goodreads_support.connect(config.config_goodreads_api_key,
                                           config.config_goodreads_api_secret,
                                           config.config_use_goodreads)
    config.store_calibre_uuid(calibre_db, db.Library_Id)
    # Configure rate limiter
    # https://limits.readthedocs.io/en/stable/storage.html
    app.config.update(RATELIMIT_ENABLED=config.config_ratelimiter)
    if config.config_limiter_uri != "" and not cli_param.memory_backend:
        app.config.update(RATELIMIT_STORAGE_URI=config.config_limiter_uri)
        if config.config_limiter_options != "":
            app.config.update(RATELIMIT_STORAGE_OPTIONS=config.config_limiter_options)
    try:
        limiter.init_app(app)
    except Exception as e:
        log.error('Wrong Flask Limiter configuration, falling back to default: {}'.format(e))
        app.config.update(RATELIMIT_STORAGE_URI=None)
        limiter.init_app(app)

    # Register scheduled tasks
    from .schedule import register_scheduled_tasks, register_startup_tasks
    register_scheduled_tasks(config.schedule_reconnect)
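The long run of `mimetypes.add_type()` calls above pre-registers e-book and media extensions so that responses carry correct Content-Type headers. A minimal sketch of how such registrations behave, using two extensions from the list above (the file names are illustrative):

```python
import mimetypes

mimetypes.init()
# Register e-book types the stdlib table may not know by default
mimetypes.add_type('application/epub+zip', '.epub')
mimetypes.add_type('application/x-mobipocket-ebook', '.mobi')

print(mimetypes.guess_type('library/book.epub'))  # → ('application/epub+zip', None)
print(mimetypes.guess_type('library/book.mobi'))  # → ('application/x-mobipocket-ebook', None)
```

`add_type` mutates a process-global table, which is why the registrations happen once at import time.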
cps/about.py (17 changes)

@@ -23,15 +23,15 @@
import sys
import platform
import sqlite3
from importlib.metadata import metadata
from collections import OrderedDict

import flask
import flask_login
import jinja2
from flask_babel import gettext as _

from . import db, calibre_db, converter, uploader, constants, dep_check
from .render_template import render_title_template
from .usermanagement import user_login_required


about = flask.Blueprint('about', __name__)
@@ -41,18 +41,17 @@ req = dep_check.load_dependencies(False)
opt = dep_check.load_dependencies(True)
for i in (req + opt):
    modules[i[1]] = i[0]
modules['Jinja2'] = metadata("jinja2")["Version"]
if sys.version_info < (3, 12):
    modules['pySqlite'] = sqlite3.version
modules['Jinja2'] = jinja2.__version__
modules['pySqlite'] = sqlite3.version
modules['SQLite'] = sqlite3.sqlite_version
sorted_modules = OrderedDict(sorted(modules.items(), key=lambda x: x[0].casefold()))


def collect_stats():
    if constants.NIGHTLY_VERSION[0] == "$Format:%H$":
        calibre_web_version = constants.STABLE_VERSION.replace("b", " Beta")
        calibre_web_version = constants.STABLE_VERSION['version']
    else:
        calibre_web_version = (constants.STABLE_VERSION.replace("b", " Beta") + ' - '
        calibre_web_version = (constants.STABLE_VERSION['version'] + ' - '
                               + constants.NIGHTLY_VERSION[0].replace('%', '%%') + ' - '
                               + constants.NIGHTLY_VERSION[1].replace('%', '%%'))

@@ -75,11 +74,11 @@ def collect_stats():


@about.route("/stats")
@user_login_required
@flask_login.login_required
def stats():
    counter = calibre_db.session.query(db.Books).count()
    authors = calibre_db.session.query(db.Authors).count()
    categories = calibre_db.session.query(db.Tags).count()
    series = calibre_db.session.query(db.Series).count()
    return render_title_template('stats.html', bookcounter=counter, authorcounter=authors, versions=collect_stats(),
                                 categorycounter=categories, seriecounter=series, title=_("Statistics"), page="stat")
                                 categorycounter=categories, seriecounter=series, title=_(u"Statistics"), page="stat")
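The `sorted_modules` line above orders module names case-insensitively via `str.casefold`, so lowercase and capitalized names interleave alphabetically. In isolation (module names and versions here are illustrative):

```python
from collections import OrderedDict

modules = {'Flask': '2.0', 'babel': '2.9', 'Jinja2': '3.1'}
sorted_modules = OrderedDict(sorted(modules.items(), key=lambda x: x[0].casefold()))
print(list(sorted_modules))  # → ['babel', 'Flask', 'Jinja2']
```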
cps/admin.py (603 changes, mode changed: Normal file → Executable file)
cps/audio.py (147 changes)

@@ -1,147 +0,0 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2024 Ozzieisaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import mutagen
import base64
from . import cover, logger

from cps.constants import BookMeta

log = logger.create()


def get_audio_file_info(tmp_file_path, original_file_extension, original_file_name, no_cover_processing):
    tmp_cover_name = None
    audio_file = mutagen.File(tmp_file_path)
    comments = None
    if original_file_extension in [".mp3", ".wav", ".aiff"]:
        cover_data = list()
        for key, val in audio_file.tags.items():
            if key.startswith("APIC:"):
                cover_data.append(val)
            if key.startswith("COMM:"):
                comments = val.text[0]
        title = audio_file.tags.get('TIT2').text[0] if "TIT2" in audio_file.tags else None
        author = audio_file.tags.get('TPE1').text[0] if "TPE1" in audio_file.tags else None
        if author is None:
            author = audio_file.tags.get('TPE2').text[0] if "TPE2" in audio_file.tags else None
        tags = audio_file.tags.get('TCON').text[0] if "TCON" in audio_file.tags else None  # Genre
        series = audio_file.tags.get('TALB').text[0] if "TALB" in audio_file.tags else None  # Album
        series_id = audio_file.tags.get('TRCK').text[0] if "TRCK" in audio_file.tags else None  # track no.
        publisher = audio_file.tags.get('TPUB').text[0] if "TPUB" in audio_file.tags else None
        pubdate = str(audio_file.tags.get('TDRL').text[0]) if "TDRL" in audio_file.tags else None
        if not pubdate:
            pubdate = str(audio_file.tags.get('TDRC').text[0]) if "TDRC" in audio_file.tags else None
        if not pubdate:
            pubdate = str(audio_file.tags.get('TDOR').text[0]) if "TDOR" in audio_file.tags else None
        if cover_data and not no_cover_processing:
            cover_info = cover_data[0]
            for dat in cover_data:
                if dat.type == mutagen.id3.PictureType.COVER_FRONT:
                    cover_info = dat
                    break
            tmp_cover_name = cover.cover_processing(tmp_file_path, cover_info.data, "." + cover_info.mime[-3:])
    elif original_file_extension in [".ogg", ".flac", ".opus", ".ogv"]:
        title = audio_file.tags.get('TITLE')[0] if "TITLE" in audio_file else None
        author = audio_file.tags.get('ARTIST')[0] if "ARTIST" in audio_file else None
        comments = audio_file.tags.get('COMMENTS')[0] if "COMMENTS" in audio_file else None
        tags = audio_file.tags.get('GENRE')[0] if "GENRE" in audio_file else None  # Genre
        series = audio_file.tags.get('ALBUM')[0] if "ALBUM" in audio_file else None
        series_id = audio_file.tags.get('TRACKNUMBER')[0] if "TRACKNUMBER" in audio_file else None
        publisher = audio_file.tags.get('LABEL')[0] if "LABEL" in audio_file else None
        pubdate = audio_file.tags.get('DATE')[0] if "DATE" in audio_file else None
        cover_data = audio_file.tags.get('METADATA_BLOCK_PICTURE')
        if not no_cover_processing:
            if cover_data:
                cover_info = mutagen.flac.Picture(base64.b64decode(cover_data[0]))
                tmp_cover_name = cover.cover_processing(tmp_file_path, cover_info.data, "." + cover_info.mime[-3:])
            if hasattr(audio_file, "pictures"):
                cover_info = audio_file.pictures[0]
                for dat in audio_file.pictures:
                    if dat.type == mutagen.id3.PictureType.COVER_FRONT:
                        cover_info = dat
                        break
                tmp_cover_name = cover.cover_processing(tmp_file_path, cover_info.data, "." + cover_info.mime[-3:])
    elif original_file_extension in [".aac"]:
        title = audio_file.tags.get('Title').value if "Title" in audio_file else None
        author = audio_file.tags.get('Artist').value if "Artist" in audio_file else None
        comments = audio_file.tags.get('Comment').value if "Comment" in audio_file else None
        tags = audio_file.tags.get('Genre').value if "Genre" in audio_file else None
        series = audio_file.tags.get('Album').value if "Album" in audio_file else None
        series_id = audio_file.tags.get('Track').value if "Track" in audio_file else None
        publisher = audio_file.tags.get('Label').value if "Label" in audio_file else None
        pubdate = audio_file.tags.get('Year').value if "Year" in audio_file else None
        cover_data = audio_file.tags['Cover Art (Front)']
        if cover_data and not no_cover_processing:
            tmp_cover_name = tmp_file_path + '.jpg'
            with open(tmp_cover_name, "wb") as cover_file:
                cover_file.write(cover_data.value.split(b"\x00", 1)[1])
    elif original_file_extension in [".asf"]:
        title = audio_file.tags.get('Title')[0].value if "Title" in audio_file else None
        author = audio_file.tags.get('Artist')[0].value if "Artist" in audio_file else None
        comments = audio_file.tags.get('Comments')[0].value if "Comments" in audio_file else None
        tags = audio_file.tags.get('Genre')[0].value if "Genre" in audio_file else None
        series = audio_file.tags.get('Album')[0].value if "Album" in audio_file else None
        series_id = audio_file.tags.get('Track')[0].value if "Track" in audio_file else None
        publisher = audio_file.tags.get('Label')[0].value if "Label" in audio_file else None
        pubdate = audio_file.tags.get('Year')[0].value if "Year" in audio_file else None
        cover_data = audio_file.tags.get('WM/Picture', None)
        if cover_data and not no_cover_processing:
            tmp_cover_name = tmp_file_path + '.jpg'
            with open(tmp_cover_name, "wb") as cover_file:
                cover_file.write(cover_data[0].value)
    elif original_file_extension in [".mp4", ".m4a", ".m4b"]:
        title = audio_file.tags.get('©nam')[0] if "©nam" in audio_file.tags else None
        author = audio_file.tags.get('©ART')[0] if "©ART" in audio_file.tags else None
        comments = audio_file.tags.get('©cmt')[0] if "©cmt" in audio_file.tags else None
        tags = audio_file.tags.get('©gen')[0] if "©gen" in audio_file.tags else None
        series = audio_file.tags.get('©alb')[0] if "©alb" in audio_file.tags else None
        series_id = str(audio_file.tags.get('trkn')[0][0]) if "trkn" in audio_file.tags else None
        publisher = ""
        pubdate = audio_file.tags.get('©day')[0] if "©day" in audio_file.tags else None
        cover_data = audio_file.tags.get('covr', None)
        if cover_data and not no_cover_processing:
            cover_type = None
            for c in cover_data:
                if c.imageformat == mutagen.mp4.AtomDataType.JPEG:
                    cover_type = ".jpg"
                    cover_bin = c
                    break
                elif c.imageformat == mutagen.mp4.AtomDataType.PNG:
                    cover_type = ".png"
                    cover_bin = c
                    break
            if cover_type:
                tmp_cover_name = cover.cover_processing(tmp_file_path, cover_bin, cover_type)
            else:
                log.error("Unknown covertype in file {}".format(original_file_name))

    return BookMeta(
        file_path=tmp_file_path,
        extension=original_file_extension,
        title=title or original_file_name,
        author="Unknown" if author is None else author,
        cover=tmp_cover_name,
        description="" if comments is None else comments,
        tags="" if tags is None else tags,
        series="" if series is None else series,
        series_id="1" if series_id is None else series_id.split("/")[0],
        languages="",
        publisher="" if publisher is None else publisher,
        pubdate="" if pubdate is None else pubdate,
        identifiers=[],
    )
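Each branch above repeats one pattern per container format: look up a tag key, take the first value, fall back to None. A format-agnostic helper capturing that pattern might look like this (hypothetical helper and sample data, not part of Calibre-Web):

```python
def first_tag(tags, key, default=None):
    """Return the first value stored under key, or default if the key is absent or empty."""
    values = tags.get(key)
    if not values:
        return default
    return values[0]


# Vorbis-comment style tags as they come out of an .ogg/.flac file (illustrative data)
tags = {'TITLE': ['A Book'], 'ARTIST': ['Jane Doe'], 'TRACKNUMBER': ['3/12']}
title = first_tag(tags, 'TITLE')
# Track numbers like "3/12" keep only the track part, defaulting to "1"
series_id = (first_tag(tags, 'TRACKNUMBER') or "1").split("/")[0]
print(title, series_id)  # → A Book 3
```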
@@ -1,8 +1,7 @@
from babel import negotiate_locale
from flask_babel import Babel, Locale
from babel.core import UnknownLocaleError
from flask import request
from .cw_login import current_user
from flask import request, g

from . import logger

@@ -11,12 +10,13 @@ log = logger.create()
babel = Babel()


@babel.localeselector
def get_locale():
    # if a user is logged in, use the locale from the user settings
    if current_user is not None and hasattr(current_user, "locale"):
        # if the account is the guest account bypass the config lang settings
        if current_user.name != 'Guest':
            return current_user.locale
    user = getattr(g, 'user', None)
    if user is not None and hasattr(user, "locale"):
        if user.name != 'Guest':  # if the account is the guest account bypass the config lang settings
            return user.locale

    preferred = list()
    if request.accept_languages:
@@ -34,7 +34,7 @@ def get_user_locale_language(user_language):


def get_available_locale():
    return sorted(babel.list_translations(), key=lambda x: x.display_name.lower())
    return [Locale('en')] + babel.list_translations()


def get_available_translations():
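`get_locale()` above falls back from the user's stored locale to negotiating against the request's Accept-Language list. The negotiation step, stripped of Flask and Babel, reduces to a case-insensitive first-match search (hypothetical helper; the real code uses `babel.negotiate_locale`):

```python
def negotiate(preferred, available, default='en'):
    """Return the first preferred locale that is also available, else the default."""
    available_lower = {loc.lower() for loc in available}
    for loc in preferred:
        if loc.lower() in available_lower:
            return loc
    return default


print(negotiate(['de', 'fr'], ['en', 'FR']))  # → fr
print(negotiate(['pt'], ['en', 'de']))        # → en
```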
cps/basic.py (90 changes)

@@ -1,90 +0,0 @@
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 OzzieIsaacs, cervinko, jkrehm, bodybybuddha, ok11,
#                         andy29485, idalin, Kyosfonica, wuqi, Kennyl, lemmsh,
#                         falgh1, grunjol, csitko, ytils, xybydy, trasba, vrabe,
#                         ruben-herold, marblepebble, JackED42, SiphonSquirrel,
#                         apetresc, nanu-c, mutschler, carderne
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.


from cps.pagination import Pagination
from flask import Blueprint
from flask_babel import gettext as _
from flask_babel import get_locale
from flask import request, redirect, url_for

from . import logger, isoLanguages
from . import db, config
from . import calibre_db
from .usermanagement import login_required_if_no_ano
from .render_template import render_title_template
from .web import get_sort_function

try:
    from natsort import natsorted as sort
except ImportError:
    sort = sorted  # Just use regular sort then, may cause issues with badly named pages in cbz/cbr files

basic = Blueprint('basic', __name__)

log = logger.create()


@basic.route("/basic", methods=["GET"])
@login_required_if_no_ano
def index():
    term = request.args.get("query", "")  # default to showing all books
    limit = 15
    page = int(request.args.get("page") or 1)
    off = (page - 1) * limit
    order = get_sort_function("stored", "search")
    join = db.books_series_link, db.Books.id == db.books_series_link.c.book, db.Series
    entries, result_count, pagination = calibre_db.get_search_results(term,
                                                                     config,
                                                                     off,
                                                                     order,
                                                                     limit,
                                                                     *join)
    return render_title_template('basic_index.html',
                                 searchterm=term,
                                 pagination=pagination,
                                 query=term,
                                 adv_searchterm=term,
                                 entries=entries,
                                 result_count=result_count,
                                 title=_("Search"),
                                 page="search",
                                 order=order[1])


@basic.route("/basic_book/<int:book_id>")
@login_required_if_no_ano
def show_book(book_id):
    entries = calibre_db.get_book_read_archived(book_id, config.config_read_column, allow_show_archived=True)
    if entries:
        entry = entries[0]
        for lang_index in range(0, len(entry.languages)):
            entry.languages[lang_index].language_name = isoLanguages.get_language_name(
                get_locale(), entry.languages[lang_index].lang_code)
        entry.ordered_authors = calibre_db.order_authors([entry])

        return render_title_template('basic_detail.html',
                                     entry=entry,
                                     is_xhr=request.headers.get('X-Requested-With') == 'XMLHttpRequest',
                                     title=entry.title,
                                     page="book")
    else:
        log.debug("Selected book is unavailable. File does not exist or is not accessible")
        return redirect(url_for("basic.index"))
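The `/basic` index above pages its results with a fixed limit of 15 and an offset derived from the 1-indexed page number. That arithmetic in isolation:

```python
def page_window(page, limit=15):
    """Return (offset, limit) for a 1-indexed page number, as used by the /basic view."""
    page = max(1, int(page or 1))  # missing or falsy page parameter means page 1
    return (page - 1) * limit, limit


print(page_window(1))  # → (0, 15)
print(page_window(3))  # → (30, 15)
```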
@@ -1,48 +0,0 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from . import logger
from lxml.etree import ParserError

log = logger.create()

try:
    # at least bleach 6.0 is needed -> incompatible change from list arguments to set arguments
    from bleach import clean as clean_html
    from bleach.sanitizer import ALLOWED_TAGS
    bleach = True
except ImportError:
    from nh3 import clean as clean_html
    bleach = False


def clean_string(unsafe_text, book_id=0):
    try:
        if bleach:
            allowed_tags = list(ALLOWED_TAGS)
            allowed_tags.extend(["p", "span", "div", "pre", "br", "h1", "h2", "h3", "h4", "h5", "h6"])
            safe_text = clean_html(unsafe_text, tags=set(allowed_tags))
        else:
            safe_text = clean_html(unsafe_text)
    except ParserError as e:
        log.error("Comments of book {} are corrupted: {}".format(book_id, e))
        safe_text = ""
    except TypeError as e:
        log.error("Comments can't be parsed, maybe 'lxml' is too new, try installing 'bleach': {}".format(e))
        safe_text = ""
    return safe_text
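The file above picks a sanitizer backend at import time: prefer bleach, fall back to nh3, and fail safe (empty string) on parse errors. The try/except-ImportError shape of that choice can be sketched with stdlib pieces only (`fancy_clean` is a stand-in name for the optional dependency, not a real package; `html.escape` plays the fallback role here):

```python
try:
    from fancy_clean import clean as clean_html  # hypothetical optional dependency
    have_optional = True
except ImportError:
    from html import escape as clean_html  # stdlib fallback: escape everything
    have_optional = False


def clean_string(unsafe_text, book_id=0):
    """Sanitize with whichever backend is available; fail safe to an empty string."""
    try:
        return clean_html(unsafe_text)
    except (TypeError, ValueError) as e:
        print("Comments of book {} are corrupted: {}".format(book_id, e))
        return ""


print(have_optional, clean_string('<b>x</b>'))  # → False &lt;b&gt;x&lt;/b&gt;
```

Binding both backends to the same name (`clean_html`) keeps the call sites identical regardless of which library is installed.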
42
cps/cli.py
@ -29,25 +29,12 @@ from .constants import DEFAULT_SETTINGS_FILE, DEFAULT_GDRIVE_FILE

def version_info():
if _NIGHTLY_VERSION[1].startswith('$Format'):
return "Calibre-Web version: %s - unknown git-clone" % _STABLE_VERSION.replace("b", " Beta")
return "Calibre-Web version: %s -%s" % (_STABLE_VERSION.replace("b", " Beta"), _NIGHTLY_VERSION[1])
return "Calibre-Web version: %s - unknown git-clone" % _STABLE_VERSION['version']
return "Calibre-Web version: %s -%s" % (_STABLE_VERSION['version'], _NIGHTLY_VERSION[1])


class CliParameter(object):

def __init__(self):
self.user_credentials = None
self.ip_address = None
self.allow_localhost = None
self.reconnect_enable = None
self.memory_backend = None
self.dry_run = None
self.certfilepath = None
self.keyfilepath = None
self.gd_path = None
self.settings_path = None
self.logpath = None

def init(self):
self.arg_parser()

@ -57,28 +44,22 @@ class CliParameter(object):
prog='cps.py')
parser.add_argument('-p', metavar='path', help='path and name to settings db, e.g. /opt/cw.db')
parser.add_argument('-g', metavar='path', help='path and name to gdrive db, e.g. /opt/gd.db')
parser.add_argument('-c', metavar='path', help='path and name to SSL certfile, '
'e.g. /opt/test.cert, works only in combination with keyfile')
parser.add_argument('-c', metavar='path', help='path and name to SSL certfile, e.g. /opt/test.cert, '
'works only in combination with keyfile')
parser.add_argument('-k', metavar='path', help='path and name to SSL keyfile, e.g. /opt/test.key, '
'works only in combination with certfile')
parser.add_argument('-o', metavar='path', help='path and name Calibre-Web logfile')
parser.add_argument('-v', '--version', action='version', help='Shows version number '
'and exits Calibre-Web',
parser.add_argument('-v', '--version', action='version', help='Shows version number and exits Calibre-Web',
version=version_info())
parser.add_argument('-i', metavar='ip-address', help='Server IP-Address to listen')
parser.add_argument('-m', action='store_true',
help='Use Memory-backend as limiter backend, use this parameter '
'in case of miss configured backend')
parser.add_argument('-s', metavar='user:pass',
help='Sets specific username to new password and exits Calibre-Web')
parser.add_argument('-f', action='store_true', help='Flag is depreciated and will be removed in next version')
parser.add_argument('-l', action='store_true', help='Allow loading covers from localhost')
parser.add_argument('-d', action='store_true', help='Dry run of updater to check file permissions '
'in advance and exits Calibre-Web')
parser.add_argument('-r', action='store_true', help='Enable public database reconnect '
'route under /reconnect')
parser.add_argument('-d', action='store_true', help='Dry run of updater to check file permissions in advance '
'and exits Calibre-Web')
parser.add_argument('-r', action='store_true', help='Enable public database reconnect route under /reconnect')
args = parser.parse_args()

self.logpath = args.o or ""
self.settings_path = args.p or os.path.join(_CONFIG_DIR, DEFAULT_SETTINGS_FILE)
self.gd_path = args.g or os.path.join(_CONFIG_DIR, DEFAULT_GDRIVE_FILE)

@ -115,8 +96,6 @@ class CliParameter(object):
if args.k == "":
self.keyfilepath = ""

# overwrite limiter backend
self.memory_backend = args.m or None
# dry run updater
self.dry_run = args.d or None
# enable reconnect endpoint for docker database reconnect
@ -146,3 +125,6 @@ class CliParameter(object):
if self.user_credentials and ":" not in self.user_credentials:
print("No valid 'username:password' format")
sys.exit(3)

if args.f:
print("Warning: -f flag is depreciated and will be removed in next version")
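The `cps/cli.py` hunks above wire the command line through `argparse` and reject a `-s` value that lacks the `user:pass` separator. A minimal standalone sketch of that pattern (a subset of the real options; names and the helper `validate_credentials` are illustrative, not part of the diff):

```python
import argparse

def build_parser():
    # mirrors a few of the options visible in the cps/cli.py hunk above
    parser = argparse.ArgumentParser(prog='cps.py')
    parser.add_argument('-p', metavar='path', help='path and name to settings db, e.g. /opt/cw.db')
    parser.add_argument('-s', metavar='user:pass',
                        help='Sets specific username to new password and exits Calibre-Web')
    parser.add_argument('-d', action='store_true',
                        help='Dry run of updater to check file permissions in advance')
    return parser

def validate_credentials(user_credentials):
    # the real code prints an error and calls sys.exit(3) when this is False
    return bool(user_credentials) and ':' in user_credentials

args = build_parser().parse_args(['-s', 'admin:secret', '-d'])
print(validate_credentials(args.s), args.d)  # → True True
```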
81
cps/comic.py
@ -36,12 +36,6 @@ try:
from comicapi import __version__ as comic_version
except ImportError:
comic_version = ''
try:
from comicapi.comicarchive import load_archive_plugins
import comicapi.utils
comicapi.utils.add_rar_paths()
except ImportError:
load_archive_plugins = None
except (ImportError, LookupError) as e:
log.debug('Cannot import comicapi, extracting comic metadata will not work: %s', e)
import zipfile
@ -52,12 +46,6 @@ except (ImportError, LookupError) as e:
except (ImportError, SyntaxError) as e:
log.debug('Cannot import rarfile, extracting cover files from rar files will not work: %s', e)
use_rarfile = False
try:
import py7zr
use_7zip = True
except (ImportError, SyntaxError) as e:
log.debug('Cannot import py7zr, extracting cover files from CB7 files will not work: %s', e)
use_7zip = False
use_comic_meta = False


@ -93,79 +81,50 @@ def _extract_cover_from_archive(original_file_extension, tmp_file_name, rar_exec
cover_data = cf.read(name)
break
except Exception as ex:
log.error('Rarfile failed with error: {}'.format(ex))
elif original_file_extension.upper() == '.CB7' and use_7zip:
cf = py7zr.SevenZipFile(tmp_file_name)
for name in cf.getnames():
ext = os.path.splitext(name)
if len(ext) > 1:
extension = ext[1].lower()
if extension in cover.COVER_EXTENSIONS:
try:
cover_data = cf.read([name])[name].read()
except (py7zr.Bad7zFile, OSError) as ex:
log.error('7Zip file failed with error: {}'.format(ex))
break
log.debug('Rarfile failed with error: {}'.format(ex))
return cover_data, extension


def _extract_cover(tmp_file_path, original_file_extension, rar_executable):
def _extract_cover(tmp_file_name, original_file_extension, rar_executable):
cover_data = extension = None
if use_comic_meta:
try:
archive = ComicArchive(tmp_file_path, rar_exe_path=rar_executable)
except TypeError:
archive = ComicArchive(tmp_file_path)
name_list = archive.getPageNameList if hasattr(archive, "getPageNameList") else archive.get_page_name_list
for index, name in enumerate(name_list()):
archive = ComicArchive(tmp_file_name, rar_exe_path=rar_executable)
for index, name in enumerate(archive.getPageNameList()):
ext = os.path.splitext(name)
if len(ext) > 1:
extension = ext[1].lower()
if extension in cover.COVER_EXTENSIONS:
get_page = archive.getPage if hasattr(archive, "getPageNameList") else archive.get_page
cover_data = get_page(index)
cover_data = archive.getPage(index)
break
else:
cover_data, extension = _extract_cover_from_archive(original_file_extension, tmp_file_path, rar_executable)
return cover.cover_processing(tmp_file_path, cover_data, extension)
cover_data, extension = _extract_cover_from_archive(original_file_extension, tmp_file_name, rar_executable)
return cover.cover_processing(tmp_file_name, cover_data, extension)


def get_comic_info(tmp_file_path, original_file_name, original_file_extension, rar_executable, no_cover_processing):
def get_comic_info(tmp_file_path, original_file_name, original_file_extension, rar_executable):
if use_comic_meta:
try:
archive = ComicArchive(tmp_file_path, rar_exe_path=rar_executable)
except TypeError:
load_archive_plugins(force=True, rar=rar_executable)
archive = ComicArchive(tmp_file_path)
if hasattr(archive, "seemsToBeAComicArchive"):
seems_archive = archive.seemsToBeAComicArchive
else:
seems_archive = archive.seems_to_be_a_comic_archive
if seems_archive():
has_metadata = archive.hasMetadata if hasattr(archive, "hasMetadata") else archive.has_metadata
if has_metadata(MetaDataStyle.CIX):
archive = ComicArchive(tmp_file_path, rar_exe_path=rar_executable)
if archive.seemsToBeAComicArchive():
if archive.hasMetadata(MetaDataStyle.CIX):
style = MetaDataStyle.CIX
elif has_metadata(MetaDataStyle.CBI):
elif archive.hasMetadata(MetaDataStyle.CBI):
style = MetaDataStyle.CBI
else:
style = None

read_metadata = archive.readMetadata if hasattr(archive, "readMetadata") else archive.read_metadata
loaded_metadata = read_metadata(style)
# if style is not None:
loaded_metadata = archive.readMetadata(style)

lang = loaded_metadata.language or ""
loaded_metadata.language = isoLanguages.get_lang3(lang)
if not no_cover_processing:
cover_file = _extract_cover(tmp_file_path, original_file_extension, rar_executable)
else:
cover_file = None

return BookMeta(
file_path=tmp_file_path,
extension=original_file_extension,
title=loaded_metadata.title or original_file_name,
author=" & ".join([credit["person"]
for credit in loaded_metadata.credits if credit["role"] == "Writer"]) or 'Unknown',
cover=cover_file,
cover=_extract_cover(tmp_file_path, original_file_extension, rar_executable),
description=loaded_metadata.comments or "",
tags="",
series=loaded_metadata.series or "",
@ -174,17 +133,13 @@ def get_comic_info(tmp_file_path, original_file_name, original_file_extension, r
publisher="",
pubdate="",
identifiers=[])
if not no_cover_processing:
cover_file = _extract_cover(tmp_file_path, original_file_extension, rar_executable)
else:
cover_file = None

return BookMeta(
file_path=tmp_file_path,
extension=original_file_extension,
title=original_file_name,
author='Unknown',
cover=cover_file,
author=u'Unknown',
cover=_extract_cover(tmp_file_path, original_file_extension, rar_executable),
description="",
tags="",
series="",
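The `cps/comic.py` hunks above bridge two generations of the comicapi API: older releases expose camelCase methods (`getPageNameList`, `getPage`) while newer ones use snake_case (`get_page_name_list`, `get_page`), and the diff picks one at runtime with `hasattr()`. A self-contained sketch of that compatibility pattern, using dummy archive classes in place of `ComicArchive` (the page names are illustrative):

```python
class OldArchive:
    # stand-in for the legacy camelCase comicapi interface
    def getPageNameList(self):
        return ['001.jpg', '002.jpg']

class NewArchive:
    # stand-in for the renamed snake_case interface
    def get_page_name_list(self):
        return ['001.jpg', '002.jpg']

def page_names(archive):
    # prefer the legacy name if present, otherwise fall back to the new one
    name_list = (archive.getPageNameList if hasattr(archive, "getPageNameList")
                 else archive.get_page_name_list)
    return name_list()

print(page_names(OldArchive()) == page_names(NewArchive()))  # → True
```

The same `hasattr` dispatch is applied in the diff to `seemsToBeAComicArchive`, `hasMetadata`, and `readMetadata`.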
@ -23,10 +23,6 @@ import json
from sqlalchemy import Column, String, Integer, SmallInteger, Boolean, BLOB, JSON
from sqlalchemy.exc import OperationalError
from sqlalchemy.sql.expression import text
from sqlalchemy import exists
from cryptography.fernet import Fernet
import cryptography.exceptions
from base64 import urlsafe_b64decode
try:
# Compatibility with sqlalchemy 2.0
from sqlalchemy.orm import declarative_base
@ -34,8 +30,7 @@ except ImportError:
from sqlalchemy.ext.declarative import declarative_base

from . import constants, logger
from .subproc_wrapper import process_wait
from .string_helper import strip_whitespaces


log = logger.create()
_Base = declarative_base()
@ -48,7 +43,6 @@ class _Flask_Settings(_Base):
flask_session_key = Column(BLOB, default=b"")

def __init__(self, key):
super().__init__()
self.flask_session_key = key


@ -62,8 +56,7 @@ class _Settings(_Base):
mail_port = Column(Integer, default=25)
mail_use_ssl = Column(SmallInteger, default=0)
mail_login = Column(String, default='mail@example.com')
mail_password_e = Column(String)
mail_password = Column(String)
mail_password = Column(String, default='mypassword')
mail_from = Column(String, default='automailer <mail@example.com>')
mail_size = Column(Integer, default=25*1024*1024)
mail_server_type = Column(SmallInteger, default=0)
@ -71,27 +64,24 @@ class _Settings(_Base):

config_calibre_dir = Column(String)
config_calibre_uuid = Column(String)
config_calibre_split = Column(Boolean, default=False)
config_calibre_split_dir = Column(String)
config_port = Column(Integer, default=constants.DEFAULT_PORT)
config_external_port = Column(Integer, default=constants.DEFAULT_PORT)
config_certfile = Column(String)
config_keyfile = Column(String)
config_trustedhosts = Column(String, default='')
config_calibre_web_title = Column(String, default='Calibre-Web')
config_calibre_web_title = Column(String, default=u'Calibre-Web')
config_books_per_page = Column(Integer, default=60)
config_random_books = Column(Integer, default=4)
config_authors_max = Column(Integer, default=0)
config_read_column = Column(Integer, default=0)
config_title_regex = Column(String,
default=r'^(A|The|An|Der|Die|Das|Den|Ein|Eine'
r'|Einen|Dem|Des|Einem|Eines|Le|La|Les|L\'|Un|Une)\s+')
config_title_regex = Column(String, default=r'^(A|The|An|Der|Die|Das|Den|Ein|Eine|Einen|Dem|Des|Einem|Eines)\s+')
# config_mature_content_tags = Column(String, default='')
config_theme = Column(Integer, default=0)

config_log_level = Column(SmallInteger, default=logger.DEFAULT_LOG_LEVEL)
config_logfile = Column(String, default=logger.DEFAULT_LOG_FILE)
config_logfile = Column(String)
config_access_log = Column(SmallInteger, default=0)
config_access_logfile = Column(String, default=logger.DEFAULT_ACCESS_LOG)
config_access_logfile = Column(String)

config_uploading = Column(SmallInteger, default=0)
config_anonbrowse = Column(SmallInteger, default=0)
@ -117,6 +107,7 @@ class _Settings(_Base):

config_use_goodreads = Column(Boolean, default=False)
config_goodreads_api_key = Column(String)
config_goodreads_api_secret = Column(String)
config_register_email = Column(Boolean, default=False)
config_login_type = Column(Integer, default=0)

@ -126,8 +117,7 @@ class _Settings(_Base):
config_ldap_port = Column(SmallInteger, default=389)
config_ldap_authentication = Column(SmallInteger, default=constants.LDAP_AUTH_SIMPLE)
config_ldap_serv_username = Column(String, default='cn=admin,dc=example,dc=org')
config_ldap_serv_password_e = Column(String)
config_ldap_serv_password = Column(String)
config_ldap_serv_password = Column(String, default="")
config_ldap_encryption = Column(SmallInteger, default=0)
config_ldap_cacert_path = Column(String, default="")
config_ldap_cert_path = Column(String, default="")
@ -142,12 +132,10 @@ class _Settings(_Base):

config_kepubifypath = Column(String, default=None)
config_converterpath = Column(String, default=None)
config_binariesdir = Column(String, default=None)
config_calibre = Column(String)
config_rarfile_location = Column(String, default=None)
config_upload_formats = Column(String, default=','.join(constants.EXTENSIONS_UPLOAD))
config_unicode_filename = Column(Boolean, default=False)
config_embed_metadata = Column(Boolean, default=True)

config_updatechannel = Column(Integer, default=constants.UPDATE_STABLE)

@ -159,52 +147,35 @@ class _Settings(_Base):
schedule_generate_book_covers = Column(Boolean, default=False)
schedule_generate_series_covers = Column(Boolean, default=False)
schedule_reconnect = Column(Boolean, default=False)
schedule_metadata_backup = Column(Boolean, default=False)

config_password_policy = Column(Boolean, default=True)
config_password_min_length = Column(Integer, default=8)
config_password_number = Column(Boolean, default=True)
config_password_lower = Column(Boolean, default=True)
config_password_upper = Column(Boolean, default=True)
config_password_character = Column(Boolean, default=True)
config_password_special = Column(Boolean, default=True)
config_session = Column(Integer, default=1)
config_ratelimiter = Column(Boolean, default=True)
config_limiter_uri = Column(String, default="")
config_limiter_options = Column(String, default="")
config_check_extensions = Column(Boolean, default=True)

def __repr__(self):
return self.__class__.__name__


# Class holds all application specific settings in calibre-web
class ConfigSQL(object):
class _ConfigSQL(object):
# pylint: disable=no-member
def __init__(self):
self.__dict__["dirty"] = list()
pass

def init_config(self, session, secret_key, cli):
def init_config(self, session, cli):
self._session = session
self._settings = None
self.db_configured = None
self.config_calibre_dir = None
self._fernet = Fernet(secret_key)
self.cli = cli
self.load()
self.cli = cli

change = False

if self.config_binariesdir is None:
if self.config_converterpath == None: # pylint: disable=access-member-before-definition
change = True
self.config_binariesdir = autodetect_calibre_binaries()
self.config_converterpath = autodetect_converter_binary(self.config_binariesdir)
self.config_converterpath = autodetect_calibre_binary()

if self.config_kepubifypath is None:
if self.config_kepubifypath == None: # pylint: disable=access-member-before-definition
change = True
self.config_kepubifypath = autodetect_kepubify_binary()

if self.config_rarfile_location is None:
if self.config_rarfile_location == None: # pylint: disable=access-member-before-definition
change = True
self.config_rarfile_location = autodetect_unrar_binary()
if change:
@ -268,19 +239,19 @@ class ConfigSQL(object):

def list_denied_tags(self):
mct = self.config_denied_tags or ""
return [strip_whitespaces(t) for t in mct.split(",")]
return [t.strip() for t in mct.split(",")]

def list_allowed_tags(self):
mct = self.config_allowed_tags or ""
return [strip_whitespaces(t) for t in mct.split(",")]
return [t.strip() for t in mct.split(",")]

def list_denied_column_values(self):
mct = self.config_denied_column_value or ""
return [strip_whitespaces(t) for t in mct.split(",")]
return [t.strip() for t in mct.split(",")]

def list_allowed_column_values(self):
mct = self.config_allowed_column_value or ""
return [strip_whitespaces(t) for t in mct.split(",")]
return [t.strip() for t in mct.split(",")]

def get_log_level(self):
return logger.get_level_name(self.config_log_level)
@ -322,10 +293,10 @@ class ConfigSQL(object):
setattr(self, field, new_value)
return True

def to_dict(self):
def toDict(self):
storage = {}
for k, v in self.__dict__.items():
if k[0] != '_' and not k.endswith("_e") and not k == "cli":
if k[0] != '_' and not k.endswith("password") and not k.endswith("secret") and not k == "cli":
storage[k] = v
return storage

@ -339,51 +310,38 @@ class ConfigSQL(object):
column = s.__class__.__dict__.get(k)
if column.default is not None:
v = column.default.arg
if k.endswith("_e") and v is not None:
try:
setattr(self, k, self._fernet.decrypt(v).decode())
except cryptography.fernet.InvalidToken:
setattr(self, k, "")
else:
setattr(self, k, v)
setattr(self, k, v)

have_metadata_db = bool(self.config_calibre_dir)
if have_metadata_db:
db_file = os.path.join(self.config_calibre_dir, 'metadata.db')
have_metadata_db = os.path.isfile(db_file)
self.db_configured = have_metadata_db

from . import cli_param
constants.EXTENSIONS_UPLOAD = [x.lstrip().rstrip().lower() for x in self.config_upload_formats.split(',')]
if os.environ.get('FLASK_DEBUG'):
logfile = logger.setup(logger.LOG_TO_STDOUT, logger.logging.DEBUG)
else:
# pylint: disable=access-member-before-definition
logfile = logger.setup(cli_param.logpath or self.config_logfile, self.config_log_level)
if logfile != os.path.abspath(self.config_logfile):
if logfile != os.path.abspath(cli_param.logpath):
log.warning("Log path %s not valid, falling back to default", self.config_logfile)
logfile = logger.setup(self.config_logfile, self.config_log_level)
if logfile != self.config_logfile:
log.warning("Log path %s not valid, falling back to default", self.config_logfile)
self.config_logfile = logfile
s.config_logfile = logfile
self._session.merge(s)
try:
self._session.commit()
except OperationalError as e:
log.error('Database error: %s', e)
self._session.rollback()
self.__dict__["dirty"] = list()

def save(self):
"""Apply all configuration values to the underlying storage."""
s = self._read_from_storage()  # type: _Settings

for k in self.dirty:
for k, v in self.__dict__.items():
if k[0] == '_':
continue
if hasattr(s, k):
if k.endswith("_e"):
setattr(s, k, self._fernet.encrypt(self.__dict__[k].encode()))
else:
setattr(s, k, self.__dict__[k])
setattr(s, k, v)

log.debug("_ConfigSQL updating storage")
self._session.merge(s)
@ -399,49 +357,20 @@ class ConfigSQL(object):
log.error(error)
log.warning("invalidating configuration")
self.db_configured = False
# self.config_calibre_dir = None
self.save()

def get_book_path(self):
return self.config_calibre_split_dir if self.config_calibre_split_dir else self.config_calibre_dir

def store_calibre_uuid(self, calibre_db, Library_table):
from . import app
try:
with app.app_context():
calibre_uuid = calibre_db.session.query(Library_table).one_or_none()
if self.config_calibre_uuid != calibre_uuid.uuid:
self.config_calibre_uuid = calibre_uuid.uuid
self.save()
calibre_uuid = calibre_db.session.query(Library_table).one_or_none()
if self.config_calibre_uuid != calibre_uuid.uuid:
self.config_calibre_uuid = calibre_uuid.uuid
self.save()
except AttributeError:
pass

def __setattr__(self, attr_name, attr_value):
super().__setattr__(attr_name, attr_value)
self.__dict__["dirty"].append(attr_name)

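The master branch makes `ConfigSQL.save()` write only the attributes that actually changed: `__setattr__` records every assignment in a `dirty` list, which is seeded through `self.__dict__` so the list itself is not tracked. A minimal standalone sketch of that mechanism (class name and attributes are illustrative):

```python
class DirtyTracking:
    def __init__(self):
        # assign via __dict__ to bypass __setattr__, so "dirty" is not recorded
        self.__dict__["dirty"] = list()

    def __setattr__(self, attr_name, attr_value):
        super().__setattr__(attr_name, attr_value)
        self.__dict__["dirty"].append(attr_name)

cfg = DirtyTracking()
cfg.config_port = 8083
cfg.config_theme = 1
print(cfg.dirty)  # → ['config_port', 'config_theme']
```

`save()` can then iterate `self.dirty` instead of the whole `__dict__`, and clear the list after a successful commit.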
def _encrypt_fields(session, secret_key):
try:
session.query(exists().where(_Settings.mail_password_e)).scalar()
except OperationalError:
with session.bind.connect() as conn:
conn.execute(text("ALTER TABLE settings ADD column 'mail_password_e' String"))
conn.execute(text("ALTER TABLE settings ADD column 'config_ldap_serv_password_e' String"))
session.commit()
crypter = Fernet(secret_key)
settings = session.query(_Settings.mail_password, _Settings.config_ldap_serv_password).first()
if settings.mail_password:
session.query(_Settings).update(
{_Settings.mail_password_e: crypter.encrypt(settings.mail_password.encode())})
if settings.config_ldap_serv_password:
session.query(_Settings).update(
{_Settings.config_ldap_serv_password_e: crypter.encrypt(settings.config_ldap_serv_password.encode())})
session.commit()


def _migrate_table(session, orm_class, secret_key=None):
if secret_key:
_encrypt_fields(session, secret_key)
def _migrate_table(session, orm_class):
changed = False

for column_name, column in orm_class.__dict__.items():
@ -479,37 +408,17 @@ def _migrate_table(session, orm_class, secret_key=None):
session.rollback()

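`_encrypt_fields` above first adds the new `*_e` columns with `ALTER TABLE` (probing for them via a query that fails with `OperationalError` when they are missing), then copies the legacy plain-text values over in encrypted form. The column-add step can be sketched with stdlib `sqlite3`, using `PRAGMA table_info` instead of the probe query (table and column names below are illustrative):

```python
import sqlite3

def ensure_column(conn, table, column, col_type="TEXT"):
    # row layout of PRAGMA table_info: (cid, name, type, notnull, dflt_value, pk)
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    if column not in cols:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {col_type}")
        return True  # column was added
    return False     # column already present

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE settings (mail_password TEXT)")
print(ensure_column(conn, "settings", "mail_password_e"))  # → True
print(ensure_column(conn, "settings", "mail_password_e"))  # → False
```

Making the migration idempotent matters here because `_migrate_table` runs on every startup.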
def autodetect_calibre_binaries():
def autodetect_calibre_binary():
if sys.platform == "win32":
calibre_path = ["C:\\program files\\calibre\\",
"C:\\program files(x86)\\calibre\\",
"C:\\program files(x86)\\calibre2\\",
"C:\\program files\\calibre2\\"]
elif sys.platform.startswith("freebsd"):
calibre_path = ["/usr/local/bin/"]
calibre_path = ["C:\\program files\\calibre\\ebook-convert.exe",
"C:\\program files(x86)\\calibre\\ebook-convert.exe",
"C:\\program files(x86)\\calibre2\\ebook-convert.exe",
"C:\\program files\\calibre2\\ebook-convert.exe"]
else:
calibre_path = ["/opt/calibre/"]
calibre_path = ["/opt/calibre/ebook-convert"]
for element in calibre_path:
supported_binary_paths = [os.path.join(element, binary)
for binary in constants.SUPPORTED_CALIBRE_BINARIES.values()]
if all(os.path.isfile(binary_path) and os.access(binary_path, os.X_OK)
for binary_path in supported_binary_paths):
values = [process_wait([binary_path, "--version"],
pattern=r'\(calibre (.*)\)') for binary_path in supported_binary_paths]
if all(values):
version = values[0].group(1)
log.debug("calibre version %s", version)
return element
return ""


def autodetect_converter_binary(calibre_path):
if sys.platform == "win32":
converter_path = os.path.join(calibre_path, "ebook-convert.exe")
else:
converter_path = os.path.join(calibre_path, "ebook-convert")
if calibre_path and os.path.isfile(converter_path) and os.access(converter_path, os.X_OK):
return converter_path
if os.path.isfile(element) and os.access(element, os.X_OK):
return element
return ""


@ -517,8 +426,6 @@ def autodetect_unrar_binary():
if sys.platform == "win32":
calibre_path = ["C:\\program files\\WinRar\\unRAR.exe",
"C:\\program files(x86)\\WinRar\\unRAR.exe"]
elif sys.platform.startswith("freebsd"):
calibre_path = ["/usr/local/bin/unrar"]
else:
calibre_path = ["/usr/bin/unrar"]
for element in calibre_path:
@ -531,8 +438,6 @@ def autodetect_kepubify_binary():
if sys.platform == "win32":
calibre_path = ["C:\\program files\\kepubify\\kepubify-windows-64Bit.exe",
"C:\\program files(x86)\\kepubify\\kepubify-windows-64Bit.exe"]
elif sys.platform.startswith("freebsd"):
calibre_path = ["/usr/local/bin/kepubify"]
else:
calibre_path = ["/opt/kepubify/kepubify-linux-64bit", "/opt/kepubify/kepubify-linux-32bit"]
for element in calibre_path:
@ -541,47 +446,28 @@ def autodetect_kepubify_binary():
return ""


def _migrate_database(session, secret_key):
def _migrate_database(session):
# make sure the table is created, if it does not exist
_Base.metadata.create_all(session.bind)
_migrate_table(session, _Settings, secret_key)
_migrate_table(session, _Settings)
_migrate_table(session, _Flask_Settings)


def load_configuration(session, secret_key):
_migrate_database(session, secret_key)
def load_configuration(conf, session, cli):
_migrate_database(session)

if not session.query(_Settings).count():
session.add(_Settings())
session.commit()
# conf = _ConfigSQL()
conf.init_config(session, cli)
# return conf

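The `autodetect_*` helpers above probe a fixed list of per-platform install directories and accept a candidate only if it exists and is executable. A standalone sketch of that probe, with a PATH fallback via `shutil.which` added as an assumption of mine (the real code does not consult PATH):

```python
import os
import shutil

def find_executable(name, extra_dirs=()):
    # check the hard-coded candidate directories first, as the diff does
    for directory in extra_dirs:
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    # fallback not present in the diff: search PATH
    return shutil.which(name) or ""

print(repr(find_executable("definitely-not-installed-binary-xyz")))  # → ''
```

Returning `""` on failure matches the diff's convention, where an empty configured path means "feature disabled".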
def get_flask_session_key(_session):
flask_settings = _session.query(_Flask_Settings).one_or_none()
if flask_settings is None:
if flask_settings == None:
flask_settings = _Flask_Settings(os.urandom(32))
_session.add(flask_settings)
_session.commit()
return flask_settings.flask_session_key


def get_encryption_key(key_path):
key_file = os.path.join(key_path, ".key")
generate = True
error = ""
key = None
if os.path.exists(key_file) and os.path.getsize(key_file) > 32:
with open(key_file, "rb") as f:
key = f.read()
try:
urlsafe_b64decode(key)
generate = False
except ValueError:
pass
if generate:
key = Fernet.generate_key()
try:
with open(key_file, "wb") as f:
f.write(key)
except PermissionError as e:
error = e
return key, error
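`get_encryption_key` above reuses an existing `.key` file only if it is large enough and decodes as url-safe base64, otherwise it generates and persists a fresh Fernet key. A stdlib-only sketch of that validate-or-regenerate logic; `os.urandom(32)` base64-encoded stands in for `Fernet.generate_key()` (which produces the same shape of key), and the path handling is simplified:

```python
import os
import tempfile
from base64 import urlsafe_b64encode, urlsafe_b64decode

def load_or_create_key(key_file):
    if os.path.exists(key_file) and os.path.getsize(key_file) > 32:
        with open(key_file, "rb") as f:
            key = f.read()
        try:
            urlsafe_b64decode(key)
            return key  # existing key decodes cleanly, keep it
        except ValueError:
            pass  # corrupt key file, fall through and regenerate
    key = urlsafe_b64encode(os.urandom(32))  # stand-in for Fernet.generate_key()
    with open(key_file, "wb") as f:
        f.write(key)
    return key

path = os.path.join(tempfile.mkdtemp(), ".key")
k1 = load_or_create_key(path)
k2 = load_or_create_key(path)
print(k1 == k2)  # → True, second call reuses the persisted key
```

The real function additionally returns a `PermissionError` (if any) to the caller instead of raising, so startup can report an unwritable config directory.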
@ -19,6 +19,9 @@
import sys
import os
from collections import namedtuple
from sqlalchemy import __version__ as sql_version

sqlalchemy_version2 = ([int(x) for x in sql_version.split('.')] >= [2, 0, 0])

# APP_MODE - production, development, or test
APP_MODE = os.environ.get('APP_MODE', 'production')
@ -31,8 +34,6 @@ UPDATER_AVAILABLE = True

# Base dir is parent of current file, necessary if called from different folder
BASE_DIR = os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), os.pardir))
# if executable file the files should be placed in the parent dir (parallel to the exe file)

STATIC_DIR = os.path.join(BASE_DIR, 'cps', 'static')
TEMPLATES_DIR = os.path.join(BASE_DIR, 'cps', 'templates')
TRANSLATIONS_DIR = os.path.join(BASE_DIR, 'cps', 'translations')
@ -48,9 +49,6 @@ if HOME_CONFIG:
CONFIG_DIR = os.environ.get('CALIBRE_DBPATH', home_dir)
else:
CONFIG_DIR = os.environ.get('CALIBRE_DBPATH', BASE_DIR)
if getattr(sys, 'frozen', False):
CONFIG_DIR = os.path.abspath(os.path.join(CONFIG_DIR, os.pardir))


DEFAULT_SETTINGS_FILE = "app.db"
DEFAULT_GDRIVE_FILE = "gdrive.db"
@ -146,23 +144,17 @@ del env_CALIBRE_PORT

EXTENSIONS_AUDIO = {'mp3', 'mp4', 'ogg', 'opus', 'wav', 'flac', 'm4a', 'm4b'}
EXTENSIONS_CONVERT_FROM = ['pdf', 'epub', 'mobi', 'azw3', 'docx', 'rtf', 'fb2', 'lit', 'lrf',
'txt', 'htmlz', 'rtf', 'odt', 'cbz', 'cbr', 'prc']
'txt', 'htmlz', 'rtf', 'odt', 'cbz', 'cbr']
EXTENSIONS_CONVERT_TO = ['pdf', 'epub', 'mobi', 'azw3', 'docx', 'rtf', 'fb2',
'lit', 'lrf', 'txt', 'htmlz', 'rtf', 'odt']
EXTENSIONS_UPLOAD = {'txt', 'pdf', 'epub', 'kepub', 'mobi', 'azw', 'azw3', 'cbr', 'cbz', 'cbt', 'cb7', 'djvu', 'djv',
EXTENSIONS_UPLOAD = {'txt', 'pdf', 'epub', 'kepub', 'mobi', 'azw', 'azw3', 'cbr', 'cbz', 'cbt', 'djvu',
'prc', 'doc', 'docx', 'fb2', 'html', 'rtf', 'lit', 'odt', 'mp3', 'mp4', 'ogg',
'opus', 'wav', 'flac', 'm4a', 'm4b'}

_extension = ""
if sys.platform == "win32":
_extension = ".exe"
SUPPORTED_CALIBRE_BINARIES = {binary: binary + _extension for binary in ["ebook-convert", "calibredb"]}


def has_flag(value, bit_flag):
return bit_flag == (bit_flag & (value or 0))


def selected_roles(dictionary):
return sum(v for k, v in ALL_ROLES.items() if k in dictionary)

@ -171,12 +163,13 @@ def selected_roles(dictionary):
BookMeta = namedtuple('BookMeta', 'file_path, extension, title, author, cover, description, tags, series, '
'series_id, languages, publisher, pubdate, identifiers')

# python build process likes to have x.y.zbw -> b for beta and w a counting number
STABLE_VERSION = '0.6.25b'
STABLE_VERSION = {'version': '0.6.19'}

NIGHTLY_VERSION = dict()
NIGHTLY_VERSION[0] = '$Format:%H$'
NIGHTLY_VERSION[1] = '$Format:%cI$'
# NIGHTLY_VERSION[0] = 'bb7d2c6273ae4560e83950d36d64533343623a57'
# NIGHTLY_VERSION[1] = '2018-09-09T10:13:08+02:00'

# CACHE
CACHE_TYPE_THUMBNAILS = 'thumbnails'
@ -190,7 +183,7 @@ THUMBNAIL_TYPE_AUTHOR = 3
COVER_THUMBNAIL_ORIGINAL = 0
COVER_THUMBNAIL_SMALL = 1
COVER_THUMBNAIL_MEDIUM = 2
COVER_THUMBNAIL_LARGE = 4
COVER_THUMBNAIL_LARGE = 3

# clean-up the module namespace
del sys, os, namedtuple
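The `has_flag` helper in the constants hunk above tests whether a bit is set in a possibly-`None` role value (`value or 0` treats an unset value as "no flags"). A standalone sketch with illustrative role constants (the real role bit values live in `cps/constants.py` and are not reproduced here):

```python
# illustrative bit-flag constants, not the real Calibre-Web role values
ROLE_ADMIN = 1
ROLE_DOWNLOAD = 2
ROLE_UPLOAD = 4

def has_flag(value, bit_flag):
    # None is coerced to 0, so no flag is ever "set" on an unset value
    return bit_flag == (bit_flag & (value or 0))

print(has_flag(ROLE_ADMIN | ROLE_UPLOAD, ROLE_UPLOAD))  # → True
print(has_flag(None, ROLE_DOWNLOAD))                    # → False
```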
@ -29,14 +29,13 @@ NO_JPEG_EXTENSIONS = ['.png', '.webp', '.bmp']
COVER_EXTENSIONS = ['.png', '.webp', '.bmp', '.jpg', '.jpeg']


def cover_processing(tmp_file_path, img, extension):
# tmp_cover_name = os.path.join(os.path.dirname(tmp_file_name), 'cover.jpg')
tmp_cover_name = tmp_file_path + '.jpg'
def cover_processing(tmp_file_name, img, extension):
tmp_cover_name = os.path.join(os.path.dirname(tmp_file_name), 'cover.jpg')
if extension in NO_JPEG_EXTENSIONS:
if use_IM:
with Image(blob=img) as imgc:
imgc.format = 'jpeg'
imgc.transform_colorspace('srgb')
imgc.transform_colorspace('rgb')
imgc.save(filename=tmp_cover_name)
return tmp_cover_name
else:
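The `cps/cover.py` hunk above changes the temporary cover name from a shared `cover.jpg` in the upload directory (0.6.19) to a name derived from the temp file itself (master), which avoids collisions when several uploads land in one directory. The two naming schemes side by side (`book` path is illustrative):

```python
import os

def cover_name_old(tmp_file_name):
    # 0.6.19: one shared 'cover.jpg' per directory
    return os.path.join(os.path.dirname(tmp_file_name), 'cover.jpg')

def cover_name_new(tmp_file_path):
    # master: per-file name derived from the temp file path
    return tmp_file_path + '.jpg'

book = '/tmp/upload/abc123'
print(cover_name_old(book))  # e.g. /tmp/upload/cover.jpg
print(cover_name_new(book))  # e.g. /tmp/upload/abc123.jpg
```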
@ -1,22 +0,0 @@
#
# Copyright 2015 Jordan Milne
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Source: https://github.com/JordanMilne/Advocate


from .adapters import ValidatingHTTPAdapter
from .api import *
from .addrvalidator import AddrValidator
from .exceptions import UnacceptableAddressException
@ -1,48 +0,0 @@
#
# Copyright 2015 Jordan Milne
# Licensed under the Apache License, Version 2.0 (the "License")
# Source: https://github.com/JordanMilne/Advocate


from requests.adapters import HTTPAdapter, DEFAULT_POOLBLOCK

from .addrvalidator import AddrValidator
from .exceptions import ProxyDisabledException
from .poolmanager import ValidatingPoolManager


class ValidatingHTTPAdapter(HTTPAdapter):
    __attrs__ = HTTPAdapter.__attrs__ + ['_validator']

    def __init__(self, *args, **kwargs):
        self._validator = kwargs.pop('validator', None)
        if not self._validator:
            self._validator = AddrValidator()
        super().__init__(*args, **kwargs)

    def init_poolmanager(self, connections, maxsize, block=DEFAULT_POOLBLOCK,
                         **pool_kwargs):
        self._pool_connections = connections
        self._pool_maxsize = maxsize
        self._pool_block = block
        self.poolmanager = ValidatingPoolManager(
            num_pools=connections,
            maxsize=maxsize,
            block=block,
            validator=self._validator,
            **pool_kwargs
        )

    def proxy_manager_for(self, proxy, **proxy_kwargs):
        raise ProxyDisabledException("Proxies cannot be used with Advocate")
@ -1,281 +0,0 @@
#
# Copyright 2015 Jordan Milne
# Licensed under the Apache License, Version 2.0 (the "License")
# Source: https://github.com/JordanMilne/Advocate


import functools
import fnmatch
import ipaddress
import re

try:
    import netifaces
    HAVE_NETIFACES = True
except ImportError:
    netifaces = None
    HAVE_NETIFACES = False

from .exceptions import NameserverException, ConfigException


def canonicalize_hostname(hostname):
    """Lowercase and punycodify a hostname"""
    # We do the lowercasing after IDNA encoding because we only want to
    # lowercase the *ASCII* chars.
    # TODO: The differences between IDNA2003 and IDNA2008 might be relevant
    # to us, but both specs are damn confusing.
    return str(hostname.encode("idna").lower(), 'utf-8')


def determine_local_addresses():
    """Get all IPs that refer to this machine according to netifaces"""
    if not HAVE_NETIFACES:
        raise ConfigException("Tried to determine local addresses, "
                              "but netifaces module was not importable")
    ips = []
    for interface in netifaces.interfaces():
        if_families = netifaces.ifaddresses(interface)
        for family_kind in {netifaces.AF_INET, netifaces.AF_INET6}:
            addrs = if_families.get(family_kind, [])
            for addr in (x.get("addr", "") for x in addrs):
                if family_kind == netifaces.AF_INET6:
                    # We can't do anything sensible with the scope here
                    addr = addr.split("%")[0]
                ips.append(ipaddress.ip_network(addr))
    return ips


def add_local_address_arg(func):
    """Add the "_local_addresses" kwarg if it's missing

    IMO this information shouldn't be cached between calls (what if one of the
    adapters got a new IP at runtime?) and we don't want each function to
    recalculate it. Just recalculate it if the caller didn't provide it for us.
    """
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        if "_local_addresses" not in kwargs:
            if self.autodetect_local_addresses:
                kwargs["_local_addresses"] = determine_local_addresses()
            else:
                kwargs["_local_addresses"] = []
        return func(self, *args, **kwargs)
    return wrapper


class AddrValidator:
    _6TO4_RELAY_NET = ipaddress.ip_network("192.88.99.0/24")
    # Just the well known prefix, DNS64 servers can set their own
    # prefix, but in practice most probably don't.
    _DNS64_WK_PREFIX = ipaddress.ip_network("64:ff9b::/96")
    DEFAULT_PORT_WHITELIST = {80, 8080, 443, 8443, 8000}

    def __init__(
            self,
            ip_blacklist=None,
            ip_whitelist=None,
            port_whitelist=None,
            port_blacklist=None,
            hostname_blacklist=None,
            allow_ipv6=False,
            allow_teredo=False,
            allow_6to4=False,
            allow_dns64=False,
            # Must be explicitly set to "False" if you don't want to try
            # detecting local interface addresses with netifaces.
            autodetect_local_addresses=True,
    ):
        if not port_blacklist and not port_whitelist:
            # An assortment of common HTTPS? ports.
            port_whitelist = self.DEFAULT_PORT_WHITELIST.copy()
        self.ip_blacklist = ip_blacklist or set()
        self.ip_whitelist = ip_whitelist or set()
        self.port_blacklist = port_blacklist or set()
        self.port_whitelist = port_whitelist or set()
        # TODO: ATM this can contain either regexes or globs that are converted
        # to regexes upon every check. Create a collection that automagically
        # converts them to regexes on insert?
        self.hostname_blacklist = hostname_blacklist or set()
        self.allow_ipv6 = allow_ipv6
        self.allow_teredo = allow_teredo
        self.allow_6to4 = allow_6to4
        self.allow_dns64 = allow_dns64
        self.autodetect_local_addresses = autodetect_local_addresses

    @add_local_address_arg
    def is_ip_allowed(self, addr_ip, _local_addresses=None):
        if not isinstance(addr_ip,
                          (ipaddress.IPv4Address, ipaddress.IPv6Address)):
            addr_ip = ipaddress.ip_address(addr_ip)

        # The whitelist should take precedence over the blacklist so we can
        # punch holes in blacklisted ranges
        if any(addr_ip in net for net in self.ip_whitelist):
            return True

        if any(addr_ip in net for net in self.ip_blacklist):
            return False

        if any(addr_ip in net for net in _local_addresses):
            return False

        if addr_ip.version == 4:
            if not addr_ip.is_private:
                # IPs for carrier-grade NAT. Seems weird that it doesn't set
                # `is_private`, but we need to check `not is_global`
                if not ipaddress.ip_network(addr_ip).is_global:
                    return False
        elif addr_ip.version == 6:
            # You'd better have a good reason for enabling IPv6
            # because Advocate's techniques don't work well without NAT.
            if not self.allow_ipv6:
                return False

            # v6 addresses can also map to IPv4 addresses! Tricky!
            v4_nested = []
            if addr_ip.ipv4_mapped:
                v4_nested.append(addr_ip.ipv4_mapped)
            # WTF IPv6? Why you gotta have a billion tunneling mechanisms?
            # XXX: Do we even really care about these? If we're tunneling
            # through public servers we shouldn't be able to access
            # addresses on our private network, right?
            if addr_ip.sixtofour:
                if not self.allow_6to4:
                    return False
                v4_nested.append(addr_ip.sixtofour)
            if addr_ip.teredo:
                if not self.allow_teredo:
                    return False
                # Check both the client *and* server IPs
                v4_nested.extend(addr_ip.teredo)
            if addr_ip in self._DNS64_WK_PREFIX:
                if not self.allow_dns64:
                    return False
                # When using the well-known prefix the last 4 bytes
                # are the IPv4 addr
                v4_nested.append(ipaddress.ip_address(addr_ip.packed[-4:]))

            if not all(self.is_ip_allowed(addr_v4) for addr_v4 in v4_nested):
                return False

            # fec0::*, apparently deprecated?
            if addr_ip.is_site_local:
                return False
        else:
            raise ValueError("Unsupported IP version(?): %r" % addr_ip)

        # 169.254.XXX.XXX, AWS uses these for autoconfiguration
        if addr_ip.is_link_local:
            return False
        # 127.0.0.1, ::1, etc.
        if addr_ip.is_loopback:
            return False
        if addr_ip.is_multicast:
            return False
        # 192.168.XXX.XXX, 10.XXX.XXX.XXX
        if addr_ip.is_private:
            return False
        # 255.255.255.255, ::ffff:XXXX:XXXX (v6->v4) mapping
        if addr_ip.is_reserved:
            return False
        # There's no reason to connect directly to a 6to4 relay
        if addr_ip in self._6TO4_RELAY_NET:
            return False
        # 0.0.0.0
        if addr_ip.is_unspecified:
            return False

        # It doesn't look bad, so... it must be ok!
        return True

    def _hostname_matches_pattern(self, hostname, pattern):
        # If they specified a string, just assume they only want basic globbing.
        # This stops people from not realizing they're dealing in REs and
        # not escaping their periods unless they specifically pass in an RE.
        # This has the added benefit of letting us sanely handle globbed
        # IDNs by default.
        if isinstance(pattern, str):
            # convert the glob to a punycode glob, then a regex
            pattern = fnmatch.translate(canonicalize_hostname(pattern))

        hostname = canonicalize_hostname(hostname)
        # Down the line the hostname may get treated as a null-terminated string
        # (as with `socket.getaddrinfo`.) Try to account for that.
        #
        # >>> socket.getaddrinfo("example.com\x00aaaa", 80)
        # [(2, 1, 6, '', ('93.184.216.34', 80)), [...]
        no_null_hostname = hostname.split("\x00")[0]

        return any(re.match(pattern, x.strip(".")) for x
                   in (no_null_hostname, hostname))

    def is_hostname_allowed(self, hostname):
        # Sometimes (like with "external" services that your IP has privileged
        # access to) you might not always know the IP range to blacklist access
        # to, or the `A` record might change without you noticing.
        # For e.x.: `foocorp.external.org`.
        #
        # Another option is doing something like:
        #
        # for addrinfo in socket.getaddrinfo("foocorp.external.org", 80):
        #     global_validator.ip_blacklist.add(ip_address(addrinfo[4][0]))
        #
        # but that's not always a good idea if they're behind a third-party lb.
        for pattern in self.hostname_blacklist:
            if self._hostname_matches_pattern(hostname, pattern):
                return False
        return True

    @add_local_address_arg
    def is_addrinfo_allowed(self, addrinfo, _local_addresses=None):
        assert(len(addrinfo) == 5)
        # XXX: Do we care about any of the other elements? Guessing not.
        family, socktype, proto, canonname, sockaddr = addrinfo

        # The 4th elem in addrinfo may either be a tuple of two or four items,
        # depending on whether we're dealing with IPv4 or v6
        if len(sockaddr) == 2:
            # v4
            ip, port = sockaddr
        elif len(sockaddr) == 4:
            # v6
            # XXX: what *are* `flow_info` and `scope_id`? Anything useful?
            # Seems like we can figure out all we need about the scope from
            # the `is_<x>` properties.
            ip, port, flow_info, scope_id = sockaddr
        else:
            raise ValueError("Unexpected addrinfo format %r" % sockaddr)

        # Probably won't help protect against SSRF, but might prevent our being
        # used to attack others' non-HTTP services. See
        # http://www.remote.org/jochen/sec/hfpa/
        if self.port_whitelist and port not in self.port_whitelist:
            return False
        if port in self.port_blacklist:
            return False

        if self.hostname_blacklist:
            if not canonname:
                raise NameserverException(
                    "addrinfo must contain the canon name to do blacklisting "
                    "based on hostname. Make sure you use the "
                    "`socket.AI_CANONNAME` flag, and that each record contains "
                    "the canon name. Your DNS server might also be garbage."
                )

            if not self.is_hostname_allowed(canonname):
                return False

        return self.is_ip_allowed(ip, _local_addresses=_local_addresses)
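The deleted `AddrValidator.is_ip_allowed` above rejects private, loopback, link-local, multicast, reserved, and unspecified addresses, and lets a whitelist punch holes in blacklisted ranges. The core of that check can be sketched with only the stdlib `ipaddress` module (`is_public_ip` is a hypothetical helper for illustration, not Advocate's API):

```python
import ipaddress


def is_public_ip(addr, whitelist=(), blacklist=()):
    """Rough sketch of Advocate-style SSRF filtering for a single address."""
    ip = ipaddress.ip_address(addr)
    # Whitelist takes precedence, so holes can be punched in blacklisted ranges.
    if any(ip in net for net in whitelist):
        return True
    if any(ip in net for net in blacklist):
        return False
    # Reject anything that is not globally routable; `not is_global` also
    # catches carrier-grade NAT space (100.64.0.0/10), which `is_private`
    # does not flag.
    return not (
        ip.is_private
        or ip.is_loopback
        or ip.is_link_local
        or ip.is_multicast
        or ip.is_reserved
        or ip.is_unspecified
        or not ip.is_global
    )


print(is_public_ip("10.0.0.5"))       # -> False
print(is_public_ip("169.254.1.1"))    # -> False
print(is_public_ip("93.184.216.34"))  # -> True
```

The real validator additionally recurses into IPv4 addresses nested inside IPv6 (mapped, 6to4, Teredo, DNS64), which this sketch omits.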
@ -1,202 +0,0 @@
#
# Copyright 2015 Jordan Milne
# Licensed under the Apache License, Version 2.0 (the "License")
# Source: https://github.com/JordanMilne/Advocate


"""
advocate.api
~~~~~~~~~~~~

This module implements the Requests API, largely a copy/paste from `requests`
itself.

:copyright: (c) 2015 by Jordan Milne.
:license: Apache2, see LICENSE for more details.

"""
from collections import OrderedDict
import hashlib
import pickle

from requests import Session as RequestsSession

# import cw_advocate
from .adapters import ValidatingHTTPAdapter
from .exceptions import MountDisabledException


class Session(RequestsSession):
    """Convenience wrapper around `requests.Session` set up for `advocate`ing"""

    __attrs__ = RequestsSession.__attrs__ + ["validator"]
    DEFAULT_VALIDATOR = None
    """
    User-replaceable default validator to use for all Advocate sessions,
    includes sessions created by advocate.get()
    """

    def __init__(self, *args, **kwargs):
        self.validator = kwargs.pop("validator", None) or self.DEFAULT_VALIDATOR
        adapter_kwargs = kwargs.pop("_adapter_kwargs", {})

        # `Session.__init__()` calls `mount()` internally, so we need to allow
        # it temporarily
        self.__mount_allowed = True
        RequestsSession.__init__(self, *args, **kwargs)

        # Drop any existing adapters
        self.adapters = OrderedDict()

        self.mount("http://", ValidatingHTTPAdapter(validator=self.validator, **adapter_kwargs))
        self.mount("https://", ValidatingHTTPAdapter(validator=self.validator, **adapter_kwargs))
        self.__mount_allowed = False

    def mount(self, *args, **kwargs):
        """Wrapper around `mount()` to prevent a protection bypass"""
        if self.__mount_allowed:
            super().mount(*args, **kwargs)
        else:
            raise MountDisabledException(
                "mount() is disabled to prevent protection bypasses"
            )


def session(*args, **kwargs):
    return Session(*args, **kwargs)


def request(method, url, **kwargs):
    """Constructs and sends a :class:`Request <Request>`.

    :param method: method for the new :class:`Request` object.
    :param url: URL for the new :class:`Request` object.
    :param params: (optional) Dictionary or bytes to be sent in the query string for the :class:`Request`.
    :param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
    :param json: (optional) json data to send in the body of the :class:`Request`.
    :param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`.
    :param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`.
    :param files: (optional) Dictionary of ``'name': file-like-objects`` (or ``{'name': ('filename', fileobj)}``) for multipart encoding upload.
    :param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
    :param timeout: (optional) How long to wait for the server to send data
        before giving up, as a float, or a (`connect timeout, read timeout
        <user/advanced.html#timeouts>`_) tuple.
    :type timeout: float or tuple
    :param allow_redirects: (optional) Boolean. Set to True if POST/PUT/DELETE redirect following is allowed.
    :type allow_redirects: bool
    :param proxies: (optional) Dictionary mapping protocol to the URL of the proxy.
    :param verify: (optional) if ``True``, the SSL cert will be verified. A CA_BUNDLE path can also be provided.
    :param stream: (optional) if ``False``, the response content will be immediately downloaded.
    :param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response
    """

    validator = kwargs.pop("validator", None)
    with Session(validator=validator) as sess:
        response = sess.request(method=method, url=url, **kwargs)
    return response


def get(url, **kwargs):
    """Sends a GET request.

    :param url: URL for the new :class:`Request` object.
    :param **kwargs: Optional arguments that ``request`` takes.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response
    """

    kwargs.setdefault('allow_redirects', True)
    return request('get', url, **kwargs)


class RequestsAPIWrapper:
    """Provides a `requests.api`-like interface with a specific validator"""

    # Due to how the classes are dynamically constructed, pickling may not work
    # correctly unless loaded within the same interpreter instance.
    # Enable at your peril.
    SUPPORT_WRAPPER_PICKLING = False

    def __init__(self, validator):
        # Do this here to avoid circular import issues
        try:
            from .futures import FuturesSession
            have_requests_futures = True
        except ImportError as e:
            have_requests_futures = False

        self.validator = validator
        outer_self = self

        class _WrappedSession(Session):
            """An `advocate.Session` that uses the wrapper's blacklist

            the wrapper is meant to be a transparent replacement for `requests`,
            so people should be able to subclass `wrapper.Session` and still
            get the desired validation behaviour
            """
            DEFAULT_VALIDATOR = outer_self.validator

        self._make_wrapper_cls_global(_WrappedSession)

        if have_requests_futures:

            class _WrappedFuturesSession(FuturesSession):
                """Like _WrappedSession, but for `FuturesSession`s"""
                DEFAULT_VALIDATOR = outer_self.validator
            self._make_wrapper_cls_global(_WrappedFuturesSession)

            self.FuturesSession = _WrappedFuturesSession

        self.request = self._default_arg_wrapper(request)
        self.get = self._default_arg_wrapper(get)
        self.Session = _WrappedSession

    def __getattr__(self, item):
        # This class is meant to mimic the requests base module, so if we don't
        # have this attribute, it might be on the base module (like the Request
        # class, etc.)
        try:
            return object.__getattribute__(self, item)
        except AttributeError:
            from . import cw_advocate
            return getattr(cw_advocate, item)

    def _default_arg_wrapper(self, fun):
        def wrapped_func(*args, **kwargs):
            kwargs.setdefault("validator", self.validator)
            return fun(*args, **kwargs)
        return wrapped_func

    def _make_wrapper_cls_global(self, cls):
        if not self.SUPPORT_WRAPPER_PICKLING:
            return
        # Gnarly, but necessary to give pickle a consistent module-level
        # reference for each wrapper.
        wrapper_hash = hashlib.sha256(pickle.dumps(self)).hexdigest()
        cls.__name__ = "_".join((cls.__name__, wrapper_hash))
        cls.__qualname__ = ".".join((__name__, cls.__name__))
        if not globals().get(cls.__name__):
            globals()[cls.__name__] = cls


__all__ = (
    "get",
    "request",
    "session",
    "Session",
    "RequestsAPIWrapper",
)
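`RequestsAPIWrapper._default_arg_wrapper` above injects the wrapper's validator into every call through a closure, while still letting callers override it per call. The pattern in isolation (toy names, not Advocate's API):

```python
import functools


def with_default_kwarg(fun, name, value):
    """Return fun with keyword `name` defaulting to `value` unless overridden."""
    @functools.wraps(fun)
    def wrapped_func(*args, **kwargs):
        # setdefault only fills the kwarg in when the caller omitted it.
        kwargs.setdefault(name, value)
        return fun(*args, **kwargs)
    return wrapped_func


def fetch(url, validator=None):
    # Stand-in for advocate's request(); just echoes what it received.
    return (url, validator)


guarded_fetch = with_default_kwarg(fetch, "validator", "default-validator")
print(guarded_fetch("http://example.com"))
# -> ('http://example.com', 'default-validator')
print(guarded_fetch("http://example.com", validator="custom"))
# -> ('http://example.com', 'custom')
```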
@ -1,201 +0,0 @@
#
# Copyright 2015 Jordan Milne
# Licensed under the Apache License, Version 2.0 (the "License")
# Source: https://github.com/JordanMilne/Advocate


import ipaddress
import socket
from socket import timeout as SocketTimeout

from urllib3.connection import HTTPSConnection, HTTPConnection
from urllib3.exceptions import ConnectTimeoutError
from urllib3.util.connection import _set_socket_options
from urllib3.util.connection import create_connection as old_create_connection

from . import addrvalidator
from .exceptions import UnacceptableAddressException


def advocate_getaddrinfo(host, port, get_canonname=False):
    addrinfo = socket.getaddrinfo(
        host,
        port,
        0,
        socket.SOCK_STREAM,
        0,
        # We need what the DNS client sees the hostname as, correctly handles
        # IDNs and tricky things like `private.foocorp.org\x00.google.com`.
        # All IDNs will be converted to punycode.
        socket.AI_CANONNAME if get_canonname else 0,
    )
    return fix_addrinfo(addrinfo)


def fix_addrinfo(records):
    """
    Propagate the canonname across records and parse IPs

    I'm not sure if this is just the behaviour of `getaddrinfo` on Linux, but
    it seems like only the first record in the set has the canonname field
    populated.
    """
    def fix_record(record, canonname):
        sa = record[4]
        sa = (ipaddress.ip_address(sa[0]),) + sa[1:]
        return record[0], record[1], record[2], canonname, sa

    canonname = None
    if records:
        # Apparently the canonical name is only included in the first record?
        # Add it to all of them.
        assert(len(records[0]) == 5)
        canonname = records[0][3]
    return tuple(fix_record(x, canonname) for x in records)


# Lifted from requests' urllib3, which in turn lifted it from `socket.py`. Oy!
def validating_create_connection(address,
                                 timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
                                 source_address=None, socket_options=None,
                                 validator=None):
    """Connect to *address* and return the socket object.

    Convenience function. Connect to *address* (a 2-tuple ``(host,
    port)``) and return the socket object. Passing the optional
    *timeout* parameter will set the timeout on the socket instance
    before attempting to connect. If no *timeout* is supplied, the
    global default timeout setting returned by :func:`getdefaulttimeout`
    is used. If *source_address* is set it must be a tuple of (host, port)
    for the socket to bind as a source address before making the connection.
    A host of '' or port 0 tells the OS to use the default.
    """

    host, port = address
    # We can skip asking for the canon name if we're not doing hostname-based
    # blacklisting.
    need_canonname = False
    if validator.hostname_blacklist:
        need_canonname = True
        # We check both the non-canonical and canonical hostnames so we can
        # catch both of these:
        # CNAME from nonblacklisted.com -> blacklisted.com
        # CNAME from blacklisted.com -> nonblacklisted.com
        if not validator.is_hostname_allowed(host):
            raise UnacceptableAddressException(host)

    err = None
    addrinfo = advocate_getaddrinfo(host, port, get_canonname=need_canonname)
    if addrinfo:
        if validator.autodetect_local_addresses:
            local_addresses = addrvalidator.determine_local_addresses()
        else:
            local_addresses = []
        for res in addrinfo:
            # Are we allowed to connect with this result?
            if not validator.is_addrinfo_allowed(
                    res,
                    _local_addresses=local_addresses,
            ):
                continue
            af, socktype, proto, canonname, sa = res
            # Unparse the validated IP
            sa = (sa[0].exploded,) + sa[1:]
            sock = None
            try:
                sock = socket.socket(af, socktype, proto)

                # If provided, set socket level options before connecting.
                # This is the only addition urllib3 makes to this function.
                _set_socket_options(sock, socket_options)

                if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
                    sock.settimeout(timeout)
                if source_address:
                    sock.bind(source_address)
                sock.connect(sa)
                return sock

            except socket.error as _:
                err = _
                if sock is not None:
                    sock.close()
                    sock = None

        if err is None:
            # If we got here, none of the results were acceptable
            err = UnacceptableAddressException(address)
        if err is not None:
            raise err
    else:
        raise socket.error("getaddrinfo returns an empty list")


# TODO: Is there a better way to add this to multiple classes with different
# base classes? I tried a mixin, but it used the base method instead.
def _validating_new_conn(self):
    """ Establish a socket connection and set nodelay settings on it.

    :return: New socket connection.
    """
    extra_kw = {}
    if self.source_address:
        extra_kw['source_address'] = self.source_address

    if self.socket_options:
        extra_kw['socket_options'] = self.socket_options

    try:
        # Hack around HTTPretty's patched sockets
        # TODO: some better method of hacking around it that checks if we
        # _would have_ connected to a private addr?
        conn_func = validating_create_connection
        if socket.getaddrinfo.__module__.startswith("httpretty"):
            conn_func = old_create_connection
        else:
            extra_kw["validator"] = self._validator

        conn = conn_func(
            (self.host, self.port),
            self.timeout,
            **extra_kw
        )

    except SocketTimeout:
        raise ConnectTimeoutError(
            self, "Connection to %s timed out. (connect timeout=%s)" %
            (self.host, self.timeout))

    return conn


# Don't silently break if the private API changes across urllib3 versions
assert(hasattr(HTTPConnection, '_new_conn'))
assert(hasattr(HTTPSConnection, '_new_conn'))


class ValidatingHTTPConnection(HTTPConnection):
    _new_conn = _validating_new_conn

    def __init__(self, *args, **kwargs):
        self._validator = kwargs.pop("validator")
        HTTPConnection.__init__(self, *args, **kwargs)


class ValidatingHTTPSConnection(HTTPSConnection):
    _new_conn = _validating_new_conn

    def __init__(self, *args, **kwargs):
        self._validator = kwargs.pop("validator")
        HTTPSConnection.__init__(self, *args, **kwargs)
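`fix_addrinfo()` above exists because `getaddrinfo(..., AI_CANONNAME)` typically sets the canonical name only on the first record. With synthetic records (no DNS lookup involved), the propagation looks like this sketch:

```python
import ipaddress


def fix_addrinfo(records):
    """Propagate the canonname across records and parse IPs, as above."""
    def fix_record(record, canonname):
        sa = record[4]
        # Replace the string IP with a parsed ipaddress object.
        sa = (ipaddress.ip_address(sa[0]),) + sa[1:]
        return record[0], record[1], record[2], canonname, sa

    # Only the first record carries the canon name; copy it to all of them.
    canonname = records[0][3] if records else None
    return tuple(fix_record(x, canonname) for x in records)


# Two synthetic IPv4 records shaped like getaddrinfo() output
# (family, socktype, proto, canonname, sockaddr); hostnames/IPs are
# illustrative documentation values.
records = [
    (2, 1, 6, "canon.example.com", ("192.0.2.10", 80)),
    (2, 1, 6, "", ("192.0.2.11", 80)),
]
fixed = fix_addrinfo(records)
print(fixed[1][3])  # -> canon.example.com
```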
@ -1,39 +0,0 @@
#
# Copyright 2015 Jordan Milne
# Licensed under the Apache License, Version 2.0 (the "License")
# Source: https://github.com/JordanMilne/Advocate


from urllib3 import HTTPConnectionPool, HTTPSConnectionPool

from .connection import (
    ValidatingHTTPConnection,
    ValidatingHTTPSConnection,
)

# Don't silently break if the private API changes across urllib3 versions
assert(hasattr(HTTPConnectionPool, 'ConnectionCls'))
assert(hasattr(HTTPSConnectionPool, 'ConnectionCls'))
assert(hasattr(HTTPConnectionPool, 'scheme'))
assert(hasattr(HTTPSConnectionPool, 'scheme'))


class ValidatingHTTPConnectionPool(HTTPConnectionPool):
    scheme = 'http'
    ConnectionCls = ValidatingHTTPConnection


class ValidatingHTTPSConnectionPool(HTTPSConnectionPool):
    scheme = 'https'
    ConnectionCls = ValidatingHTTPSConnection
@ -1,39 +0,0 @@
#
# Copyright 2015 Jordan Milne
# Licensed under the Apache License, Version 2.0 (the "License")
# Source: https://github.com/JordanMilne/Advocate


class AdvocateException(Exception):
    pass


class UnacceptableAddressException(AdvocateException):
    pass


class NameserverException(AdvocateException):
    pass


class MountDisabledException(AdvocateException):
    pass


class ProxyDisabledException(NotImplementedError, AdvocateException):
    pass


class ConfigException(AdvocateException):
    pass
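`ProxyDisabledException` above deliberately inherits from both `NotImplementedError` and the package's own base class, so callers can catch it either as a standard exception or as an Advocate-specific one. A quick standalone check of that design:

```python
# Re-declared here for illustration; these mirror the exceptions.py above.
class AdvocateException(Exception):
    pass


class ProxyDisabledException(NotImplementedError, AdvocateException):
    """Raised when proxy support is requested but deliberately unsupported."""


try:
    raise ProxyDisabledException("Proxies cannot be used with Advocate")
except NotImplementedError as e:
    # The same exception is also catchable as the package-level base class.
    print(isinstance(e, AdvocateException))  # -> True
```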
@ -1,61 +0,0 @@
#
# Copyright 2015 Jordan Milne
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Source: https://github.com/JordanMilne/Advocate

import collections
import functools

from urllib3 import PoolManager
from urllib3.poolmanager import _default_key_normalizer, PoolKey

from .connectionpool import (
    ValidatingHTTPSConnectionPool,
    ValidatingHTTPConnectionPool,
)

pool_classes_by_scheme = {
    "http": ValidatingHTTPConnectionPool,
    "https": ValidatingHTTPSConnectionPool,
}

AdvocatePoolKey = collections.namedtuple('AdvocatePoolKey',
                                         PoolKey._fields + ('key_validator',))


def key_normalizer(key_class, request_context):
    request_context = request_context.copy()
    # TODO: add ability to serialize validator rules to dict,
    # allowing pool to be shared between sessions with the same
    # rules.
    request_context["validator"] = id(request_context["validator"])
    return _default_key_normalizer(key_class, request_context)


key_fn_by_scheme = {
    'http': functools.partial(key_normalizer, AdvocatePoolKey),
    'https': functools.partial(key_normalizer, AdvocatePoolKey),
}


class ValidatingPoolManager(PoolManager):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

        # Make sure the API hasn't changed
        assert (hasattr(self, 'pool_classes_by_scheme'))

        self.pool_classes_by_scheme = pool_classes_by_scheme
        self.key_fn_by_scheme = key_fn_by_scheme.copy()
@@ -1,98 +0,0 @@
# from .__about__ import __version__
from .config import AUTH_HEADER_NAME
from .config import COOKIE_DURATION
from .config import COOKIE_HTTPONLY
from .config import COOKIE_NAME
from .config import COOKIE_SECURE
from .config import ID_ATTRIBUTE
from .config import LOGIN_MESSAGE
from .config import LOGIN_MESSAGE_CATEGORY
from .config import REFRESH_MESSAGE
from .config import REFRESH_MESSAGE_CATEGORY
from .login_manager import LoginManager
from .mixins import AnonymousUserMixin
from .mixins import UserMixin
from .signals import session_protected
from .signals import user_accessed
from .signals import user_loaded_from_cookie
from .signals import user_loaded_from_request
from .signals import user_logged_in
from .signals import user_logged_out
from .signals import user_login_confirmed
from .signals import user_needs_refresh
from .signals import user_unauthorized
# from .test_client import FlaskLoginClient
from .utils import confirm_login
from .utils import current_user
from .utils import decode_cookie
from .utils import encode_cookie
from .utils import fresh_login_required
from .utils import login_fresh
from .utils import login_remembered
from .utils import login_required
from .utils import login_url
from .utils import login_user
from .utils import logout_user
from .utils import make_next_param
from .utils import set_login_view

__version_info__ = ("0", "6", "3")
__version__ = ".".join(__version_info__)


__all__ = [
    "__version__",
    "AUTH_HEADER_NAME",
    "COOKIE_DURATION",
    "COOKIE_HTTPONLY",
    "COOKIE_NAME",
    "COOKIE_SECURE",
    "ID_ATTRIBUTE",
    "LOGIN_MESSAGE",
    "LOGIN_MESSAGE_CATEGORY",
    "REFRESH_MESSAGE",
    "REFRESH_MESSAGE_CATEGORY",
    "LoginManager",
    "AnonymousUserMixin",
    "UserMixin",
    "session_protected",
    "user_accessed",
    "user_loaded_from_cookie",
    "user_loaded_from_request",
    "user_logged_in",
    "user_logged_out",
    "user_login_confirmed",
    "user_needs_refresh",
    "user_unauthorized",
    # "FlaskLoginClient",
    "confirm_login",
    "current_user",
    "decode_cookie",
    "encode_cookie",
    "fresh_login_required",
    "login_fresh",
    "login_remembered",
    "login_required",
    "login_url",
    "login_user",
    "logout_user",
    "make_next_param",
    "set_login_view",
]


def __getattr__(name):
    if name == "user_loaded_from_header":
        import warnings
        from .signals import _user_loaded_from_header

        warnings.warn(
            "'user_loaded_from_header' is deprecated and will be"
            " removed in Flask-Login 0.7. Use"
            " 'user_loaded_from_request' instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return _user_loaded_from_header

    raise AttributeError(name)
@@ -1,55 +0,0 @@
from datetime import timedelta

#: The default name of the "remember me" cookie (``remember_token``)
COOKIE_NAME = "remember_token"

#: The default time before the "remember me" cookie expires (365 days).
COOKIE_DURATION = timedelta(days=365)

#: Whether the "remember me" cookie requires Secure; defaults to ``False``
COOKIE_SECURE = False

#: Whether the "remember me" cookie uses HttpOnly or not; defaults to ``True``
COOKIE_HTTPONLY = True

#: Whether the "remember me" cookie requires same origin; defaults to ``None``
COOKIE_SAMESITE = None

#: The default flash message to display when users need to log in.
LOGIN_MESSAGE = "Please log in to access this page."

#: The default flash message category to display when users need to log in.
LOGIN_MESSAGE_CATEGORY = "message"

#: The default flash message to display when users need to reauthenticate.
REFRESH_MESSAGE = "Please reauthenticate to access this page."

#: The default flash message category to display when users need to
#: reauthenticate.
REFRESH_MESSAGE_CATEGORY = "message"

#: The default attribute to retrieve the str id of the user
ID_ATTRIBUTE = "get_id"

#: Default name of the auth header (``Authorization``)
AUTH_HEADER_NAME = "Authorization"

#: A set of session keys that are populated by Flask-Login. Use this set to
#: purge keys safely and accurately.
SESSION_KEYS = {
    "_user_id",
    "_remember",
    "_remember_seconds",
    "_id",
    "_fresh",
    "next",
}

#: A set of HTTP methods which are exempt from `login_required` and
#: `fresh_login_required`. By default, this is just ``OPTIONS``.
EXEMPT_METHODS = {"OPTIONS"}

#: If true, the page the user is attempting to access is stored in the session
#: rather than a url parameter when redirecting to the login view; defaults to
#: ``False``.
USE_SESSION_FOR_NEXT = False
@@ -1,555 +0,0 @@
from datetime import datetime
from datetime import timezone
from datetime import timedelta
import hashlib

from flask import abort
from flask import current_app
from flask import flash
from flask import g
from flask import has_app_context
from flask import redirect
from flask import request
from flask import session
from itsdangerous import URLSafeSerializer
from flask.json.tag import TaggedJSONSerializer

from .config import AUTH_HEADER_NAME
from .config import COOKIE_DURATION
from .config import COOKIE_HTTPONLY
from .config import COOKIE_NAME
from .config import COOKIE_SAMESITE
from .config import COOKIE_SECURE
from .config import ID_ATTRIBUTE
from .config import LOGIN_MESSAGE
from .config import LOGIN_MESSAGE_CATEGORY
from .config import REFRESH_MESSAGE
from .config import REFRESH_MESSAGE_CATEGORY
from .config import SESSION_KEYS
from .config import USE_SESSION_FOR_NEXT
from .mixins import AnonymousUserMixin
from .signals import session_protected
from .signals import user_accessed
from .signals import user_loaded_from_cookie
from .signals import user_loaded_from_request
from .signals import user_needs_refresh
from .signals import user_unauthorized
from .utils import _create_identifier
from .utils import _user_context_processor
from .utils import confirm_login
from .utils import expand_login_view
from .utils import login_url as make_login_url
from .utils import make_next_param


class LoginManager:
    """This object is used to hold the settings used for logging in. Instances
    of :class:`LoginManager` are *not* bound to specific apps, so you can
    create one in the main body of your code and then bind it to your
    app in a factory function.
    """

    def __init__(self, app=None, add_context_processor=True):
        #: A class or factory function that produces an anonymous user, which
        #: is used when no one is logged in.
        self.anonymous_user = AnonymousUserMixin

        #: The name of the view to redirect to when the user needs to log in.
        #: (This can be an absolute URL as well, if your authentication
        #: machinery is external to your application.)
        self.login_view = None

        #: Names of views to redirect to when the user needs to log in,
        #: per blueprint. If the key value is set to None the value of
        #: :attr:`login_view` will be used instead.
        self.blueprint_login_views = {}

        #: The message to flash when a user is redirected to the login page.
        self.login_message = LOGIN_MESSAGE

        #: The message category to flash when a user is redirected to the login
        #: page.
        self.login_message_category = LOGIN_MESSAGE_CATEGORY

        #: The name of the view to redirect to when the user needs to
        #: reauthenticate.
        self.refresh_view = None

        #: The message to flash when a user is redirected to the 'needs
        #: refresh' page.
        self.needs_refresh_message = REFRESH_MESSAGE

        #: The message category to flash when a user is redirected to the
        #: 'needs refresh' page.
        self.needs_refresh_message_category = REFRESH_MESSAGE_CATEGORY

        #: The mode to use session protection in. This can be either
        #: ``'basic'`` (the default) or ``'strong'``, or ``None`` to disable
        #: it.
        self.session_protection = "basic"

        #: If present, used to translate flash messages ``self.login_message``
        #: and ``self.needs_refresh_message``
        self.localize_callback = None

        self.unauthorized_callback = None

        self.needs_refresh_callback = None

        self.id_attribute = ID_ATTRIBUTE

        self._user_callback = None

        self._header_callback = None

        self._request_callback = None

        self._session_identifier_generator = _create_identifier

        if app is not None:
            self.init_app(app, add_context_processor)

    def setup_app(self, app, add_context_processor=True):  # pragma: no cover
        """
        This method has been deprecated. Please use
        :meth:`LoginManager.init_app` instead.
        """
        import warnings

        warnings.warn(
            "'setup_app' is deprecated and will be removed in"
            " Flask-Login 0.7. Use 'init_app' instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        self.init_app(app, add_context_processor)

    def init_app(self, app, add_context_processor=True):
        """
        Configures an application. This registers an `after_request` call, and
        attaches this `LoginManager` to it as `app.login_manager`.

        :param app: The :class:`flask.Flask` object to configure.
        :type app: :class:`flask.Flask`
        :param add_context_processor: Whether to add a context processor to
            the app that adds a `current_user` variable to the template.
            Defaults to ``True``.
        :type add_context_processor: bool
        """
        app.login_manager = self
        app.after_request(self._update_remember_cookie)

        if add_context_processor:
            app.context_processor(_user_context_processor)

    def unauthorized(self):
        """
        This is called when the user is required to log in. If you register a
        callback with :meth:`LoginManager.unauthorized_handler`, then it will
        be called. Otherwise, it will take the following actions:

            - Flash :attr:`LoginManager.login_message` to the user.

            - If the app is using blueprints find the login view for
              the current blueprint using `blueprint_login_views`. If the app
              is not using blueprints or the login view for the current
              blueprint is not specified use the value of `login_view`.

            - Redirect the user to the login view. (The page they were
              attempting to access will be passed in the ``next`` query
              string variable, so you can redirect there if present instead
              of the homepage. Alternatively, it will be added to the session
              as ``next`` if USE_SESSION_FOR_NEXT is set.)

        If :attr:`LoginManager.login_view` is not defined, then it will simply
        raise a HTTP 401 (Unauthorized) error instead.

        This should be returned from a view or before/after_request function,
        otherwise the redirect will have no effect.
        """
        user_unauthorized.send(current_app._get_current_object())

        if self.unauthorized_callback:
            return self.unauthorized_callback()

        if request.blueprint in self.blueprint_login_views:
            login_view = self.blueprint_login_views[request.blueprint]
        else:
            login_view = self.login_view

        if not login_view:
            abort(401)

        if self.login_message:
            if self.localize_callback is not None:
                flash(
                    self.localize_callback(self.login_message),
                    category=self.login_message_category,
                )
            else:
                flash(self.login_message, category=self.login_message_category)

        config = current_app.config
        if config.get("USE_SESSION_FOR_NEXT", USE_SESSION_FOR_NEXT):
            login_url = expand_login_view(login_view)
            session["_id"] = self._session_identifier_generator()
            session["next"] = make_next_param(login_url, request.url)
            redirect_url = make_login_url(login_view)
        else:
            redirect_url = make_login_url(login_view, next_url=request.url)

        return redirect(redirect_url)

    def user_loader(self, callback):
        """
        This sets the callback for reloading a user from the session. The
        function you set should take a user ID (a ``str``) and return a
        user object, or ``None`` if the user does not exist.

        :param callback: The callback for retrieving a user object.
        :type callback: callable
        """
        self._user_callback = callback
        return self.user_callback

    @property
    def user_callback(self):
        """Gets the user_loader callback set by user_loader decorator."""
        return self._user_callback

    def request_loader(self, callback):
        """
        This sets the callback for loading a user from a Flask request.
        The function you set should take Flask request object and
        return a user object, or `None` if the user does not exist.

        :param callback: The callback for retrieving a user object.
        :type callback: callable
        """
        self._request_callback = callback
        return self.request_callback

    @property
    def request_callback(self):
        """Gets the request_loader callback set by request_loader decorator."""
        return self._request_callback

    def unauthorized_handler(self, callback):
        """
        This will set the callback for the `unauthorized` method, which among
        other things is used by `login_required`. It takes no arguments, and
        should return a response to be sent to the user instead of their
        normal view.

        :param callback: The callback for unauthorized users.
        :type callback: callable
        """
        self.unauthorized_callback = callback
        return callback

    def needs_refresh_handler(self, callback):
        """
        This will set the callback for the `needs_refresh` method, which among
        other things is used by `fresh_login_required`. It takes no arguments,
        and should return a response to be sent to the user instead of their
        normal view.

        :param callback: The callback for unauthorized users.
        :type callback: callable
        """
        self.needs_refresh_callback = callback
        return callback

    def needs_refresh(self):
        """
        This is called when the user is logged in, but they need to be
        reauthenticated because their session is stale. If you register a
        callback with `needs_refresh_handler`, then it will be called.
        Otherwise, it will take the following actions:

            - Flash :attr:`LoginManager.needs_refresh_message` to the user.

            - Redirect the user to :attr:`LoginManager.refresh_view`. (The page
              they were attempting to access will be passed in the ``next``
              query string variable, so you can redirect there if present
              instead of the homepage.)

        If :attr:`LoginManager.refresh_view` is not defined, then it will
        simply raise a HTTP 401 (Unauthorized) error instead.

        This should be returned from a view or before/after_request function,
        otherwise the redirect will have no effect.
        """
        user_needs_refresh.send(current_app._get_current_object())

        if self.needs_refresh_callback:
            return self.needs_refresh_callback()

        if not self.refresh_view:
            abort(401)

        if self.needs_refresh_message:
            if self.localize_callback is not None:
                flash(
                    self.localize_callback(self.needs_refresh_message),
                    category=self.needs_refresh_message_category,
                )
            else:
                flash(
                    self.needs_refresh_message,
                    category=self.needs_refresh_message_category,
                )

        config = current_app.config
        if config.get("USE_SESSION_FOR_NEXT", USE_SESSION_FOR_NEXT):
            login_url = expand_login_view(self.refresh_view)
            session["_id"] = self._session_identifier_generator()
            session["next"] = make_next_param(login_url, request.url)
            redirect_url = make_login_url(self.refresh_view)
        else:
            login_url = self.refresh_view
            redirect_url = make_login_url(login_url, next_url=request.url)

        return redirect(redirect_url)

    def _update_request_context_with_user(self, user=None):
        """Store the given user as ctx.user."""

        if user is None:
            user = self.anonymous_user()

        g._login_user = user

    def _load_user(self):
        """Loads user from session or remember_me cookie as applicable"""

        if self._user_callback is None and self._request_callback is None:
            raise Exception(
                "Missing user_loader or request_loader. Refer to "
                "https://flask-login.readthedocs.io/#how-it-works "
                "for more info."
            )

        user_accessed.send(current_app._get_current_object())

        # Check SESSION_PROTECTION
        if self._session_protection_failed():
            return self._update_request_context_with_user()

        user = None

        # Load user from Flask Session
        user_id = session.get("_user_id")
        user_random = session.get("_random")
        user_session_key = session.get("_id")
        if (user_id is not None
                and user_random is not None
                and user_session_key is not None
                and self._user_callback is not None):
            user = self._user_callback(user_id, user_random, user_session_key)

        # Load user from Remember Me Cookie or Request Loader
        if user is None:
            config = current_app.config
            cookie_name = config.get("REMEMBER_COOKIE_NAME", COOKIE_NAME)
            header_name = config.get("AUTH_HEADER_NAME", AUTH_HEADER_NAME)
            has_cookie = (
                cookie_name in request.cookies and session.get("_remember") != "clear"
            )
            if has_cookie:
                cookie = request.cookies[cookie_name]
                user = self._load_user_from_remember_cookie(cookie)
            elif self._request_callback:
                user = self._load_user_from_request(request)
            elif header_name in request.headers:
                header = request.headers[header_name]
                user = self._load_user_from_header(header)
        if not user:
            self._update_request_context_with_user()
        return self._update_request_context_with_user(user)

    def _session_protection_failed(self):
        sess = session._get_current_object()
        ident = self._session_identifier_generator()

        app = current_app._get_current_object()
        mode = app.config.get("SESSION_PROTECTION", self.session_protection)

        if not mode or mode not in ["basic", "strong"]:
            return False

        # if the sess is empty, it's an anonymous user or just logged out
        # so we can skip this
        if sess and ident != sess.get("_id", None):
            if mode == "basic" or sess.permanent:
                if sess.get("_fresh") is not False:
                    sess["_fresh"] = False
                session_protected.send(app)
                return False
            elif mode == "strong":
                for k in SESSION_KEYS:
                    sess.pop(k, None)

                sess["_remember"] = "clear"
                session_protected.send(app)
                return True

        return False

    def _load_user_from_remember_cookie(self, cookie):
        signer_kwargs = dict(
            key_derivation="hmac", digest_method=hashlib.sha1
        )
        try:
            remember_dict = URLSafeSerializer(
                current_app.secret_key,
                salt="remember",
                serializer=TaggedJSONSerializer(),
                signer_kwargs=signer_kwargs,
            ).loads(cookie)
        except Exception:
            return None

        if remember_dict['user'] is not None:
            session["_user_id"] = remember_dict['user']
            if "_random" not in session:
                session["_random"] = remember_dict['random']
            session["_fresh"] = False
            user = None
            if self._user_callback:
                user = self._user_callback(remember_dict['user'], session["_random"], None)
            if user is not None:
                app = current_app._get_current_object()
                user_loaded_from_cookie.send(app, user=user)
                # if session was restored from remember me cookie make login valid
                confirm_login()
                return user
        return None

    def _load_user_from_header(self, header):
        if self._header_callback:
            user = self._header_callback(header)
            if user is not None:
                app = current_app._get_current_object()

                from .signals import _user_loaded_from_header

                _user_loaded_from_header.send(app, user=user)
                return user
        return None

    def _load_user_from_request(self, request):
        if self._request_callback:
            user = self._request_callback(request)
            if user is not None:
                app = current_app._get_current_object()
                user_loaded_from_request.send(app, user=user)
                return user
        return None

    def _update_remember_cookie(self, response):
        # Don't modify the session unless there's something to do.
        if "_remember" not in session and current_app.config.get(
            "REMEMBER_COOKIE_REFRESH_EACH_REQUEST"
        ):
            session["_remember"] = "set"

        if "_remember" in session:
            operation = session.pop("_remember", None)

            if operation == "set" and "_user_id" in session:
                self._set_cookie(response)
            elif operation == "clear":
                self._clear_cookie(response)

        return response

    def _set_cookie(self, response):
        # cookie settings
        config = current_app.config
        cookie_name = config.get("REMEMBER_COOKIE_NAME", COOKIE_NAME)
        domain = config.get("REMEMBER_COOKIE_DOMAIN")
        path = config.get("REMEMBER_COOKIE_PATH", "/")

        secure = config.get("REMEMBER_COOKIE_SECURE", COOKIE_SECURE)
        httponly = config.get("REMEMBER_COOKIE_HTTPONLY", COOKIE_HTTPONLY)
        samesite = config.get("REMEMBER_COOKIE_SAMESITE", COOKIE_SAMESITE)

        if "_remember_seconds" in session:
            duration = timedelta(seconds=session["_remember_seconds"])
        else:
            duration = config.get("REMEMBER_COOKIE_DURATION", COOKIE_DURATION)

        # prepare data
        max_age = int(current_app.permanent_session_lifetime.total_seconds())
        signer_kwargs = dict(
            key_derivation="hmac", digest_method=hashlib.sha1
        )
        # save
        data = URLSafeSerializer(
            current_app.secret_key,
            salt="remember",
            serializer=TaggedJSONSerializer(),
            signer_kwargs=signer_kwargs,
        ).dumps({"user": session["_user_id"], "random": session["_random"]})

        if isinstance(duration, int):
            duration = timedelta(seconds=duration)

        try:
            expires = datetime.now(timezone.utc) + duration
        except TypeError as e:
            raise Exception(
                "REMEMBER_COOKIE_DURATION must be a datetime.timedelta,"
                f" instead got: {duration}"
            ) from e

        # actually set it
        response.set_cookie(
            cookie_name,
            value=data,
            expires=expires,
            domain=domain,
            path=path,
            secure=secure,
            httponly=httponly,
            samesite=samesite,
        )

    def _clear_cookie(self, response):
        config = current_app.config
        cookie_name = config.get("REMEMBER_COOKIE_NAME", COOKIE_NAME)
        domain = config.get("REMEMBER_COOKIE_DOMAIN")
        path = config.get("REMEMBER_COOKIE_PATH", "/")
        response.delete_cookie(cookie_name, domain=domain, path=path)

    @property
    def _login_disabled(self):
        """Legacy property, use app.config['LOGIN_DISABLED'] instead."""
        import warnings

        warnings.warn(
            "'_login_disabled' is deprecated and will be removed in"
            " Flask-Login 0.7. Use 'LOGIN_DISABLED' in 'app.config'"
            " instead.",
            DeprecationWarning,
            stacklevel=2,
        )

        if has_app_context():
            return current_app.config.get("LOGIN_DISABLED", False)
        return False

    @_login_disabled.setter
    def _login_disabled(self, newvalue):
        """Legacy property setter, use app.config['LOGIN_DISABLED'] instead."""
        import warnings

        warnings.warn(
            "'_login_disabled' is deprecated and will be removed in"
            " Flask-Login 0.7. Use 'LOGIN_DISABLED' in 'app.config'"
            " instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        current_app.config["LOGIN_DISABLED"] = newvalue
@@ -1,65 +0,0 @@
class UserMixin:
    """
    This provides default implementations for the methods that Flask-Login
    expects user objects to have.
    """

    # Python 3 implicitly set __hash__ to None if we override __eq__
    # We set it back to its default implementation
    __hash__ = object.__hash__

    @property
    def is_active(self):
        return True

    @property
    def is_authenticated(self):
        return self.is_active

    @property
    def is_anonymous(self):
        return False

    def get_id(self):
        try:
            return str(self.id)
        except AttributeError:
            raise NotImplementedError("No `id` attribute - override `get_id`") from None

    def __eq__(self, other):
        """
        Checks the equality of two `UserMixin` objects using `get_id`.
        """
        if isinstance(other, UserMixin):
            return self.get_id() == other.get_id()
        return NotImplemented

    def __ne__(self, other):
        """
        Checks the inequality of two `UserMixin` objects using `get_id`.
        """
        equal = self.__eq__(other)
        if equal is NotImplemented:
            return NotImplemented
        return not equal


class AnonymousUserMixin:
    """
    This is the default object for representing an anonymous user.
    """

    @property
    def is_authenticated(self):
        return False

    @property
    def is_active(self):
        return False

    @property
    def is_anonymous(self):
        return True

    def get_id(self):
        return
@@ -1,61 +0,0 @@
|
||||
from flask.signals import Namespace
|
||||
|
||||
_signals = Namespace()
|
||||
|
||||
#: Sent when a user is logged in. In addition to the app (which is the
|
||||
#: sender), it is passed `user`, which is the user being logged in.
|
||||
user_logged_in = _signals.signal("logged-in")
|
||||
|
||||
#: Sent when a user is logged out. In addition to the app (which is the
|
||||
#: sender), it is passed `user`, which is the user being logged out.
|
||||
user_logged_out = _signals.signal("logged-out")

#: Sent when the user is loaded from the cookie. In addition to the app (which
#: is the sender), it is passed `user`, which is the user being reloaded.
user_loaded_from_cookie = _signals.signal("loaded-from-cookie")

#: Sent when the user is loaded from the header. In addition to the app (which
#: is the sender), it is passed `user`, which is the user being reloaded.
_user_loaded_from_header = _signals.signal("loaded-from-header")

#: Sent when the user is loaded from the request. In addition to the app (which
#: is the sender), it is passed `user`, which is the user being reloaded.
user_loaded_from_request = _signals.signal("loaded-from-request")

#: Sent when a user's login is confirmed, marking it as fresh. (It is not
#: called for a normal login.)
#: It receives no additional arguments besides the app.
user_login_confirmed = _signals.signal("login-confirmed")

#: Sent when the `unauthorized` method is called on a `LoginManager`. It
#: receives no additional arguments besides the app.
user_unauthorized = _signals.signal("unauthorized")

#: Sent when the `needs_refresh` method is called on a `LoginManager`. It
#: receives no additional arguments besides the app.
user_needs_refresh = _signals.signal("needs-refresh")

#: Sent whenever the user is accessed/loaded. It
#: receives no additional arguments besides the app.
user_accessed = _signals.signal("accessed")

#: Sent whenever session protection takes effect, and a session is either
#: marked non-fresh or deleted. It receives no additional arguments besides
#: the app.
session_protected = _signals.signal("session-protected")


def __getattr__(name):
    if name == "user_loaded_from_header":
        import warnings

        warnings.warn(
            "'user_loaded_from_header' is deprecated and will be"
            " removed in Flask-Login 0.7. Use"
            " 'user_loaded_from_request' instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return _user_loaded_from_header

    raise AttributeError(name)
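The module-level signals above follow the observer pattern: handlers subscribe with `connect` and are notified on `send`, with the app as sender. A minimal stdlib-only sketch of that mechanism (the `Signal` class here is a simplified stand-in, not blinker's actual implementation, which these signals are built on):

```python
class Signal:
    """Simplified stand-in for a blinker signal: connect handlers, send events."""

    def __init__(self, name):
        self.name = name
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)
        return receiver

    def send(self, sender, **kwargs):
        # Mirrors blinker's return shape: a list of (receiver, result) pairs.
        return [(r, r(sender, **kwargs)) for r in self._receivers]


user_logged_in = Signal("logged-in")

seen = []
user_logged_in.connect(lambda sender, user=None: seen.append((sender, user)))
user_logged_in.send("app", user="alice")
print(seen)  # [('app', 'alice')]
```

Consumers of the real signals do the same thing: `user_logged_in.connect(handler)` at import time, and Flask-Login calls `send` inside `login_user`.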
@@ -1,424 +0,0 @@
import hmac
import os
from functools import wraps
from hashlib import sha512
from urllib.parse import parse_qs
from urllib.parse import urlencode
from urllib.parse import urlsplit
from urllib.parse import urlunsplit

from flask import current_app
from flask import g
from flask import has_request_context
from flask import request
from flask import session
from flask import url_for
from werkzeug.local import LocalProxy

from .config import COOKIE_NAME
from .config import EXEMPT_METHODS
from .signals import user_logged_in
from .signals import user_logged_out
from .signals import user_login_confirmed

#: A proxy for the current user. If no user is logged in, this will be an
#: anonymous user
current_user = LocalProxy(lambda: _get_user())


def encode_cookie(payload, key=None):
    """
    This will encode a ``str`` value into a cookie, and sign that cookie
    with the app's secret key.

    :param payload: The value to encode, as `str`.
    :type payload: str

    :param key: The key to use when creating the cookie digest. If not
        specified, the SECRET_KEY value from app config will be used.
    :type key: str
    """
    return f"{payload}|{_cookie_digest(payload, key=key)}"


def decode_cookie(cookie, key=None):
    """
    This decodes a cookie given by `encode_cookie`. If verification of the
    cookie fails, ``None`` will be implicitly returned.

    :param cookie: An encoded cookie.
    :type cookie: str

    :param key: The key to use when creating the cookie digest. If not
        specified, the SECRET_KEY value from app config will be used.
    :type key: str
    """
    try:
        payload, digest = cookie.rsplit("|", 1)
        if hasattr(digest, "decode"):
            digest = digest.decode("ascii")  # pragma: no cover
    except ValueError:
        return

    if hmac.compare_digest(_cookie_digest(payload, key=key), digest):
        return payload


def make_next_param(login_url, current_url):
    """
    Reduces the scheme and host from a given URL so it can be passed to
    the given `login` URL more efficiently.

    :param login_url: The login URL being redirected to.
    :type login_url: str
    :param current_url: The URL to reduce.
    :type current_url: str
    """
    l_url = urlsplit(login_url)
    c_url = urlsplit(current_url)

    if (not l_url.scheme or l_url.scheme == c_url.scheme) and (
        not l_url.netloc or l_url.netloc == c_url.netloc
    ):
        return urlunsplit(("", "", c_url.path, c_url.query, ""))
    return current_url


def expand_login_view(login_view):
    """
    Returns the url for the login view, expanding the view name to a url if
    needed.

    :param login_view: The name of the login view or a URL for the login view.
    :type login_view: str
    """
    if login_view.startswith(("https://", "http://", "/")):
        return login_view

    return url_for(login_view)


def login_url(login_view, next_url=None, next_field="next"):
    """
    Creates a URL for redirecting to a login page. If only `login_view` is
    provided, this will just return the URL for it. If `next_url` is provided,
    however, this will append a ``next=URL`` parameter to the query string
    so that the login view can redirect back to that URL. Flask-Login's default
    unauthorized handler uses this function when redirecting to your login url.
    To force the host name used, set `FORCE_HOST_FOR_REDIRECTS` to a host. This
    prevents redirecting to external sites if the request headers Host or
    X-Forwarded-For are present.

    :param login_view: The name of the login view. (Alternately, the actual
                       URL to the login view.)
    :type login_view: str
    :param next_url: The URL to give the login view for redirection.
    :type next_url: str
    :param next_field: What field to store the next URL in. (It defaults to
                       ``next``.)
    :type next_field: str
    """
    base = expand_login_view(login_view)

    if next_url is None:
        return base

    parsed_result = urlsplit(base)
    md = parse_qs(parsed_result.query, keep_blank_values=True)
    md[next_field] = make_next_param(base, next_url)
    netloc = current_app.config.get("FORCE_HOST_FOR_REDIRECTS") or parsed_result.netloc
    parsed_result = parsed_result._replace(
        netloc=netloc, query=urlencode(md, doseq=True)
    )
    return urlunsplit(parsed_result)


def login_fresh():
    """
    This returns ``True`` if the current login is fresh.
    """
    return session.get("_fresh", False)


def login_remembered():
    """
    This returns ``True`` if the current login is remembered across sessions.
    """
    config = current_app.config
    cookie_name = config.get("REMEMBER_COOKIE_NAME", COOKIE_NAME)
    has_cookie = cookie_name in request.cookies and session.get("_remember") != "clear"
    if has_cookie:
        cookie = request.cookies[cookie_name]
        user_id = decode_cookie(cookie)
        return user_id is not None
    return False


def login_user(user, remember=False, duration=None, force=False, fresh=True):
    """
    Logs a user in. You should pass the actual user object to this. If the
    user's `is_active` property is ``False``, they will not be logged in
    unless `force` is ``True``.

    This will return ``True`` if the log in attempt succeeds, and ``False`` if
    it fails (i.e. because the user is inactive).

    :param user: The user object to log in.
    :type user: object
    :param remember: Whether to remember the user after their session expires.
        Defaults to ``False``.
    :type remember: bool
    :param duration: The amount of time before the remember cookie expires. If
        ``None`` the value set in the settings is used. Defaults to ``None``.
    :type duration: :class:`datetime.timedelta`
    :param force: If the user is inactive, setting this to ``True`` will log
        them in regardless. Defaults to ``False``.
    :type force: bool
    :param fresh: setting this to ``False`` will log in the user with a session
        marked as not "fresh". Defaults to ``True``.
    :type fresh: bool
    """
    if not force and not user.is_active:
        return False

    user_id = getattr(user, current_app.login_manager.id_attribute)()
    session["_user_id"] = user_id
    session["_fresh"] = fresh
    session["_id"] = current_app.login_manager._session_identifier_generator()
    session["_random"] = os.urandom(10).hex()

    if remember:
        session["_remember"] = "set"
        if duration is not None:
            try:
                # equal to timedelta.total_seconds() but works with Python 2.6
                session["_remember_seconds"] = (
                    duration.microseconds
                    + (duration.seconds + duration.days * 24 * 3600) * 10**6
                ) / 10.0**6
            except AttributeError as e:
                raise Exception(
                    f"duration must be a datetime.timedelta, instead got: {duration}"
                ) from e

    current_app.login_manager._update_request_context_with_user(user)
    user_logged_in.send(current_app._get_current_object(), user=_get_user())
    return True


def logout_user():
    """
    Logs a user out. (You do not need to pass the actual user.) This will
    also clean up the remember me cookie if it exists.
    """

    user = _get_user()

    if "_user_id" in session:
        session.pop("_user_id")

    if "_fresh" in session:
        session.pop("_fresh")

    if "_id" in session:
        session.pop("_id")

    if "_random" in session:
        session.pop("_random")

    cookie_name = current_app.config.get("REMEMBER_COOKIE_NAME", COOKIE_NAME)
    if cookie_name in request.cookies:
        session["_remember"] = "clear"
        if "_remember_seconds" in session:
            session.pop("_remember_seconds")

    user_logged_out.send(current_app._get_current_object(), user=user)

    current_app.login_manager._update_request_context_with_user()
    return True


def confirm_login():
    """
    This sets the current session as fresh. Sessions become stale when they
    are reloaded from a cookie.
    """
    session["_fresh"] = True
    session["_id"] = current_app.login_manager._session_identifier_generator()
    user_login_confirmed.send(current_app._get_current_object())


def login_required(func):
    """
    If you decorate a view with this, it will ensure that the current user is
    logged in and authenticated before calling the actual view. (If they are
    not, it calls the :attr:`LoginManager.unauthorized` callback.) For
    example::

        @app.route('/post')
        @login_required
        def post():
            pass

    If there are only certain times you need to require that your user is
    logged in, you can do so with::

        if not current_user.is_authenticated:
            return current_app.login_manager.unauthorized()

    ...which is essentially the code that this function adds to your views.

    It can be convenient to globally turn off authentication when unit testing.
    To enable this, if the application configuration variable `LOGIN_DISABLED`
    is set to `True`, this decorator will be ignored.

    .. Note ::

        Per `W3 guidelines for CORS preflight requests
        <http://www.w3.org/TR/cors/#cross-origin-request-with-preflight-0>`_,
        HTTP ``OPTIONS`` requests are exempt from login checks.

    :param func: The view function to decorate.
    :type func: function
    """

    @wraps(func)
    def decorated_view(*args, **kwargs):
        if request.method in EXEMPT_METHODS or current_app.config.get("LOGIN_DISABLED"):
            pass
        elif not current_user.is_authenticated:
            return current_app.login_manager.unauthorized()

        # flask 1.x compatibility
        # current_app.ensure_sync is only available in Flask >= 2.0
        if callable(getattr(current_app, "ensure_sync", None)):
            return current_app.ensure_sync(func)(*args, **kwargs)
        return func(*args, **kwargs)

    return decorated_view


def fresh_login_required(func):
    """
    If you decorate a view with this, it will ensure that the current user's
    login is fresh - i.e. their session was not restored from a 'remember me'
    cookie. Sensitive operations, like changing a password or e-mail, should
    be protected with this, to impede the efforts of cookie thieves.

    If the user is not authenticated, :meth:`LoginManager.unauthorized` is
    called as normal. If they are authenticated, but their session is not
    fresh, it will call :meth:`LoginManager.needs_refresh` instead. (In that
    case, you will need to provide a :attr:`LoginManager.refresh_view`.)

    Behaves identically to the :func:`login_required` decorator with respect
    to configuration variables.

    .. Note ::

        Per `W3 guidelines for CORS preflight requests
        <http://www.w3.org/TR/cors/#cross-origin-request-with-preflight-0>`_,
        HTTP ``OPTIONS`` requests are exempt from login checks.

    :param func: The view function to decorate.
    :type func: function
    """

    @wraps(func)
    def decorated_view(*args, **kwargs):
        if request.method in EXEMPT_METHODS or current_app.config.get("LOGIN_DISABLED"):
            pass
        elif not current_user.is_authenticated:
            return current_app.login_manager.unauthorized()
        elif not login_fresh():
            return current_app.login_manager.needs_refresh()
        try:
            # current_app.ensure_sync available in Flask >= 2.0
            return current_app.ensure_sync(func)(*args, **kwargs)
        except AttributeError:  # pragma: no cover
            return func(*args, **kwargs)

    return decorated_view


def set_login_view(login_view, blueprint=None):
    """
    Sets the login view for the app or blueprint. If a blueprint is passed,
    the login view is set for this blueprint on ``blueprint_login_views``.

    :param login_view: The name of the login view or a URL for the login view.
    :type login_view: str
    :param blueprint: The blueprint which this login view should be set on.
        Defaults to ``None``.
    :type blueprint: object
    """

    num_login_views = len(current_app.login_manager.blueprint_login_views)
    if blueprint is not None or num_login_views != 0:
        current_app.login_manager.blueprint_login_views[blueprint.name] = login_view

        if (
            current_app.login_manager.login_view is not None
            and None not in current_app.login_manager.blueprint_login_views
        ):
            current_app.login_manager.blueprint_login_views[None] = (
                current_app.login_manager.login_view
            )

            current_app.login_manager.login_view = None
    else:
        current_app.login_manager.login_view = login_view


def _get_user():
    if has_request_context():
        if "flask_httpauth_user" in g:
            if g.flask_httpauth_user is not None:
                return g.flask_httpauth_user
        if "_login_user" not in g:
            current_app.login_manager._load_user()

        return g._login_user

    return None


def _cookie_digest(payload, key=None):
    key = _secret_key(key)

    return hmac.new(key, payload.encode("utf-8"), sha512).hexdigest()


def _get_remote_addr():
    address = request.headers.get("X-Forwarded-For", request.remote_addr)
    if address is not None:
        # An 'X-Forwarded-For' header includes a comma separated list of the
        # addresses, the first address being the actual remote address.
        address = address.encode("utf-8").split(b",")[0].strip()
    return address


def _create_identifier():
    user_agent = request.headers.get("User-Agent")
    if user_agent is not None:
        user_agent = user_agent.encode("utf-8")
    base = f"{_get_remote_addr()}|{user_agent}"
    if str is bytes:
        base = str(base, "utf-8", errors="replace")  # pragma: no cover
    h = sha512()
    h.update(base.encode("utf8"))
    return h.hexdigest()


def _user_context_processor():
    return dict(current_user=_get_user())


def _secret_key(key=None):
    if key is None:
        key = current_app.config["SECRET_KEY"]

    if isinstance(key, str):  # pragma: no cover
        key = key.encode("latin1")  # ensure bytes

    return key
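`encode_cookie` and `decode_cookie` above implement a classic sign-then-verify scheme: the payload travels in the clear, with an HMAC-SHA512 digest of it (keyed with the app's SECRET_KEY) appended after a `|`. A self-contained sketch of the same scheme using only the standard library — the `sign`/`verify` names and the key value are illustrative, not part of the module's API:

```python
import hmac
from hashlib import sha512


def sign(payload: str, key: bytes) -> str:
    digest = hmac.new(key, payload.encode("utf-8"), sha512).hexdigest()
    return f"{payload}|{digest}"


def verify(cookie: str, key: bytes):
    # Split on the LAST '|' so payloads containing '|' still round-trip,
    # just like decode_cookie's rsplit("|", 1).
    try:
        payload, digest = cookie.rsplit("|", 1)
    except ValueError:
        return None
    expected = hmac.new(key, payload.encode("utf-8"), sha512).hexdigest()
    # compare_digest avoids leaking the mismatch position via timing.
    return payload if hmac.compare_digest(expected, digest) else None


key = b"example-secret-key"
cookie = sign("42", key)
assert verify(cookie, key) == "42"
assert verify(cookie + "0", key) is None                      # tampered digest fails
assert verify(cookie.replace("42|", "43|", 1), key) is None   # tampered payload fails
```

Note the scheme authenticates but does not encrypt: anyone can read the payload, only the key holder can forge a valid digest.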
380
cps/db.py
@@ -20,11 +20,9 @@
import os
import re
import json
from datetime import datetime, timezone
from datetime import datetime
from urllib.parse import quote
import unidecode
# from weakref import WeakSet
from uuid import uuid4

from sqlite3 import OperationalError as sqliteOperationalError
from sqlalchemy import create_engine
@@ -42,14 +40,16 @@ except ImportError:
from sqlalchemy.pool import StaticPool
from sqlalchemy.sql.expression import and_, true, false, text, func, or_
from sqlalchemy.ext.associationproxy import association_proxy
from .cw_login import current_user
from flask_login import current_user
from flask_babel import gettext as _
from flask_babel import get_locale
from flask import flash, g, Flask
from flask import flash

from . import logger, ub, isoLanguages
from .pagination import Pagination
from .string_helper import strip_whitespaces

from weakref import WeakSet


log = logger.create()
@@ -102,19 +102,8 @@ class Identifiers(Base):
    type = Column(String(collation='NOCASE'), nullable=False, default="isbn")
    val = Column(String(collation='NOCASE'), nullable=False)
    book = Column(Integer, ForeignKey('books.id'), nullable=False)
    amazon = {
        "jp": "co.jp",
        "uk": "co.uk",
        "us": "com",
        "au": "com.au",
        "be": "com.be",
        "br": "com.br",
        "tr": "com.tr",
        "mx": "com.mx",
    }

    def __init__(self, val, id_type, book):
        super().__init__()
        self.val = val
        self.type = id_type
        self.book = book
@@ -122,97 +111,66 @@ class Identifiers(Base):
    def format_type(self):
        format_type = self.type.lower()
        if format_type == 'amazon':
            return "Amazon"
            return u"Amazon"
        elif format_type.startswith("amazon_"):
            label_amazon = "Amazon.{0}"
            country_code = format_type[7:].lower()
            if country_code not in self.amazon:
                return label_amazon.format(country_code)
            return label_amazon.format(self.amazon[country_code])
            return u"Amazon.{0}".format(format_type[7:])
        elif format_type == "isbn":
            return "ISBN"
            return u"ISBN"
        elif format_type == "doi":
            return "DOI"
            return u"DOI"
        elif format_type == "douban":
            return "Douban"
            return u"Douban"
        elif format_type == "goodreads":
            return "Goodreads"
            return u"Goodreads"
        elif format_type == "babelio":
            return "Babelio"
            return u"Babelio"
        elif format_type == "google":
            return "Google Books"
            return u"Google Books"
        elif format_type == "kobo":
            return "Kobo"
        elif format_type == "barnesnoble":
            return "Barnes & Noble"
            return u"Kobo"
        elif format_type == "litres":
            return "ЛитРес"
            return u"ЛитРес"
        elif format_type == "issn":
            return "ISSN"
            return u"ISSN"
        elif format_type == "isfdb":
            return "ISFDB"
        elif format_type == "storygraph":
            return "StoryGraph"
        elif format_type == "ebooks":
            return "eBooks.com"
        elif format_type == "smashwords":
            return "Smashwords"
            return u"ISFDB"
        if format_type == "lubimyczytac":
            return "Lubimyczytac"
        if format_type == "databazeknih":
            return "Databáze knih"
            return u"Lubimyczytac"
        else:
            return self.type

    def __repr__(self):
        format_type = self.type.lower()
        if format_type == "amazon" or format_type == "asin":
            return "https://amazon.com/dp/{0}".format(self.val)
            return u"https://amazon.com/dp/{0}".format(self.val)
        elif format_type.startswith('amazon_'):
            link_amazon = "https://amazon.{0}/dp/{1}"
            country_code = format_type[7:].lower()
            if country_code not in self.amazon:
                return link_amazon.format(country_code, self.val)
            return link_amazon.format(self.amazon[country_code], self.val)
            return u"https://amazon.{0}/dp/{1}".format(format_type[7:], self.val)
        elif format_type == "isbn":
            return "https://www.worldcat.org/isbn/{0}".format(self.val)
            return u"https://www.worldcat.org/isbn/{0}".format(self.val)
        elif format_type == "doi":
            return "https://dx.doi.org/{0}".format(self.val)
            return u"https://dx.doi.org/{0}".format(self.val)
        elif format_type == "goodreads":
            return "https://www.goodreads.com/book/show/{0}".format(self.val)
            return u"https://www.goodreads.com/book/show/{0}".format(self.val)
        elif format_type == "babelio":
            return "https://www.babelio.com/livres/titre/{0}".format(self.val)
            return u"https://www.babelio.com/livres/titre/{0}".format(self.val)
        elif format_type == "douban":
            return "https://book.douban.com/subject/{0}".format(self.val)
            return u"https://book.douban.com/subject/{0}".format(self.val)
        elif format_type == "google":
            return "https://books.google.com/books?id={0}".format(self.val)
            return u"https://books.google.com/books?id={0}".format(self.val)
        elif format_type == "kobo":
            return "https://www.kobo.com/ebook/{0}".format(self.val)
        elif format_type == "barnesnoble":
            return "https://www.barnesandnoble.com/w/{0}".format(self.val)
            return u"https://www.kobo.com/ebook/{0}".format(self.val)
        elif format_type == "lubimyczytac":
            return "https://lubimyczytac.pl/ksiazka/{0}/ksiazka".format(self.val)
            return u"https://lubimyczytac.pl/ksiazka/{0}/ksiazka".format(self.val)
        elif format_type == "litres":
            return "https://www.litres.ru/{0}".format(self.val)
            return u"https://www.litres.ru/{0}".format(self.val)
        elif format_type == "issn":
            return "https://portal.issn.org/resource/ISSN/{0}".format(self.val)
            return u"https://portal.issn.org/resource/ISSN/{0}".format(self.val)
        elif format_type == "isfdb":
            return "https://www.isfdb.org/cgi-bin/pl.cgi?{0}".format(self.val)
        elif format_type == "databazeknih":
            return "https://www.databazeknih.cz/knihy/{0}".format(self.val)
        elif format_type == "storygraph":
            return "https://app.thestorygraph.com/books/{0}".format(self.val)
        elif format_type == "ebooks":
            return "https://www.ebooks.com/en-us/book/{0}".format(self.val)
        elif format_type == "smashwords":
            return "https://www.smashwords.com/books/view/{0}".format(self.val)
            return u"http://www.isfdb.org/cgi-bin/pl.cgi?{0}".format(self.val)
        elif self.val.lower().startswith("javascript:"):
            return quote(self.val)
        elif self.val.lower().startswith("data:"):
            link, __, __ = str.partition(self.val, ",")
            return link
        else:
            return "{0}".format(self.val)
            return u"{0}".format(self.val)


class Comments(Base):
@@ -223,7 +181,6 @@ class Comments(Base):
    text = Column(String(collation='NOCASE'), nullable=False)

    def __init__(self, comment, book):
        super().__init__()
        self.text = comment
        self.book = book

@@ -231,7 +188,7 @@ class Comments(Base):
        return self.text

    def __repr__(self):
        return "<Comments({0})>".format(self.text)
        return u"<Comments({0})>".format(self.text)


class Tags(Base):
@@ -241,17 +198,13 @@ class Tags(Base):
    name = Column(String(collation='NOCASE'), unique=True, nullable=False)

    def __init__(self, name):
        super().__init__()
        self.name = name

    def get(self):
        return self.name

    def __eq__(self, other):
        return self.name == other

    def __repr__(self):
        return "<Tags('{0})>".format(self.name)
        return u"<Tags('{0})>".format(self.name)


class Authors(Base):
@@ -262,8 +215,7 @@ class Authors(Base):
    sort = Column(String(collation='NOCASE'))
    link = Column(String, nullable=False, default="")

    def __init__(self, name, sort, link=""):
        super().__init__()
    def __init__(self, name, sort, link):
        self.name = name
        self.sort = sort
        self.link = link
@@ -271,11 +223,8 @@ class Authors(Base):
    def get(self):
        return self.name

    def __eq__(self, other):
        return self.name == other

    def __repr__(self):
        return "<Authors('{0},{1}{2}')>".format(self.name, self.sort, self.link)
        return u"<Authors('{0},{1}{2}')>".format(self.name, self.sort, self.link)


class Series(Base):
@@ -286,18 +235,14 @@ class Series(Base):
    sort = Column(String(collation='NOCASE'))

    def __init__(self, name, sort):
        super().__init__()
        self.name = name
        self.sort = sort

    def get(self):
        return self.name

    def __eq__(self, other):
        return self.name == other

    def __repr__(self):
        return "<Series('{0},{1}')>".format(self.name, self.sort)
        return u"<Series('{0},{1}')>".format(self.name, self.sort)


class Ratings(Base):
@@ -307,17 +252,13 @@ class Ratings(Base):
    rating = Column(Integer, CheckConstraint('rating>-1 AND rating<11'), unique=True)

    def __init__(self, rating):
        super().__init__()
        self.rating = rating

    def get(self):
        return self.rating

    def __eq__(self, other):
        return self.rating == other

    def __repr__(self):
        return "<Ratings('{0}')>".format(self.rating)
        return u"<Ratings('{0}')>".format(self.rating)


class Languages(Base):
@@ -327,20 +268,16 @@ class Languages(Base):
    lang_code = Column(String(collation='NOCASE'), nullable=False, unique=True)

    def __init__(self, lang_code):
        super().__init__()
        self.lang_code = lang_code

    def get(self):
        if hasattr(self, "language_name"):
        if self.language_name:
            return self.language_name
        else:
            return self.lang_code

    def __eq__(self, other):
        return self.lang_code == other

    def __repr__(self):
        return "<Languages('{0}')>".format(self.lang_code)
        return u"<Languages('{0}')>".format(self.lang_code)


class Publishers(Base):
@@ -351,18 +288,14 @@ class Publishers(Base):
    sort = Column(String(collation='NOCASE'))

    def __init__(self, name, sort):
        super().__init__()
        self.name = name
        self.sort = sort

    def get(self):
        return self.name

    def __eq__(self, other):
        return self.name == other

    def __repr__(self):
        return "<Publishers('{0},{1}')>".format(self.name, self.sort)
        return u"<Publishers('{0},{1}')>".format(self.name, self.sort)


class Data(Base):
@@ -376,7 +309,6 @@ class Data(Base):
    name = Column(String, nullable=False)

    def __init__(self, book, book_format, uncompressed_size, name):
        super().__init__()
        self.book = book
        self.format = book_format
        self.uncompressed_size = uncompressed_size
@@ -387,17 +319,7 @@ class Data(Base):
        return self.name

    def __repr__(self):
        return "<Data('{0},{1}{2}{3}')>".format(self.book, self.format, self.uncompressed_size, self.name)


class Metadata_Dirtied(Base):
    __tablename__ = 'metadata_dirtied'
    id = Column(Integer, primary_key=True, autoincrement=True)
    book = Column(Integer, ForeignKey('books.id'), nullable=False, unique=True)

    def __init__(self, book):
        super().__init__()
        self.book = book
        return u"<Data('{0},{1}{2}{3}')>".format(self.book, self.format, self.uncompressed_size, self.name)


class Books(Base):
@@ -409,10 +331,10 @@ class Books(Base):
    title = Column(String(collation='NOCASE'), nullable=False, default='Unknown')
    sort = Column(String(collation='NOCASE'))
    author_sort = Column(String(collation='NOCASE'))
    timestamp = Column(TIMESTAMP, default=lambda: datetime.now(timezone.utc))
    timestamp = Column(TIMESTAMP, default=datetime.utcnow)
    pubdate = Column(TIMESTAMP, default=DEFAULT_PUBDATE)
    series_index = Column(String, nullable=False, default="1.0")
    last_modified = Column(TIMESTAMP, default=lambda: datetime.now(timezone.utc))
    last_modified = Column(TIMESTAMP, default=datetime.utcnow)
    path = Column(String, default="", nullable=False)
    has_cover = Column(Integer, default=0)
    uuid = Column(String)
@@ -431,7 +353,6 @@ class Books(Base):

    def __init__(self, title, sort, author_sort, timestamp, pubdate, series_index, last_modified, path, has_cover,
                 authors, tags, languages=None):
        super().__init__()
        self.title = title
        self.sort = sort
        self.author_sort = author_sort
@@ -440,12 +361,12 @@ class Books(Base):
        self.series_index = series_index
        self.last_modified = last_modified
        self.path = path
        self.has_cover = (has_cover is not None)
        self.has_cover = (has_cover != None)

    def __repr__(self):
        return "<Books('{0},{1}{2}{3}{4}{5}{6}{7}{8}')>".format(self.title, self.sort, self.author_sort,
                                                                self.timestamp, self.pubdate, self.series_index,
                                                                self.last_modified, self.path, self.has_cover)
        return u"<Books('{0},{1}{2}{3}{4}{5}{6}{7}{8}')>".format(self.title, self.sort, self.author_sort,
                                                                 self.timestamp, self.pubdate, self.series_index,
                                                                 self.last_modified, self.path, self.has_cover)

    @property
    def atom_timestamp(self):
@@ -469,35 +390,6 @@ class CustomColumns(Base):
        display_dict = json.loads(self.display)
        return display_dict

    def to_json(self, value, extra, sequence):
        content = dict()
        content['table'] = "custom_column_" + str(self.id)
        content['column'] = "value"
        content['datatype'] = self.datatype
        content['is_multiple'] = None if not self.is_multiple else "|"
        content['kind'] = "field"
        content['name'] = self.name
        content['search_terms'] = ['#' + self.label]
        content['label'] = self.label
        content['colnum'] = self.id
        content['display'] = self.get_display_dict()
        content['is_custom'] = True
        content['is_category'] = self.datatype in ['text', 'rating', 'enumeration', 'series']
        content['link_column'] = "value"
        content['category_sort'] = "value"
        content['is_csp'] = False
        content['is_editable'] = self.editable
        content['rec_index'] = sequence + 22  # toDo why ??
        if isinstance(value, datetime):
            content['#value#'] = {"__class__": "datetime.datetime",
                                  "__value__": value.strftime("%Y-%m-%dT%H:%M:%S+00:00")}
        else:
            content['#value#'] = value
        content['#extra#'] = extra
        content['is_multiple2'] = {} if not self.is_multiple else {"cache_to_list": "|", "ui_to_list": ",",
                                                                   "list_to_ui": ", "}
        return json.dumps(content, ensure_ascii=False)


class AlchemyEncoder(json.JSONEncoder):

@@ -540,25 +432,35 @@ class AlchemyEncoder(json.JSONEncoder):


class CalibreDB:
    _init = False
    engine = None
    config = None
    config_calibre_dir = None
    app_db_path = None
    session_factory = None
    # This is a WeakSet so that references here don't keep other CalibreDB
    # instances alive once they reach the end of their respective scopes
    instances = WeakSet()

    def __init__(self, _app: Flask = None):  # , expire_on_commit=True, init=False):
    def __init__(self, expire_on_commit=True, init=False):
        """ Initialize a new CalibreDB session
        """
        self.Session = None
        # if init:
        #     self.init_db(expire_on_commit)
        if _app is not None and not _app._got_first_request:
            self.init_app(_app)
        self.session = None
        if init:
            self.init_db(expire_on_commit)

    def init_app(self, _app):
        _app.teardown_appcontext(self.teardown)

    def init_db(self, expire_on_commit=True):
        if self._init:
            self.init_session(expire_on_commit)

        self.instances.add(self)

    def init_session(self, expire_on_commit=True):
        self.session = self.session_factory()
        self.session.expire_on_commit = expire_on_commit
        self.update_title_sort(self.config)

    @classmethod
    def setup_db_cc_classes(cls, cc):
        global cc_classes
        cc_ids = []
        books_custom_column_links = {}
        for row in cc:
@@ -626,6 +528,8 @@ class CalibreDB:
                                               secondary=books_custom_column_links[cc_id[0]],
                                               backref='books'))

        return cc_classes

    @classmethod
    def check_valid_db(cls, config_calibre_dir, app_db_path, config_calibre_uuid):
        if not config_calibre_dir:
@@ -645,6 +549,7 @@ class CalibreDB:
            local_session = scoped_session(sessionmaker())
            local_session.configure(bind=connection)
            database_uuid = local_session().query(Library_Id).one_or_none()
            # local_session.dispose()

        check_engine.connect()
        db_change = config_calibre_uuid != database_uuid.uuid
@@ -652,30 +557,13 @@ class CalibreDB:
            return False, False
        return True, db_change

    def teardown(self, exception):
        ctx = g.get("lib_sql")
        if ctx:
            ctx.close()

    @property
    def session(self):
        # connect or get active connection
        if not g.get("lib_sql"):
            g.lib_sql = self.connect()
        return g.lib_sql

    @classmethod
    def update_config(cls, config, config_calibre_dir, app_db_path):
    def update_config(cls, config):
        cls.config = config
        cls.config_calibre_dir = config_calibre_dir
        cls.app_db_path = app_db_path
|
||||
|
||||
|
||||
def connect(self):
|
||||
return self.setup_db(self.config_calibre_dir, self.app_db_path)
|
||||
|
||||
@classmethod
|
||||
def setup_db(cls, config_calibre_dir, app_db_path):
|
||||
cls.dispose()
|
||||
|
||||
if not config_calibre_dir:
|
||||
cls.config.invalidate()
|
||||
@ -687,17 +575,16 @@ class CalibreDB:
|
||||
return None
|
||||
|
||||
try:
|
||||
engine = create_engine('sqlite://',
|
||||
cls.engine = create_engine('sqlite://',
|
||||
echo=False,
|
||||
isolation_level="SERIALIZABLE",
|
||||
connect_args={'check_same_thread': False},
|
||||
poolclass=StaticPool)
|
||||
with engine.begin() as connection:
|
||||
connection.execute(text('PRAGMA cache_size = 10000;'))
|
||||
with cls.engine.begin() as connection:
|
||||
connection.execute(text("attach database '{}' as calibre;".format(dbpath)))
|
||||
connection.execute(text("attach database '{}' as app_settings;".format(app_db_path)))
|
||||
|
||||
conn = engine.connect()
|
||||
conn = cls.engine.connect()
|
||||
# conn.text_factory = lambda b: b.decode(errors = 'ignore') possible fix for #1302
|
||||
except Exception as ex:
|
||||
cls.config.invalidate(ex)
|
||||
@ -713,10 +600,13 @@ class CalibreDB:
|
||||
log.error_or_exception(e)
|
||||
return None
|
||||
|
||||
return scoped_session(sessionmaker(autocommit=False,
|
||||
autoflush=False,
|
||||
bind=engine, future=True))
|
||||
cls.session_factory = scoped_session(sessionmaker(autocommit=False,
|
||||
autoflush=True,
|
||||
bind=cls.engine))
|
||||
for inst in cls.instances:
|
||||
inst.init_session()
|
||||
|
||||
cls._init = True
|
||||
|
||||
def get_book(self, book_id):
|
||||
return self.session.query(Books).filter(Books.id == book_id).first()
|
||||
@ -751,24 +641,12 @@ class CalibreDB:
|
||||
def get_book_format(self, book_id, file_format):
|
||||
return self.session.query(Data).filter(Data.book == book_id).filter(Data.format == file_format).first()
|
||||
|
||||
def set_metadata_dirty(self, book_id):
|
||||
if not self.session.query(Metadata_Dirtied).filter(Metadata_Dirtied.book == book_id).one_or_none():
|
||||
self.session.add(Metadata_Dirtied(book_id))
|
||||
|
||||
def delete_dirty_metadata(self, book_id):
|
||||
try:
|
||||
self.session.query(Metadata_Dirtied).filter(Metadata_Dirtied.book == book_id).delete()
|
||||
self.session.commit()
|
||||
except (OperationalError) as e:
|
||||
self.session.rollback()
|
||||
log.error("Database error: {}".format(e))
|
||||
|
||||
# Language and content filters for displaying in the UI
|
||||
def common_filters(self, allow_show_archived=False, return_all_languages=False):
|
||||
if not allow_show_archived:
|
||||
archived_books = (ub.session.query(ub.ArchivedBook)
|
||||
.filter(ub.ArchivedBook.user_id==int(current_user.id))
|
||||
.filter(ub.ArchivedBook.is_archived==True)
|
||||
.filter(ub.ArchivedBook.user_id == int(current_user.id))
|
||||
.filter(ub.ArchivedBook.is_archived == True)
|
||||
.all())
|
||||
archived_book_ids = [archived_book.book_id for archived_book in archived_books]
|
||||
archived_filter = Books.id.notin_(archived_book_ids)
|
||||
@ -888,7 +766,8 @@ class CalibreDB:
|
||||
entries = list()
|
||||
pagination = list()
|
||||
try:
|
||||
pagination = Pagination(page, pagesize, query.count())
|
||||
pagination = Pagination(page, pagesize,
|
||||
len(query.all()))
|
||||
entries = query.order_by(*order).offset(off).limit(pagesize).all()
|
||||
except Exception as ex:
|
||||
log.error_or_exception(ex)
|
||||
@ -898,6 +777,8 @@ class CalibreDB:
|
||||
|
||||
# Orders all Authors in the list according to authors sort
|
||||
def order_authors(self, entries, list_return=False, combined=False):
|
||||
# entries_copy = copy.deepcopy(entries)
|
||||
# entries_copy =[]
|
||||
for entry in entries:
|
||||
if combined:
|
||||
sort_authors = entry.Books.author_sort.split('&')
|
||||
@ -909,12 +790,10 @@ class CalibreDB:
|
||||
authors_ordered = list()
|
||||
# error = False
|
||||
for auth in sort_authors:
|
||||
auth = strip_whitespaces(auth)
|
||||
results = self.session.query(Authors).filter(Authors.sort == auth).all()
|
||||
results = self.session.query(Authors).filter(Authors.sort == auth.lstrip().strip()).all()
|
||||
# ToDo: How to handle not found author name
|
||||
if not len(results):
|
||||
book_id = entry.id if isinstance(entry, Books) else entry[0].id
|
||||
log.error("Author '{}' of book {} not found to display name in right order".format(auth, book_id))
|
||||
log.error("Author {} not found to display name in right order".format(auth.strip()))
|
||||
# error = True
|
||||
break
|
||||
for r in results:
|
||||
@ -936,8 +815,7 @@ class CalibreDB:
|
||||
|
||||
def get_typeahead(self, database, query, replace=('', ''), tag_filter=true()):
|
||||
query = query or ''
|
||||
self.create_functions()
|
||||
# self.session.connection().connection.connection.create_function("lower", 1, lcase)
|
||||
self.session.connection().connection.connection.create_function("lower", 1, lcase)
|
||||
entries = self.session.query(database).filter(tag_filter). \
|
||||
filter(func.lower(database.name).ilike("%" + query + "%")).all()
|
||||
# json_dumps = json.dumps([dict(name=escape(r.name.replace(*replace))) for r in entries])
|
||||
@ -945,8 +823,7 @@ class CalibreDB:
|
||||
return json_dumps
|
||||
|
||||
def check_exists_book(self, authr, title):
|
||||
self.create_functions()
|
||||
# self.session.connection().connection.connection.create_function("lower", 1, lcase)
|
||||
self.session.connection().connection.connection.create_function("lower", 1, lcase)
|
||||
q = list()
|
||||
author_terms = re.split(r'\s*&\s*', authr)
|
||||
for author_term in author_terms:
|
||||
@ -956,9 +833,8 @@ class CalibreDB:
|
||||
.filter(and_(Books.authors.any(and_(*q)), func.lower(Books.title).ilike("%" + title + "%"))).first()
|
||||
|
||||
def search_query(self, term, config, *join):
|
||||
strip_whitespaces(term).lower()
|
||||
self.create_functions()
|
||||
# self.session.connection().connection.connection.create_function("lower", 1, lcase)
|
||||
term.strip().lower()
|
||||
self.session.connection().connection.connection.create_function("lower", 1, lcase)
|
||||
q = list()
|
||||
author_terms = re.split("[, ]+", term)
|
||||
for author_term in author_terms:
|
||||
@ -1009,7 +885,7 @@ class CalibreDB:
|
||||
pagination = None
|
||||
result = self.search_query(term, config, *join).order_by(*order).all()
|
||||
result_count = len(result)
|
||||
if offset is not None and limit is not None:
|
||||
if offset != None and limit != None:
|
||||
offset = int(offset)
|
||||
limit_all = offset + int(limit)
|
||||
pagination = Pagination((offset / (int(limit)) + 1), limit, result_count)
|
||||
@ -1039,7 +915,7 @@ class CalibreDB:
|
||||
if not return_all_languages:
|
||||
no_lang_count = (self.session.query(Books)
|
||||
.outerjoin(books_languages_link).outerjoin(Languages)
|
||||
.filter(Languages.lang_code==None)
|
||||
.filter(Languages.lang_code == None)
|
||||
.filter(self.common_filters())
|
||||
.count())
|
||||
if no_lang_count:
|
||||
@ -1056,7 +932,7 @@ class CalibreDB:
|
||||
lang.name = isoLanguages.get_language_name(get_locale(), lang.lang_code)
|
||||
return sorted(languages, key=lambda x: x.name, reverse=reverse_order)
|
||||
|
||||
def create_functions(self, config=None):
|
||||
def update_title_sort(self, config, conn=None):
|
||||
# user defined sort function for calibre databases (Series, etc.)
|
||||
def _title_sort(title):
|
||||
# calibre sort stuff
|
||||
@ -1065,27 +941,51 @@ class CalibreDB:
|
||||
if match:
|
||||
prep = match.group(1)
|
||||
title = title[len(prep):] + ', ' + prep
|
||||
return strip_whitespaces(title)
|
||||
return title.strip()
|
||||
|
||||
conn = conn or self.session.connection().connection.connection
|
||||
try:
|
||||
# sqlalchemy <1.4.24 and sqlalchemy 2.0
|
||||
conn = self.session.connection().connection.driver_connection
|
||||
except AttributeError:
|
||||
# sqlalchemy >1.4.24
|
||||
conn = self.session.connection().connection.connection
|
||||
try:
|
||||
if config:
|
||||
conn.create_function("title_sort", 1, _title_sort)
|
||||
conn.create_function('uuid4', 0, lambda: str(uuid4()))
|
||||
conn.create_function("lower", 1, lcase)
|
||||
conn.create_function("title_sort", 1, _title_sort)
|
||||
except sqliteOperationalError:
|
||||
pass
|
||||
|
||||
@classmethod
|
||||
def dispose(cls):
|
||||
# global session
|
||||
|
||||
for inst in cls.instances:
|
||||
old_session = inst.session
|
||||
inst.session = None
|
||||
if old_session:
|
||||
try:
|
||||
old_session.close()
|
||||
except Exception:
|
||||
pass
|
||||
if old_session.bind:
|
||||
try:
|
||||
old_session.bind.dispose()
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
for attr in list(Books.__dict__.keys()):
|
||||
if attr.startswith("custom_column_"):
|
||||
setattr(Books, attr, None)
|
||||
|
||||
for db_class in cc_classes.values():
|
||||
Base.metadata.remove(db_class.__table__)
|
||||
cc_classes.clear()
|
||||
|
||||
for table in reversed(Base.metadata.sorted_tables):
|
||||
name = table.key
|
||||
if name.startswith("custom_column_") or name.startswith("books_custom_column_"):
|
||||
if table is not None:
|
||||
Base.metadata.remove(table)
|
||||
|
||||
def reconnect_db(self, config, app_db_path):
|
||||
# self.dispose()
|
||||
# self.engine.dispose()
|
||||
self.dispose()
|
||||
self.engine.dispose()
|
||||
self.setup_db(config.config_calibre_dir, app_db_path)
|
||||
self.update_config(config, config.config_calibre_dir, app_db_path)
|
||||
self.update_config(config)
|
||||
|
||||
|
||||
def lcase(s):
|
||||
@ -1108,3 +1008,9 @@ class Category:
|
||||
self.id = cat_id
|
||||
self.rating = rating
|
||||
self.count = 1
|
||||
|
||||
'''class Count:
|
||||
count = None
|
||||
|
||||
def __init__(self, count):
|
||||
self.count = count'''
|
||||
|
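The `setup_db` and `create_functions` changes above boil down to two SQLite techniques: attaching the on-disk `metadata.db` to an in-memory connection under a schema name, and registering Python callables as SQL functions (`title_sort`, `lower`, `uuid4`). A minimal stdlib-only sketch of both, using made-up table contents and a simplified `title_sort`:

```python
import os
import sqlite3
import tempfile

# Build a throwaway "library" database on disk (stands in for metadata.db).
tmp = tempfile.mkdtemp()
db_path = os.path.join(tmp, "metadata.db")
with sqlite3.connect(db_path) as seed:
    seed.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, sort TEXT)")
    seed.execute("INSERT INTO books (sort) VALUES ('The Title')")

# Like CalibreDB.setup_db: start from an in-memory connection and ATTACH the
# library file under a schema name, so queries can use the 'calibre.' prefix.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("ATTACH DATABASE ? AS calibre", (db_path,))

# Like create_functions: expose a Python callable to SQL. This simplified
# sort only strips a leading English article; the real one is regex-driven.
def title_sort(title):
    for prep in ("The ", "A ", "An "):
        if title.startswith(prep):
            return title[len(prep):] + ", " + prep.strip()
    return title

conn.create_function("title_sort", 1, title_sort)
row = conn.execute("SELECT title_sort(sort) FROM calibre.books").fetchone()
print(row[0])  # 'Title, The'
```

Registering the function on the raw DBAPI connection (rather than through the ORM) is what the `driver_connection` / `connection` fallback in the diff is reaching for across SQLAlchemy versions.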
@ -23,17 +23,16 @@ import zipfile
import json
from io import BytesIO
from flask_babel.speaklater import LazyString
from importlib.metadata import metadata

import os

from flask import send_file
from flask import send_file, __version__

from . import logger, config
from .about import collect_stats

log = logger.create()

class lazyEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, LazyString):
@ -41,7 +40,6 @@ class lazyEncoder(json.JSONEncoder):
# Let the base class default method raise the TypeError
return json.JSONEncoder.default(self, obj)

def assemble_logfiles(file_name):
log_list = sorted(glob.glob(file_name + '*'), reverse=True)
wfd = BytesIO()
@ -49,8 +47,7 @@ def assemble_logfiles(file_name):
with open(f, 'rb') as fd:
shutil.copyfileobj(fd, wfd)
wfd.seek(0)
version = metadata("flask")["Version"]
if int(version.split('.')[0]) < 2:
if int(__version__.split('.')[0]) < 2:
return send_file(wfd,
as_attachment=True,
attachment_filename=os.path.basename(file_name))
@ -68,13 +65,12 @@ def send_debug():
file_list.remove(element)
memory_zip = BytesIO()
with zipfile.ZipFile(memory_zip, 'w', compression=zipfile.ZIP_DEFLATED) as zf:
zf.writestr('settings.txt', json.dumps(config.to_dict(), sort_keys=True, indent=2))
zf.writestr('settings.txt', json.dumps(config.toDict(), sort_keys=True, indent=2))
zf.writestr('libs.txt', json.dumps(collect_stats(), sort_keys=True, indent=2, cls=lazyEncoder))
for fp in file_list:
zf.write(fp, os.path.basename(fp))
memory_zip.seek(0)
version = metadata("flask")["Version"]
if int(version.split('.')[0]) < 2:
if int(__version__.split('.')[0]) < 2:
return send_file(memory_zip,
as_attachment=True,
attachment_filename="Calibre-Web-debug-pack.zip")
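`assemble_logfiles` above concatenates the rotated log files into one in-memory buffer before streaming it (the Flask major-version check exists because `send_file`'s `attachment_filename` keyword was renamed `download_name` in Flask 2.0). The concatenation step alone can be sketched with the stdlib, using hypothetical log names:

```python
import glob
import os
import shutil
import tempfile
from io import BytesIO

# Create a few fake rotated logs: calibre-web.log, .log.1, .log.2 (oldest).
tmp = tempfile.mkdtemp()
base = os.path.join(tmp, "calibre-web.log")
for suffix, text in (("", "newest\n"), (".1", "older\n"), (".2", "oldest\n")):
    with open(base + suffix, "w") as f:
        f.write(text)

# Like assemble_logfiles: reverse-sorted glob puts the oldest rotation first,
# so the combined buffer reads chronologically; send_file could stream it.
wfd = BytesIO()
for name in sorted(glob.glob(base + "*"), reverse=True):
    with open(name, "rb") as fd:
        shutil.copyfileobj(fd, wfd)
wfd.seek(0)
print(wfd.read().decode())  # oldest, older, newest concatenated
```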
@ -39,24 +39,8 @@ def load_dependencies(optional=False):
with open(req_path, 'r') as f:
for line in f:
if not line.startswith('#') and not line == '\n' and not line.startswith('git'):
res = re.match(r'(.*?)([<=>\s]+)([\d\.]+),?\s?([<=>\s]+)?([\d\.]+)?(?:\s?;\s?'
r'(?:(python_version)\s?([<=>]+)\s?\'([\d\.]+)\'|'
r'(sys_platform)\s?([\!=]+)\s?\'([\w]+)\'))?', line.strip())
res = re.match(r'(.*?)([<=>\s]+)([\d\.]+),?\s?([<=>\s]+)?([\d\.]+)?', line.strip())
try:
if res.group(7) and res.group(8):
val = res.group(8).split(".")
if not eval(str(sys.version_info[0]) + "." + "{:02d}".format(sys.version_info[1]) +
res.group(7) + val[0] + "." + "{:02d}".format(int(val[1]))):
continue
elif res.group(10) and res.group(11):
# only installed if platform is equal, don't check if platform is not equal
if res.group(10) == "==":
if sys.platform != res.group(11):
continue
# installed if platform is not equal, don't check if platform is equal
elif res.group(10) == "!=":
if sys.platform == res.group(11):
continue
if getattr(sys, 'frozen', False):
dep_version = exe_deps[res.group(1).lower().replace('_', '-')]
else:
@ -74,12 +58,10 @@ def load_dependencies(optional=False):

def dependency_check(optional=False):
d = list()
dep_version_int = None
low_check = None
deps = load_dependencies(optional)
for dep in deps:
try:
dep_version_int = [int(x) if x.isnumeric() else 0 for x in dep[0].split('.')[:3]]
dep_version_int = [int(x) for x in dep[0].split('.')]
low_check = [int(x) for x in dep[3].split('.')]
high_check = [int(x) for x in dep[5].split('.')]
except AttributeError:
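The widened regex in `load_dependencies` additionally captures PEP 508 environment markers (`python_version`, `sys_platform`) so a requirement can be skipped on interpreters or platforms it does not target. The idea can be sketched with a simpler, hypothetical pattern (only `>=` bounds, only `sys_platform` markers actually evaluated):

```python
import re
import sys

# Hypothetical simplified pattern: package name, lower bound, optional marker.
PATTERN = re.compile(
    r"(?P<name>[\w\-\.]+)\s*>=\s*(?P<low>[\d\.]+)"
    r"(?:\s*;\s*(?P<mkey>python_version|sys_platform)\s*"
    r"(?P<mop>==|!=)\s*'(?P<mval>[^']+)')?"
)

def applies(line):
    """Return (name, low) if the requirement applies here, else None."""
    m = PATTERN.match(line.strip())
    if not m:
        return None
    # python_version markers are accepted unchecked in this sketch.
    if m.group("mkey") == "sys_platform":
        matched = sys.platform == m.group("mval")
        if (m.group("mop") == "==") != matched:
            return None
    return m.group("name"), m.group("low")

print(applies("lxml >= 4.9.1"))
print(applies("pywin32 >= 220 ; sys_platform == 'win32'"))  # None off Windows
```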
963 cps/editbooks.py (Normal file → Executable file)
@ -1,61 +0,0 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2024 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from uuid import uuid4
import os

from .file_helper import get_temp_dir
from .subproc_wrapper import process_open
from . import logger, config
from .constants import SUPPORTED_CALIBRE_BINARIES

log = logger.create()

def do_calibre_export(book_id, book_format):
try:
quotes = [4, 6]
tmp_dir = get_temp_dir()
calibredb_binarypath = get_calibre_binarypath("calibredb")
temp_file_name = str(uuid4())
my_env = os.environ.copy()
if config.config_calibre_split:
my_env['CALIBRE_OVERRIDE_DATABASE_PATH'] = os.path.join(config.config_calibre_dir, "metadata.db")
library_path = config.get_book_path()
opf_command = [calibredb_binarypath, 'export', '--dont-write-opf', '--with-library', library_path,
'--to-dir', tmp_dir, '--formats', book_format, "--template", "{}".format(temp_file_name),
str(book_id)]
p = process_open(opf_command, quotes, my_env)
_, err = p.communicate()
if err:
log.error('Metadata embedder encountered an error: %s', err)
return tmp_dir, temp_file_name
except OSError as ex:
# ToDo real error handling
log.error_or_exception(ex)
return None, None

def get_calibre_binarypath(binary):
binariesdir = config.config_binariesdir
if binariesdir:
try:
return os.path.join(binariesdir, SUPPORTED_CALIBRE_BINARIES[binary])
except KeyError as ex:
log.error("Binary not supported by Calibre-Web: %s", SUPPORTED_CALIBRE_BINARIES[binary])
pass
return ""
143 cps/epub.py
@ -21,59 +21,41 @@ import zipfile
from lxml import etree

from . import isoLanguages, cover
from . import config, logger
from .helper import split_authors
from .epub_helper import get_content_opf, default_ns
from .constants import BookMeta
from .string_helper import strip_whitespaces

log = logger.create()

def _extract_cover(zip_file, cover_file, cover_path, tmp_file_name):
if cover_file is None:
return None

cf = extension = None
zip_cover_path = os.path.join(cover_path, cover_file).replace('\\', '/')

prefix = os.path.splitext(tmp_file_name)[0]
tmp_cover_name = prefix + '.' + os.path.basename(zip_cover_path)
ext = os.path.splitext(tmp_cover_name)
if len(ext) > 1:
extension = ext[1].lower()
if extension in cover.COVER_EXTENSIONS:
cf = zip_file.read(zip_cover_path)
return cover.cover_processing(tmp_file_name, cf, extension)

def get_epub_layout(book, book_data):
file_path = os.path.normpath(os.path.join(config.get_book_path(),
book.path, book_data.name + "." + book_data.format.lower()))

try:
tree, __ = get_content_opf(file_path, default_ns)
p = tree.xpath('/pkg:package/pkg:metadata', namespaces=default_ns)[0]

layout = p.xpath('pkg:meta[@property="rendition:layout"]/text()', namespaces=default_ns)
except (etree.XMLSyntaxError, KeyError, IndexError, OSError, UnicodeDecodeError) as e:
log.error("Could not parse epub metadata of book {} during kobo sync: {}".format(book.id, e))
layout = []

if len(layout) == 0:
return None
else:
return layout[0]
cf = extension = None
zip_cover_path = os.path.join(cover_path, cover_file).replace('\\', '/')

prefix = os.path.splitext(tmp_file_name)[0]
tmp_cover_name = prefix + '.' + os.path.basename(zip_cover_path)
ext = os.path.splitext(tmp_cover_name)
if len(ext) > 1:
extension = ext[1].lower()
if extension in cover.COVER_EXTENSIONS:
cf = zip_file.read(zip_cover_path)
return cover.cover_processing(tmp_file_name, cf, extension)

def get_epub_info(tmp_file_path, original_file_name, original_file_extension, no_cover_processing):
def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
ns = {
'n': 'urn:oasis:names:tc:opendocument:xmlns:container',
'pkg': 'http://www.idpf.org/2007/opf',
'dc': 'http://purl.org/dc/elements/1.1/'
}

tree, cf_name = get_content_opf(tmp_file_path, ns)
epub_zip = zipfile.ZipFile(tmp_file_path)

txt = epub_zip.read('META-INF/container.xml')
tree = etree.fromstring(txt)
cf_name = tree.xpath('n:rootfiles/n:rootfile/@full-path', namespaces=ns)[0]
cf = epub_zip.read(cf_name)
tree = etree.fromstring(cf)

cover_path = os.path.dirname(cf_name)

@ -91,20 +73,20 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension, no
elif s == 'date':
epub_metadata[s] = tmp[0][:10]
else:
epub_metadata[s] = strip_whitespaces(tmp[0])
epub_metadata[s] = tmp[0]
else:
epub_metadata[s] = 'Unknown'

if epub_metadata['subject'] == 'Unknown':
epub_metadata['subject'] = ''

if epub_metadata['publisher'] == 'Unknown':
if epub_metadata['publisher'] == u'Unknown':
epub_metadata['publisher'] = ''

if epub_metadata['date'] == 'Unknown':
if epub_metadata['date'] == u'Unknown':
epub_metadata['date'] = ''

if epub_metadata['description'] == 'Unknown':
if epub_metadata['description'] == u'Unknown':
description = tree.xpath("//*[local-name() = 'description']/text()")
if len(description) > 0:
epub_metadata['description'] = description
@ -116,22 +98,15 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension, no

epub_metadata = parse_epub_series(ns, tree, epub_metadata)

epub_zip = zipfile.ZipFile(tmp_file_path)
if not no_cover_processing:
cover_file = parse_epub_cover(ns, tree, epub_zip, cover_path, tmp_file_path)
else:
cover_file = None
cover_file = parse_epub_cover(ns, tree, epub_zip, cover_path, tmp_file_path)

identifiers = []
for node in p.xpath('dc:identifier', namespaces=ns):
try:
identifier_name = node.attrib.values()[-1]
except IndexError:
continue
identifier_value = node.text
if identifier_name in ('uuid', 'calibre') or identifier_value is None:
continue
identifiers.append([identifier_name, identifier_value])
identifier_name=node.attrib.values()[-1];
identifier_value=node.text;
if identifier_name in ('uuid','calibre'):
continue;
identifiers.append( [identifier_name, identifier_value] )

if not epub_metadata['title']:
title = original_file_name
@ -156,40 +131,40 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension, no

def parse_epub_cover(ns, tree, epub_zip, cover_path, tmp_file_path):
cover_section = tree.xpath("/pkg:package/pkg:manifest/pkg:item[@id='cover-image']/@href", namespaces=ns)
cover_file = None
# if len(cover_section) > 0:
for cs in cover_section:
cover_file = _extract_cover(epub_zip, cs, cover_path, tmp_file_path)
if cover_file:
return cover_file

meta_cover = tree.xpath("/pkg:package/pkg:metadata/pkg:meta[@name='cover']/@content", namespaces=ns)
if len(meta_cover) > 0:
cover_section = tree.xpath(
"/pkg:package/pkg:manifest/pkg:item[@id='"+meta_cover[0]+"']/@href", namespaces=ns)
if not cover_section:
cover_section = tree.xpath(
"/pkg:package/pkg:manifest/pkg:item[@properties='" + meta_cover[0] + "']/@href", namespaces=ns)
else:
cover_section = tree.xpath("/pkg:package/pkg:guide/pkg:reference/@href", namespaces=ns)

cover_file = None
for cs in cover_section:
if cs.endswith('.xhtml') or cs.endswith('.html'):
markup = epub_zip.read(os.path.join(cover_path, cs))
markup_tree = etree.fromstring(markup)
# no matter xhtml or html with no namespace
img_src = markup_tree.xpath("//*[local-name() = 'img']/@src")
# Alternative image source
if not len(img_src):
img_src = markup_tree.xpath("//attribute::*[contains(local-name(), 'href')]")
if len(img_src):
# img_src maybe start with "../"" so fullpath join then relpath to cwd
filename = os.path.relpath(os.path.join(os.path.dirname(os.path.join(cover_path, cover_section[0])),
img_src[0]))
cover_file = _extract_cover(epub_zip, filename, "", tmp_file_path)
else:
cover_file = _extract_cover(epub_zip, cs, cover_path, tmp_file_path)
if cover_file:
break
if not cover_file:
meta_cover = tree.xpath("/pkg:package/pkg:metadata/pkg:meta[@name='cover']/@content", namespaces=ns)
if len(meta_cover) > 0:
cover_section = tree.xpath(
"/pkg:package/pkg:manifest/pkg:item[@id='"+meta_cover[0]+"']/@href", namespaces=ns)
if not cover_section:
cover_section = tree.xpath(
"/pkg:package/pkg:manifest/pkg:item[@properties='" + meta_cover[0] + "']/@href", namespaces=ns)
else:
cover_section = tree.xpath("/pkg:package/pkg:guide/pkg:reference/@href", namespaces=ns)
for cs in cover_section:
filetype = cs.rsplit('.', 1)[-1]
if filetype == "xhtml" or filetype == "html":  # if cover is (x)html format
markup = epub_zip.read(os.path.join(cover_path, cs))
markup_tree = etree.fromstring(markup)
# no matter xhtml or html with no namespace
img_src = markup_tree.xpath("//*[local-name() = 'img']/@src")
# Alternative image source
if not len(img_src):
img_src = markup_tree.xpath("//attribute::*[contains(local-name(), 'href')]")
if len(img_src):
# img_src maybe start with "../"" so fullpath join then relpath to cwd
filename = os.path.relpath(os.path.join(os.path.dirname(os.path.join(cover_path, cover_section[0])),
img_src[0]))
cover_file = _extract_cover(epub_zip, filename, "", tmp_file_path)
else:
cover_file = _extract_cover(epub_zip, cs, cover_path, tmp_file_path)
if cover_file: break
return cover_file
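Both versions of `get_epub_info` above start the same way: read `META-INF/container.xml` out of the EPUB zip, take the `rootfile`'s `full-path` attribute, and load that OPF file, which holds the metadata and (for covers) the manifest. A self-contained sketch of that lookup with the stdlib only (no lxml), over a minimal made-up EPUB built in memory:

```python
import zipfile
import xml.etree.ElementTree as ET
from io import BytesIO

CONTAINER_NS = "urn:oasis:names:tc:opendocument:xmlns:container"

# Build a minimal EPUB-like zip in memory (contents are made up for the demo).
buf = BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("META-INF/container.xml",
                '<container xmlns="%s"><rootfiles>'
                '<rootfile full-path="OEBPS/content.opf"/>'
                '</rootfiles></container>' % CONTAINER_NS)
    zf.writestr("OEBPS/content.opf", "<package/>")

# Like get_content_opf: container.xml names the OPF file to parse next.
with zipfile.ZipFile(buf) as epub:
    root = ET.fromstring(epub.read("META-INF/container.xml"))
    cf_name = root.find(".//{%s}rootfile" % CONTAINER_NS).attrib["full-path"]
    opf = epub.read(cf_name)

print(cf_name)  # OEBPS/content.opf
```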
@ -1,169 +0,0 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018 lemmsh, Kennyl, Kyosfonica, matthazinski
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import zipfile
from lxml import etree

from . import isoLanguages

default_ns = {
'n': 'urn:oasis:names:tc:opendocument:xmlns:container',
'pkg': 'http://www.idpf.org/2007/opf',
}

OPF_NAMESPACE = "http://www.idpf.org/2007/opf"
PURL_NAMESPACE = "http://purl.org/dc/elements/1.1/"

OPF = "{%s}" % OPF_NAMESPACE
PURL = "{%s}" % PURL_NAMESPACE

etree.register_namespace("opf", OPF_NAMESPACE)
etree.register_namespace("dc", PURL_NAMESPACE)

OPF_NS = {None: OPF_NAMESPACE}  # the default namespace (no prefix)
NSMAP = {'dc': PURL_NAMESPACE, 'opf': OPF_NAMESPACE}

def updateEpub(src, dest, filename, data, ):
# create a temp copy of the archive without filename
with zipfile.ZipFile(src, 'r') as zin:
with zipfile.ZipFile(dest, 'w') as zout:
zout.comment = zin.comment  # preserve the comment
for item in zin.infolist():
if item.filename != filename:
zout.writestr(item, zin.read(item.filename))

# now add filename with its new data
with zipfile.ZipFile(dest, mode='a', compression=zipfile.ZIP_DEFLATED) as zf:
zf.writestr(filename, data)

def get_content_opf(file_path, ns=None):
if ns is None:
ns = default_ns
epubZip = zipfile.ZipFile(file_path)
txt = epubZip.read('META-INF/container.xml')
tree = etree.fromstring(txt)
cf_name = tree.xpath('n:rootfiles/n:rootfile/@full-path', namespaces=ns)[0]
cf = epubZip.read(cf_name)

return etree.fromstring(cf), cf_name

def create_new_metadata_backup(book, custom_columns, export_language, translated_cover_name, lang_type=3):
# generate root package element
package = etree.Element(OPF + "package", nsmap=OPF_NS)
package.set("unique-identifier", "uuid_id")
package.set("version", "2.0")

# generate metadata element and all sub elements of it
metadata = etree.SubElement(package, "metadata", nsmap=NSMAP)
identifier = etree.SubElement(metadata, PURL + "identifier", id="calibre_id", nsmap=NSMAP)
identifier.set(OPF + "scheme", "calibre")
identifier.text = str(book.id)
identifier2 = etree.SubElement(metadata, PURL + "identifier", id="uuid_id", nsmap=NSMAP)
identifier2.set(OPF + "scheme", "uuid")
identifier2.text = book.uuid
for i in book.identifiers:
identifier = etree.SubElement(metadata, PURL + "identifier", nsmap=NSMAP)
identifier.set(OPF + "scheme", i.format_type())
identifier.text = str(i.val)
title = etree.SubElement(metadata, PURL + "title", nsmap=NSMAP)
title.text = book.title
for author in book.authors:
creator = etree.SubElement(metadata, PURL + "creator", nsmap=NSMAP)
creator.text = str(author.name)
creator.set(OPF + "file-as", book.author_sort)  # ToDo Check
creator.set(OPF + "role", "aut")
contributor = etree.SubElement(metadata, PURL + "contributor", nsmap=NSMAP)
contributor.text = "calibre (5.7.2) [https://calibre-ebook.com]"
contributor.set(OPF + "file-as", "calibre")  # ToDo Check
contributor.set(OPF + "role", "bkp")

date = etree.SubElement(metadata, PURL + "date", nsmap=NSMAP)
date.text = '{d.year:04}-{d.month:02}-{d.day:02}T{d.hour:02}:{d.minute:02}:{d.second:02}'.format(d=book.pubdate)
if book.comments and book.comments[0].text:
for b in book.comments:
description = etree.SubElement(metadata, PURL + "description", nsmap=NSMAP)
description.text = b.text
for b in book.publishers:
publisher = etree.SubElement(metadata, PURL + "publisher", nsmap=NSMAP)
publisher.text = str(b.name)
if not book.languages:
language = etree.SubElement(metadata, PURL + "language", nsmap=NSMAP)
language.text = export_language
else:
for b in book.languages:
language = etree.SubElement(metadata, PURL + "language", nsmap=NSMAP)
language.text = str(b.lang_code) if lang_type == 3 else isoLanguages.get(part3=b.lang_code).part1
for b in book.tags:
subject = etree.SubElement(metadata, PURL + "subject", nsmap=NSMAP)
subject.text = str(b.name)
etree.SubElement(metadata, "meta", name="calibre:author_link_map",
content="{" + ", ".join(['"' + str(a.name) + '": ""' for a in book.authors]) + "}",
nsmap=NSMAP)
for b in book.series:
etree.SubElement(metadata, "meta", name="calibre:series",
content=str(str(b.name)),
nsmap=NSMAP)
if book.series:
etree.SubElement(metadata, "meta", name="calibre:series_index",
content=str(book.series_index),
nsmap=NSMAP)
if len(book.ratings) and book.ratings[0].rating > 0:
etree.SubElement(metadata, "meta", name="calibre:rating",
content=str(book.ratings[0].rating),
nsmap=NSMAP)
etree.SubElement(metadata, "meta", name="calibre:timestamp",
content='{d.year:04}-{d.month:02}-{d.day:02}T{d.hour:02}:{d.minute:02}:{d.second:02}'.format(
d=book.timestamp),
nsmap=NSMAP)
etree.SubElement(metadata, "meta", name="calibre:title_sort",
content=book.sort,
nsmap=NSMAP)
sequence = 0
for cc in custom_columns:
value = None
extra = None
cc_entry = getattr(book, "custom_column_" + str(cc.id))
if cc_entry.__len__():
value = [c.value for c in cc_entry] if cc.is_multiple else cc_entry[0].value
extra = cc_entry[0].extra if hasattr(cc_entry[0], "extra") else None
etree.SubElement(metadata, "meta", name="calibre:user_metadata:#{}".format(cc.label),
content=cc.to_json(value, extra, sequence),
nsmap=NSMAP)
sequence += 1
|
||||
|
||||
# generate guide element and all sub elements of it
|
||||
# Title is translated from default export language
|
||||
guide = etree.SubElement(package, "guide")
|
||||
etree.SubElement(guide, "reference", type="cover", title=translated_cover_name, href="cover.jpg")
|
||||
|
||||
return package
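The block above assembles a Calibre-style OPF `<package>` element with lxml. A minimal stdlib-only sketch of the same document shape (the namespace URIs and the `build_package` helper here are illustrative stand-ins, not the exact `OPF_NS`/`NSMAP` constants used above):

```python
import xml.etree.ElementTree as ET

# Assumed standard OPF / Dublin Core namespace URIs for illustration.
OPF_NS = "http://www.idpf.org/2007/opf"
DC_NS = "http://purl.org/dc/elements/1.1/"


def build_package(book_id, title):
    # Minimal OPF 2.0 package: metadata holding a dc:identifier and dc:title.
    ET.register_namespace('opf', OPF_NS)
    ET.register_namespace('dc', DC_NS)
    package = ET.Element('{%s}package' % OPF_NS,
                         {'unique-identifier': 'uuid_id', 'version': '2.0'})
    metadata = ET.SubElement(package, '{%s}metadata' % OPF_NS)
    ident = ET.SubElement(metadata, '{%s}identifier' % DC_NS, {'id': 'calibre_id'})
    ident.text = str(book_id)
    title_el = ET.SubElement(metadata, '{%s}title' % DC_NS)
    title_el.text = title
    return package
```

The real code uses lxml's `nsmap` keyword instead, which `xml.etree` does not support; Clark notation (`{uri}tag`) is the stdlib equivalent.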


def replace_metadata(tree, package):
    rep_element = tree.xpath('/pkg:package/pkg:metadata', namespaces=default_ns)[0]
    new_element = package.xpath('//metadata', namespaces=default_ns)[0]
    tree.replace(rep_element, new_element)
    return etree.tostring(tree,
                          xml_declaration=True,
                          encoding='utf-8',
                          pretty_print=True).decode('utf-8')

@@ -31,43 +31,27 @@ from . import config, app, logger, services
log = logger.create()

# custom error page

def error_http(error):
    return render_template('http_error.html',
                           error_code="Error {0}".format(error.code),
                           error_name=error.name,
                           issue=False,
                           goto_admin=False,
                           unconfigured=not config.db_configured,
                           instance=config.config_calibre_web_title
                           ), error.code


def internal_error(error):
    if (isinstance(error.original_exception, AttributeError) and
            error.original_exception.args[0] == "'NoneType' object has no attribute 'query'"
            and error.original_exception.name == "query"):
        return render_template('http_error.html',
                               error_code="Database Error",
                               error_name='The library used is invalid or has permission errors',
                               issue=False,
                               goto_admin=True,
                               unconfigured=False,
                               error_stack="",
                               instance=config.config_calibre_web_title
                               ), 500
    return render_template('http_error.html',
                           error_code="500 Internal Server Error",
                           error_name='The server encountered an internal error and was unable to complete your '
                                      'request. There is an error in the application.',
                           issue=True,
                           goto_admin=False,
                           unconfigured=False,
                           error_stack=traceback.format_exc().split("\n"),
                           instance=config.config_calibre_web_title
                           ), 500


def init_errorhandler():
    # http error handling
    for ex in default_exceptions:
@@ -76,6 +60,7 @@ def init_errorhandler():
        elif ex == 500:
            app.register_error_handler(ex, internal_error)


if services.ldap:
    # Only way of catching the LDAPException upon logging in with LDAP server down
    @app.errorhandler(services.ldap.LDAPException)
14
cps/fb2.py
@@ -38,19 +38,19 @@ def get_fb2_info(tmp_file_path, original_file_extension):
        if len(last_name):
            last_name = last_name[0]
        else:
            last_name = ''
            last_name = u''
        middle_name = element.xpath('fb:middle-name/text()', namespaces=ns)
        if len(middle_name):
            middle_name = middle_name[0]
        else:
            middle_name = ''
            middle_name = u''
        first_name = element.xpath('fb:first-name/text()', namespaces=ns)
        if len(first_name):
            first_name = first_name[0]
        else:
            first_name = ''
        return (first_name + ' '
                + middle_name + ' '
            first_name = u''
        return (first_name + u' '
                + middle_name + u' '
                + last_name)

    author = str(", ".join(map(get_author, authors)))
@@ -59,12 +59,12 @@ def get_fb2_info(tmp_file_path, original_file_extension):
    if len(title):
        title = str(title[0])
    else:
        title = ''
        title = u''
    description = tree.xpath('/fb:FictionBook/fb:description/fb:publish-info/fb:book-name/text()', namespaces=ns)
    if len(description):
        description = str(description[0])
    else:
        description = ''
        description = u''

    return BookMeta(
        file_path=tmp_file_path,
@@ -1,83 +0,0 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2023 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from tempfile import gettempdir
import os
import shutil
import zipfile
import mimetypes
from io import BytesIO

from . import logger

log = logger.create()

try:
    import magic
    error = None
except ImportError as e:
    error = "Cannot import python-magic, checking uploaded file metadata will not work: {}".format(e)


def get_mimetype(ext):
    # overwrite some mimetypes for proper file detection
    mimes = {".fb2": "text/xml",
             ".cbz": "application/zip",
             ".cbr": "application/x-rar"
             }
    return mimes.get(ext, mimetypes.types_map[ext])


def get_temp_dir():
    tmp_dir = os.path.join(gettempdir(), 'calibre_web')
    if not os.path.isdir(tmp_dir):
        os.mkdir(tmp_dir)
    return tmp_dir


def del_temp_dir():
    tmp_dir = os.path.join(gettempdir(), 'calibre_web')
    shutil.rmtree(tmp_dir)


def validate_mime_type(file_buffer, allowed_extensions):
    if error:
        log.error(error)
        return False
    mime = magic.Magic(mime=True)
    allowed_mimetypes = list()
    for x in allowed_extensions:
        try:
            allowed_mimetypes.append(get_mimetype("." + x))
        except KeyError:
            log.error("Unkown mimetype for Extension: {}".format(x))
    tmp_mime_type = mime.from_buffer(file_buffer.read())
    file_buffer.seek(0)
    if any(mime_type in tmp_mime_type for mime_type in allowed_mimetypes):
        return True
    # Some epubs show up as zip mimetypes
    elif "zip" in tmp_mime_type:
        try:
            with zipfile.ZipFile(BytesIO(file_buffer.read()), 'r') as epub:
                file_buffer.seek(0)
                if "mimetype" in epub.namelist():
                    return True
        except:
            file_buffer.seek(0)
    log.error("Mimetype '{}' not found in allowed types".format(tmp_mime_type))
    return False
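The deleted `validate_mime_type` helper above checks an upload's real content against the MIME types allowed for its extension, with a zip-structure fallback for EPUBs. A stdlib-only sketch of that fallback idea (the `looks_like_epub` name is hypothetical; it avoids the optional `python-magic` dependency entirely by inspecting the zip container directly, as the `elif "zip"` branch above does):

```python
import zipfile
from io import BytesIO


def looks_like_epub(file_buffer):
    # EPUB files are zip containers that hold a "mimetype" entry,
    # so sniff the zip structure instead of trusting the file extension.
    data = file_buffer.read()
    file_buffer.seek(0)  # leave the buffer rewound for later readers
    try:
        with zipfile.ZipFile(BytesIO(data), 'r') as epub:
            return "mimetype" in epub.namelist()
    except zipfile.BadZipFile:
        return False
```

Unlike the original, this sketch cannot distinguish other allowed formats; it only demonstrates the container check.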
@@ -23,17 +23,17 @@
import os
import hashlib
import json
import tempfile
from uuid import uuid4
from time import time
from shutil import move, copyfile

from flask import Blueprint, flash, request, redirect, url_for, abort
from flask_babel import gettext as _
from flask_login import login_required

from . import logger, gdriveutils, config, ub, calibre_db, csrf
from .admin import admin_required
from .file_helper import get_temp_dir
from .usermanagement import user_login_required

gdrive = Blueprint('gdrive', __name__, url_prefix='/gdrive')
log = logger.create()
@@ -45,17 +45,17 @@ except ImportError as err:

current_milli_time = lambda: int(round(time() * 1000))

gdrive_watch_callback_token = 'target=calibreweb-watch_files'  # nosec
gdrive_watch_callback_token = 'target=calibreweb-watch_files' #nosec


@gdrive.route("/authenticate")
@user_login_required
@login_required
@admin_required
def authenticate_google_drive():
    try:
        authUrl = gdriveutils.Gauth.Instance().auth.GetAuthUrl()
    except gdriveutils.InvalidConfigError:
        flash(_('Google Drive setup not completed, try to deactivate and activate Google Drive again'),
        flash(_(u'Google Drive setup not completed, try to deactivate and activate Google Drive again'),
              category="error")
        return redirect(url_for('web.index'))
    return redirect(authUrl)
@@ -76,7 +76,7 @@ def google_drive_callback():


@gdrive.route("/watch/subscribe")
@user_login_required
@login_required
@admin_required
def watch_gdrive():
    if not config.config_google_drive_watch_changes_response:
@@ -86,15 +86,14 @@ def watch_gdrive():
    notification_id = str(uuid4())
    try:
        result = gdriveutils.watchChange(gdriveutils.Gdrive.Instance().drive, notification_id,
                                         'web_hook', address, gdrive_watch_callback_token, current_milli_time() + 604800*1000)

                           'web_hook', address, gdrive_watch_callback_token, current_milli_time() + 604800*1000)
        config.config_google_drive_watch_changes_response = result
        config.save()
    except HttpError as e:
        reason = json.loads(e.content)['error']['errors'][0]
        if reason['reason'] == 'push.webhookUrlUnauthorized':
            flash(_('Callback domain is not verified, '
                    'please follow steps to verify domain in google developer console'), category="error")
        reason=json.loads(e.content)['error']['errors'][0]
        if reason['reason'] == u'push.webhookUrlUnauthorized':
            flash(_(u'Callback domain is not verified, '
                    u'please follow steps to verify domain in google developer console'), category="error")
        else:
            flash(reason['message'], category="error")

@@ -102,7 +101,7 @@ def watch_gdrive():


@gdrive.route("/watch/revoke")
@user_login_required
@login_required
@admin_required
def revoke_watch_gdrive():
    last_watch_response = config.config_google_drive_watch_changes_response
@@ -116,7 +115,6 @@ def revoke_watch_gdrive():
        config.save()
    return redirect(url_for('admin.db_configuration'))


try:
    @csrf.exempt
    @gdrive.route("/watch/callback", methods=['GET', 'POST'])
@@ -140,8 +138,10 @@ try:
        if response:
            dbpath = os.path.join(config.config_calibre_dir, "metadata.db").encode()
            if not response['deleted'] and response['file']['title'] == 'metadata.db' \
                    and response['file']['md5Checksum'] != hashlib.md5(dbpath):  # nosec
                tmp_dir = get_temp_dir()
                    and response['file']['md5Checksum'] != hashlib.md5(dbpath):  # nosec
                tmp_dir = os.path.join(tempfile.gettempdir(), 'calibre_web')
                if not os.path.isdir(tmp_dir):
                    os.mkdir(tmp_dir)

                log.info('Database file updated')
                copyfile(dbpath, os.path.join(tmp_dir, "metadata.db_" + str(current_milli_time())))
@@ -21,10 +21,7 @@ import json
import shutil
import chardet
import ssl
import sqlite3
import mimetypes

from werkzeug.datastructures import Headers
from flask import Response, stream_with_context
from sqlalchemy import create_engine
from sqlalchemy import Column, UniqueConstraint
@@ -37,6 +34,7 @@ except ImportError:
    from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.exc import OperationalError, InvalidRequestError, IntegrityError
from sqlalchemy.orm.exc import StaleDataError
from sqlalchemy.sql.expression import text

try:
    from httplib2 import __version__ as httplib2_version
@@ -66,7 +64,7 @@ except ImportError as err:
    importError = err
    gdrive_support = False

from . import logger, cli_param, config, db
from . import logger, cli_param, config
from .constants import CONFIG_DIR as _CONFIG_DIR


@@ -149,7 +147,7 @@ engine = create_engine('sqlite:///{0}'.format(cli_param.gd_path), echo=False)
Base = declarative_base()

# Open session for database connection
Session = sessionmaker(autoflush=False)
Session = sessionmaker()
Session.configure(bind=engine)
session = scoped_session(Session)

@@ -176,12 +174,30 @@ class PermissionAdded(Base):
        return str(self.gdrive_id)


def migrate():
    if not engine.dialect.has_table(engine.connect(), "permissions_added"):
        PermissionAdded.__table__.create(bind = engine)
    for sql in session.execute(text("select sql from sqlite_master where type='table'")):
        if 'CREATE TABLE gdrive_ids' in sql[0]:
            currUniqueConstraint = 'UNIQUE (gdrive_id)'
            if currUniqueConstraint in sql[0]:
                sql=sql[0].replace(currUniqueConstraint, 'UNIQUE (gdrive_id, path)')
                sql=sql.replace(GdriveId.__tablename__, GdriveId.__tablename__ + '2')
                session.execute(sql)
                session.execute("INSERT INTO gdrive_ids2 (id, gdrive_id, path) SELECT id, "
                                "gdrive_id, path FROM gdrive_ids;")
                session.commit()
                session.execute('DROP TABLE %s' % 'gdrive_ids')
                session.execute('ALTER TABLE gdrive_ids2 RENAME to gdrive_ids')
            break

if not os.path.exists(cli_param.gd_path):
    try:
        Base.metadata.create_all(engine)
    except Exception as ex:
        log.error("Error connect to database: {} - {}".format(cli_param.gd_path, ex))
        raise
migrate()


def getDrive(drive=None, gauth=None):
@@ -210,7 +226,6 @@ def getDrive(drive=None, gauth=None):
            log.error("Google Drive error: {}".format(e))
    return drive


def listRootFolders():
    try:
        drive = getDrive(Gdrive.Instance().drive)
@@ -228,7 +243,7 @@ def getEbooksFolder(drive):

def getFolderInFolder(parentId, folderName, drive):
    # drive = getDrive(drive)
    query = ""
    query=""
    if folderName:
        query = "title = '%s' and " % folderName.replace("'", r"\'")
    folder = query + "'%s' in parents and mimeType = 'application/vnd.google-apps.folder'" \
@@ -239,7 +254,6 @@ def getFolderInFolder(parentId, folderName, drive):
    else:
        return fileList[0]


# Search for id of root folder in gdrive database, if not found request from gdrive and store in internal database
def getEbooksFolderId(drive=None):
    storedPathName = session.query(GdriveId).filter(GdriveId.path == '/').first()
@@ -261,20 +275,17 @@ def getEbooksFolderId(drive=None):
    return gDriveId.gdrive_id


def getFile(pathId, fileName, drive, nocase):
    metaDataFile = "'%s' in parents and trashed = false and title contains '%s'" % (pathId, fileName.replace("'", r"\'"))
def getFile(pathId, fileName, drive):
    metaDataFile = "'%s' in parents and trashed = false and title = '%s'" % (pathId, fileName.replace("'", r"\'"))
    fileList = drive.ListFile({'q': metaDataFile}).GetList()
    if fileList.__len__() == 0:
        return None
    if nocase:
        return fileList[0] if db.lcase(fileList[0]['title']) == db.lcase(fileName) else None
        for f in fileList:
            if f['title'] == fileName:
                return f
        return None
    else:
        return fileList[0]


def getFolderId(path, drive):
    # drive = getDrive(drive)
    currentFolderId = None
    try:
        currentFolderId = getEbooksFolderId(drive)
@@ -308,7 +319,7 @@ def getFolderId(path, drive):
                    session.commit()
                else:
                    currentFolderId = storedPathName.gdrive_id
    except (OperationalError, IntegrityError, StaleDataError, sqlite3.IntegrityError) as ex:
    except (OperationalError, IntegrityError, StaleDataError) as ex:
        log.error_or_exception('Database error: {}'.format(ex))
        session.rollback()
    except ApiRequestError as ex:
@@ -319,7 +330,7 @@ def getFolderId(path, drive):
    return currentFolderId


def getFileFromEbooksFolder(path, fileName, nocase=False):
def getFileFromEbooksFolder(path, fileName):
    drive = getDrive(Gdrive.Instance().drive)
    if path:
        # sqlCheckPath=path if path[-1] =='/' else path + '/'
@@ -327,13 +338,13 @@ def getFileFromEbooksFolder(path, fileName, nocase=False):
    else:
        folderId = getEbooksFolderId(drive)
    if folderId:
        return getFile(folderId, fileName, drive, nocase)
        return getFile(folderId, fileName, drive)
    else:
        return None


def moveGdriveFileRemote(origin_file_id, new_title):
    origin_file_id['title'] = new_title
    origin_file_id['title']= new_title
    origin_file_id.Upload()


@@ -343,37 +354,33 @@ def downloadFile(path, filename, output):
    f.GetContentFile(output)


def moveGdriveFolderRemote(origin_file, target_folder, single_book=False):
def moveGdriveFolderRemote(origin_file, target_folder):
    drive = getDrive(Gdrive.Instance().drive)
    previous_parents = ",".join([parent["id"] for parent in origin_file.get('parents')])
    children = drive.auth.service.children().list(folderId=previous_parents).execute()
    if single_book:
        gFileTargetDir = getFileFromEbooksFolder(None, target_folder, nocase=True)
        if gFileTargetDir:
            # Move the file to the new folder
            drive.auth.service.files().update(fileId=origin_file['id'],
                                              addParents=gFileTargetDir['id'],
                                              removeParents=previous_parents,
                                              fields='id, parents').execute()
        else:
            gFileTargetDir = drive.CreateFile(
                {'title': target_folder, 'parents': [{"kind": "drive#fileLink", 'id': getEbooksFolderId()}],
                 "mimeType": "application/vnd.google-apps.folder"})
            gFileTargetDir.Upload()
            # Move the file to the new folder
            drive.auth.service.files().update(fileId=origin_file['id'],
                                              addParents=gFileTargetDir['id'],
                                              removeParents=previous_parents,
                                              fields='id, parents').execute()
    elif origin_file['title'] != target_folder:
        #gFileTargetDir = getFileFromEbooksFolder(None, target_folder, nocase=True)
        #if gFileTargetDir:
        deleteDatabasePath(origin_file['title'])
        gFileTargetDir = getFileFromEbooksFolder(None, target_folder)
        if not gFileTargetDir:
            gFileTargetDir = drive.CreateFile(
                {'title': target_folder, 'parents': [{"kind": "drive#fileLink", 'id': getEbooksFolderId()}],
                 "mimeType": "application/vnd.google-apps.folder"})
            gFileTargetDir.Upload()
            # Move the file to the new folder
            drive.auth.service.files().update(fileId=origin_file['id'],
                                              addParents=gFileTargetDir['id'],
                                              removeParents=previous_parents,
                                              fields='id, parents').execute()

        elif gFileTargetDir['title'] != target_folder:
            # Folder is not existing, create, and move folder
            drive.auth.service.files().patch(fileId=origin_file['id'],
                                             body={'title': target_folder},
                                             fields='title').execute()

        else:
            # Move the file to the new folder
            drive.auth.service.files().update(fileId=origin_file['id'],
                                              addParents=gFileTargetDir['id'],
                                              removeParents=previous_parents,
                                              fields='id, parents').execute()
    # if previous_parents has no children anymore, delete original fileparent
    if len(children['items']) == 1:
        deleteDatabaseEntry(previous_parents)
@@ -381,20 +388,20 @@ def moveGdriveFolderRemote(origin_file, target_folder, single_book=False):


def copyToDrive(drive, uploadFile, createRoot, replaceFiles,
                ignoreFiles=None,
                parent=None, prevDir=''):
                ignoreFiles=None,
                parent=None, prevDir=''):
    ignoreFiles = ignoreFiles or []
    drive = getDrive(drive)
    isInitial = not bool(parent)
    if not parent:
        parent = getEbooksFolder(drive)
    if os.path.isdir(os.path.join(prevDir, uploadFile)):
    if os.path.isdir(os.path.join(prevDir,uploadFile)):
        existingFolder = drive.ListFile({'q': "title = '%s' and '%s' in parents and trashed = false" %
                                              (os.path.basename(uploadFile).replace("'", r"\'"), parent['id'])}).GetList()
        if len(existingFolder) == 0 and (not isInitial or createRoot):
            parent = drive.CreateFile({'title': os.path.basename(uploadFile),
                                       'parents': [{"kind": "drive#fileLink", 'id': parent['id']}],
                                       "mimeType": "application/vnd.google-apps.folder"})
                                       "mimeType": "application/vnd.google-apps.folder"})
            parent.Upload()
        else:
            if (not isInitial or createRoot) and len(existingFolder) > 0:
@@ -410,42 +417,39 @@ def copyToDrive(drive, uploadFile, createRoot, replaceFiles,
                driveFile = existingFiles[0]
            else:
                driveFile = drive.CreateFile({'title': os.path.basename(uploadFile).replace("'", r"\'"),
                                              'parents': [{"kind": "drive#fileLink", 'id': parent['id']}], })
                                              'parents': [{"kind":"drive#fileLink", 'id': parent['id']}], })
                driveFile.SetContentFile(os.path.join(prevDir, uploadFile))
                driveFile.Upload()


def uploadFileToEbooksFolder(destFile, f, string=False):
def uploadFileToEbooksFolder(destFile, f):
    drive = getDrive(Gdrive.Instance().drive)
    parent = getEbooksFolder(drive)
    splitDir = destFile.split('/')
    for i, x in enumerate(splitDir):
        if i == len(splitDir)-1:
            existing_Files = drive.ListFile({'q': "title = '%s' and '%s' in parents and trashed = false" %
                                                  (x.replace("'", r"\'"), parent['id'])}).GetList()
                                                  (x.replace("'", r"\'"), parent['id'])}).GetList()
            if len(existing_Files) > 0:
                driveFile = existing_Files[0]
            else:
                driveFile = drive.CreateFile({'title': x,
                                              'parents': [{"kind": "drive#fileLink", 'id': parent['id']}], })
            if not string:
                driveFile.SetContentFile(f)
            else:
                driveFile.SetContentString(f)
            driveFile.SetContentFile(f)
            driveFile.Upload()
        else:
            existing_Folder = drive.ListFile({'q': "title = '%s' and '%s' in parents and trashed = false" %
                                                   (x.replace("'", r"\'"), parent['id'])}).GetList()
                                                   (x.replace("'", r"\'"), parent['id'])}).GetList()
            if len(existing_Folder) == 0:
                parent = drive.CreateFile({'title': x, 'parents': [{"kind": "drive#fileLink", 'id': parent['id']}],
                                           "mimeType": "application/vnd.google-apps.folder"})
                                           "mimeType": "application/vnd.google-apps.folder"})
                parent.Upload()
            else:
                parent = existing_Folder[0]


def watchChange(drive, channel_id, channel_type, channel_address,
                channel_token=None, expiration=None):
                channel_token=None, expiration=None):
    # Watch for all changes to a user's Drive.
    # Args:
    #   service: Drive API service instance.
@@ -516,7 +520,7 @@ def stopChannel(drive, channel_id, resource_id):
    return drive.auth.service.channels().stop(body=body).execute()


def getChangeById(drive, change_id):
def getChangeById (drive, change_id):
    # Print a single Change resource information.
    #
    # Args:
@@ -550,10 +554,9 @@ def updateGdriveCalibreFromLocal():
        if os.path.isdir(os.path.join(config.config_calibre_dir, x)):
            shutil.rmtree(os.path.join(config.config_calibre_dir, x))


# update gdrive.db on edit of books title
def updateDatabaseOnEdit(ID, newPath):
    sqlCheckPath = newPath if newPath[-1] == '/' else newPath + '/'
def updateDatabaseOnEdit(ID,newPath):
    sqlCheckPath = newPath if newPath[-1] == '/' else newPath + u'/'
    storedPathName = session.query(GdriveId).filter(GdriveId.gdrive_id == ID).first()
    if storedPathName:
        storedPathName.path = sqlCheckPath
@@ -573,17 +576,8 @@ def deleteDatabaseEntry(ID):
        log.error_or_exception('Database error: {}'.format(ex))
        session.rollback()

def deleteDatabasePath(Pathname):
    session.query(GdriveId).filter(GdriveId.path.contains(Pathname)).delete()
    try:
        session.commit()
    except OperationalError as ex:
        log.error_or_exception('Database error: {}'.format(ex))
        session.rollback()


# Gets cover file from gdrive
# ToDo: Check is this right everyone get read permissions on cover files?
def get_cover_via_gdrive(cover_path):
    df = getFileFromEbooksFolder(cover_path, 'cover.jpg')
    if df:
@@ -597,33 +591,6 @@ def get_cover_via_gdrive(cover_path):
            permissionAdded = PermissionAdded()
            permissionAdded.gdrive_id = df['id']
            session.add(permissionAdded)
            try:
                session.commit()
            except (OperationalError, IntegrityError) as ex:
                log.error_or_exception('Database error: {}'.format(ex))
                session.rollback()
        headers = Headers()
        headers["Content-Type"] = 'image/jpeg'
        resp, content = df.auth.Get_Http_Object().request(df.metadata.get('downloadUrl'), headers=headers)
        return content
    else:
        return None


# Gets cover file from gdrive
def get_metadata_backup_via_gdrive(metadata_path):
    df = getFileFromEbooksFolder(metadata_path, 'metadata.opf')
    if df:
        if not session.query(PermissionAdded).filter(PermissionAdded.gdrive_id == df['id']).first():
            df.GetPermissions()
            df.InsertPermission({
                'type': 'anyone',
                'value': 'anyone',
                'role': 'writer',  # ToDo needs write access
                'withLink': True})
            permissionAdded = PermissionAdded()
            permissionAdded.gdrive_id = df['id']
            session.add(permissionAdded)
            try:
                session.commit()
            except OperationalError as ex:
@@ -633,7 +600,6 @@ def get_metadata_backup_via_gdrive(metadata_path):
    else:
        return None


# Creates chunks for downloading big files
def partial(total_byte_len, part_size_limit):
    s = []
@@ -642,7 +608,6 @@ def partial(total_byte_len, part_size_limit):
        s.append([p, last])
    return s
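The `partial` helper above slices a total download size into `[start, last]` byte ranges (its middle lines are elided by the diff). A self-contained sketch of equivalent chunking logic, under the assumption that it steps through the length in `part_size_limit` increments (the `byte_ranges` name is illustrative):

```python
def byte_ranges(total_byte_len, part_size_limit):
    # Split a total length into inclusive [start, last] pairs no larger
    # than the limit, the shape used for chunked Range-request downloads.
    ranges = []
    for start in range(0, total_byte_len, part_size_limit):
        last = min(start + part_size_limit - 1, total_byte_len - 1)
        ranges.append([start, last])
    return ranges
```

The final chunk is clipped to the file size, so the ranges cover every byte exactly once.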


# downloads files in chunks from gdrive
def do_gdrive_download(df, headers, convert_encoding=False):
    total_size = int(df.metadata.get('fileSize'))
@@ -682,7 +647,6 @@ oauth_scope:
  - https://www.googleapis.com/auth/drive
"""


def update_settings(client_id, client_secret, redirect_uri):
    if redirect_uri.endswith('/'):
        redirect_uri = redirect_uri[:-1]
@@ -16,9 +16,8 @@
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from datetime import datetime
from gevent.pywsgi import WSGIHandler

from gevent.pywsgi import WSGIHandler

class MyWSGIHandler(WSGIHandler):
    def get_environ(self):
@@ -27,26 +26,4 @@ class MyWSGIHandler(WSGIHandler):
        env['RAW_URI'] = path
        return env

    def format_request(self):
        now = datetime.now().replace(microsecond=0)
        length = self.response_length or '-'
        if self.time_finish:
            delta = '%.6f' % (self.time_finish - self.time_start)
        else:
            delta = '-'
        forwarded = self.environ.get('HTTP_X_FORWARDED_FOR', None)
        if forwarded:
            client_address = forwarded
        else:
            client_address = self.client_address[0] if isinstance(self.client_address, tuple) else self.client_address
        return '%s - - [%s] "%s" %s %s %s' % (
            client_address or '-',
            now,
            self.requestline or '',
            # Use the native string version of the status, saved so we don't have to
            # decode. But fallback to the encoded 'status' in case of subclasses
            # (Is that really necessary? At least there's no overhead.)
            (self._orig_status or self.status or '000').split()[0],
            length,
            delta)
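The `format_request` override above emits a Common-Log-style access line, preferring the `X-Forwarded-For` client when the server sits behind a proxy. A standalone sketch of that line format, detached from the gevent handler (the `format_access_log` name and parameters are illustrative):

```python
from datetime import datetime


def format_access_log(client_address, requestline, status, length, delta):
    # Reproduce the 'client - - [time] "request" status length delta' shape
    # used by the WSGI handler's format_request above.
    now = datetime.now().replace(microsecond=0)
    return '%s - - [%s] "%s" %s %s %s' % (
        client_address or '-', now, requestline or '',
        (status or '000').split()[0], length or '-', delta)
```

Passing the raw status line (e.g. `"200 OK"`) works because only its first token is logged.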
693
cps/helper.py
Normal file → Executable file
@@ -15,17 +15,24 @@
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import sys

from .iso_language_names import LANGUAGE_NAMES as _LANGUAGE_NAMES
from . import logger
from .string_helper import strip_whitespaces

log = logger.create()


try:
    from iso639 import languages, __version__
    get = languages.get
except ImportError:
    from pycountry import languages as pyc_languages
    try:
        import pkg_resources
        __version__ = pkg_resources.get_distribution('pycountry').version + ' (PyCountry)'
        del pkg_resources
    except (ImportError, Exception):
        __version__ = "? (PyCountry)"

    def _copy_fields(l):
        l.part1 = getattr(l, 'alpha_2', None)
@@ -39,48 +46,35 @@ try:
            return _copy_fields(pyc_languages.get(alpha_2=part1))
        if name is not None:
            return _copy_fields(pyc_languages.get(name=name))
except ImportError as ex:
    if sys.version_info >= (3, 12):
        print("Python 3.12 isn't compatible with iso-639. Please install pycountry.")
    from iso639 import languages
    get = languages.get


def get_language_names(locale):
    names = _LANGUAGE_NAMES.get(str(locale))
    if names is None:
        names = _LANGUAGE_NAMES.get(locale.language)
    return names
    return _LANGUAGE_NAMES.get(str(locale))


def get_language_name(locale, lang_code):
    UNKNOWN_TRANSLATION = "Unknown"
    names = get_language_names(locale)
    if names is None:
        log.error(f"Missing language names for locale: {str(locale)}/{locale.language}")
        return UNKNOWN_TRANSLATION

    name = names.get(lang_code, UNKNOWN_TRANSLATION)
    if name == UNKNOWN_TRANSLATION:
        log.error("Missing translation for language name: {}".format(lang_code))

    return name
    try:
        return get_language_names(locale)[lang_code]
    except KeyError:
        log.error('Missing translation for language name: {}'.format(lang_code))
        return "Unknown"


def get_language_code_from_name(locale, language_names, remainder=None):
    language_names = set(strip_whitespaces(x).lower() for x in language_names if x)
def get_language_codes(locale, language_names, remainder=None):
    language_names = set(x.strip().lower() for x in language_names if x)
    lang = list()
    for key, val in get_language_names(locale).items():
        val = val.lower()
        if val in language_names:
            lang.append(key)
            language_names.remove(val)
    for k, v in get_language_names(locale).items():
        v = v.lower()
        if v in language_names:
            lang.append(k)
            language_names.remove(v)
    if remainder is not None and language_names:
        remainder.extend(language_names)
    return lang
|
||||
|
||||
|
||||
def get_valid_language_codes_from_code(locale, language_names, remainder=None):
|
||||
|
||||
def get_valid_language_codes(locale, language_names, remainder=None):
|
||||
lang = list()
|
||||
if "" in language_names:
|
||||
language_names.remove("")
|
||||
@ -101,6 +95,6 @@ def get_lang3(lang):
|
||||
ret_value = lang
|
||||
else:
|
||||
ret_value = ""
|
||||
except (KeyError, AttributeError):
|
||||
except KeyError:
|
||||
ret_value = lang
|
||||
return ret_value
|
||||
|
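The hunks above show master replacing the `iso639` package with a `pycountry` fallback whose records are adapted to the iso639 attribute names (`part1`/`part3`) that the rest of the code expects. A standalone sketch of that adapter pattern (the `SimpleNamespace` wrapper and the stand-in language record are mine, for illustration only, not the project's exact code):

```python
# Adapter sketch: map pycountry-style attributes (alpha_2/alpha_3) onto the
# iso639-style names (part1/part3) that callers in the diff above expect.
from types import SimpleNamespace

def _copy_fields(lang):
    return SimpleNamespace(
        part1=getattr(lang, "alpha_2", None),   # two-letter code
        part3=getattr(lang, "alpha_3", None),   # three-letter code
        name=getattr(lang, "name", None),
    )

# Stand-in for a pycountry Language record (pycountry itself not required here)
fake_german = SimpleNamespace(alpha_2="de", alpha_3="deu", name="German")

adapted = _copy_fields(fake_german)
print(adapted.part1, adapted.part3, adapted.name)  # de deu German
```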
@@ -27,9 +27,10 @@ import datetime
import mimetypes
from uuid import uuid4

from flask import Blueprint, request, url_for, g
# from babel.dates import format_date
from flask import Blueprint, request, url_for
from flask_babel import format_date
from .cw_login import current_user
from flask_login import current_user

from . import constants, logger

@@ -43,8 +44,6 @@ def url_for_other_page(page):
args = request.view_args.copy()
args['page'] = page
for get, val in request.args.items():
if get == "page":
continue
args[get] = val
return url_for(request.endpoint, **args)

@@ -113,12 +112,21 @@ def yesno(value, yes, no):

@jinjia.app_template_filter('formatfloat')
def formatfloat(value, decimals=1):
if not value or (isinstance(value, str) and not value.is_numeric()):
return value
formated_value = ('{0:.' + str(decimals) + 'f}').format(value)
if formated_value.endswith('.' + "0" * decimals):
formated_value = formated_value.rstrip('0').rstrip('.')
return formated_value
value = 0 if not value else value
return ('{0:.' + str(decimals) + 'f}').format(value).rstrip('0').rstrip('.')


@jinjia.app_template_filter('formatseriesindex')
def formatseriesindex_filter(series_index):
if series_index:
try:
if int(series_index) - series_index == 0:
return int(series_index)
else:
return series_index
except ValueError:
return series_index
return 0


@jinjia.app_template_filter('escapedlink')
@@ -172,12 +180,3 @@ def get_cover_srcset(series):
url = url_for('web.get_series_cover', series_id=series.id, resolution=shortname, c=cache_timestamp())
srcset.append(f'{url} {resolution}x')
return ', '.join(srcset)


@jinjia.app_template_filter('music')
def contains_music(book_formats):
result = False
for format in book_formats:
if format.format.lower() in g.constants.EXTENSIONS_AUDIO:
result = True
return result
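The two `formatfloat` bodies in the hunk above differ in when trailing zeros are stripped: master only collapses a fraction that is entirely zero, while 0.6.19 always strips, so `2.50` becomes `2.5`. A standalone sketch of both behaviors (no Flask/Jinja; the function names here are mine):

```python
# Sketch of the two formatfloat behaviors compared in the diff above.
def formatfloat_new(value, decimals=1):
    # master: strip the fraction only when it is entirely zero
    formatted = ('{0:.' + str(decimals) + 'f}').format(value)
    if formatted.endswith('.' + '0' * decimals):
        formatted = formatted.rstrip('0').rstrip('.')
    return formatted

def formatfloat_old(value, decimals=1):
    # 0.6.19: always strip trailing zeros, so significant zeros are lost
    value = 0 if not value else value
    return ('{0:.' + str(decimals) + 'f}').format(value).rstrip('0').rstrip('.')

print(formatfloat_new(2.5, 2))  # 2.50
print(formatfloat_old(2.5, 2))  # 2.5
print(formatfloat_new(2.0, 2))  # 2
```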
385  cps/kobo.py
@@ -18,10 +18,9 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import base64
from datetime import datetime, timezone
import datetime
import os
import uuid
import zipfile
from time import gmtime, strftime
import json
from urllib.parse import unquote
@@ -36,7 +35,7 @@ from flask import (
redirect,
abort
)
from .cw_login import current_user
from flask_login import current_user
from werkzeug.datastructures import Headers
from sqlalchemy import func
from sqlalchemy.sql.expression import and_, or_
@@ -44,10 +43,9 @@ from sqlalchemy.exc import StatementError
from sqlalchemy.sql import select
import requests


from . import config, logger, kobo_auth, db, calibre_db, helper, shelf as shelf_lib, ub, csrf, kobo_sync_status
from . import isoLanguages
from .epub import get_epub_layout
from .constants import COVER_THUMBNAIL_SMALL, COVER_THUMBNAIL_MEDIUM, COVER_THUMBNAIL_LARGE
from .constants import sqlalchemy_version2, COVER_THUMBNAIL_SMALL
from .helper import get_download_link
from .services import SyncToken as SyncToken
from .web import download_required
@@ -55,7 +53,7 @@ from .kobo_auth import requires_kobo_auth, get_auth_token

KOBO_FORMATS = {"KEPUB": ["KEPUB"], "EPUB": ["EPUB3", "EPUB"]}
KOBO_STOREAPI_URL = "https://storeapi.kobo.com"
KOBO_IMAGEHOST_URL = "https://cdn.kobo.com/book-images"
KOBO_IMAGEHOST_URL = "https://kbimages1-a.akamaihd.net"

SYNC_ITEM_LIMIT = 100

@@ -106,29 +104,24 @@ def make_request_to_kobo_store(sync_token=None):
return store_response


def redirect_or_proxy_request(auth=False):
def redirect_or_proxy_request():
if config.config_kobo_proxy:
try:
if request.method == "GET":
alfa = redirect(get_store_url_for_current_request(), 307)
return alfa
else:
# The Kobo device turns other request types into GET requests on redirects,
# so we instead proxy to the Kobo store ourselves.
store_response = make_request_to_kobo_store()
if request.method == "GET":
return redirect(get_store_url_for_current_request(), 307)
else:
# The Kobo device turns other request types into GET requests on redirects,
# so we instead proxy to the Kobo store ourselves.
store_response = make_request_to_kobo_store()

response_headers = store_response.headers
for header_key in CONNECTION_SPECIFIC_HEADERS:
response_headers.pop(header_key, default=None)
response_headers = store_response.headers
for header_key in CONNECTION_SPECIFIC_HEADERS:
response_headers.pop(header_key, default=None)

return make_response(
store_response.content, store_response.status_code, response_headers.items()
)
except Exception as e:
log.error("Failed to receive or parse response from Kobo's endpoint: {}".format(e))
if auth:
return make_calibre_web_auth_response()
return make_response(jsonify({}))
return make_response(
store_response.content, store_response.status_code, response_headers.items()
)
else:
return make_response(jsonify({}))


def convert_to_kobo_timestamp_string(timestamp):
@@ -136,34 +129,30 @@ def convert_to_kobo_timestamp_string(timestamp):
return timestamp.strftime("%Y-%m-%dT%H:%M:%SZ")
except AttributeError as exc:
log.debug("Timestamp not valid: {}".format(exc))
return datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
return datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")

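Both sides of the `convert_to_kobo_timestamp_string` hunk above render the same wire format; master only swaps the naive `datetime.datetime.utcnow()` (deprecated since Python 3.12) for a timezone-aware `datetime.now(timezone.utc)`. A minimal sketch of that change:

```python
# Sketch: same Kobo timestamp format, but with a timezone-aware "now"
# as master uses, instead of the deprecated naive utcnow().
from datetime import datetime, timezone

KOBO_TS_FORMAT = "%Y-%m-%dT%H:%M:%SZ"

aware = datetime.now(timezone.utc)   # master's replacement
stamp = aware.strftime(KOBO_TS_FORMAT)

print(aware.tzinfo)  # UTC
print(stamp)         # e.g. 2024-05-01T12:34:56Z
```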
@kobo.route("/v1/library/sync")
@requires_kobo_auth
# @download_required
@download_required
def HandleSyncRequest():
if not current_user.role_download():
log.info("Users need download permissions for syncing library to Kobo reader")
return abort(403)
sync_token = SyncToken.SyncToken.from_headers(request.headers)
log.info("Kobo library sync request received")
log.info("Kobo library sync request received.")
log.debug("SyncToken: {}".format(sync_token))
log.debug("Download link format {}".format(get_download_url_for_book('[bookid]', '[bookformat]')))
if not current_app.wsgi_app.is_proxied:
log.debug('Kobo: Received unproxied request, changed request port to external server port')

# if no books synced don't respect sync_token
if not ub.session.query(ub.KoboSyncedBooks).filter(ub.KoboSyncedBooks.user_id == current_user.id).count():
sync_token.books_last_modified = datetime.min
sync_token.books_last_created = datetime.min
sync_token.reading_state_last_modified = datetime.min
sync_token.books_last_modified = datetime.datetime.min
sync_token.books_last_created = datetime.datetime.min
sync_token.reading_state_last_modified = datetime.datetime.min

new_books_last_modified = sync_token.books_last_modified  # needed for sync selected shelfs only
new_books_last_created = sync_token.books_last_created  # needed to distinguish between new and changed entitlement
new_reading_state_last_modified = sync_token.reading_state_last_modified

new_archived_last_modified = datetime.min
new_archived_last_modified = datetime.datetime.min
sync_results = []

# We reload the book database so that the user gets a fresh view of the library
@@ -173,10 +162,16 @@ def HandleSyncRequest():
only_kobo_shelves = current_user.kobo_only_shelves_sync

if only_kobo_shelves:
changed_entries = calibre_db.session.query(db.Books,
ub.ArchivedBook.last_modified,
ub.BookShelf.date_added,
ub.ArchivedBook.is_archived)
if sqlalchemy_version2:
changed_entries = select(db.Books,
ub.ArchivedBook.last_modified,
ub.BookShelf.date_added,
ub.ArchivedBook.is_archived)
else:
changed_entries = calibre_db.session.query(db.Books,
ub.ArchivedBook.last_modified,
ub.BookShelf.date_added,
ub.ArchivedBook.is_archived)
changed_entries = (changed_entries
.join(db.Data).outerjoin(ub.ArchivedBook, and_(db.Books.id == ub.ArchivedBook.book_id,
ub.ArchivedBook.user_id == current_user.id))
@@ -193,9 +188,12 @@ def HandleSyncRequest():
.filter(ub.Shelf.kobo_sync)
.distinct())
else:
changed_entries = calibre_db.session.query(db.Books,
ub.ArchivedBook.last_modified,
ub.ArchivedBook.is_archived)
if sqlalchemy_version2:
changed_entries = select(db.Books, ub.ArchivedBook.last_modified, ub.ArchivedBook.is_archived)
else:
changed_entries = calibre_db.session.query(db.Books,
ub.ArchivedBook.last_modified,
ub.ArchivedBook.is_archived)
changed_entries = (changed_entries
.join(db.Data).outerjoin(ub.ArchivedBook, and_(db.Books.id == ub.ArchivedBook.book_id,
ub.ArchivedBook.user_id == current_user.id))
@@ -207,16 +205,19 @@ def HandleSyncRequest():
.order_by(db.Books.id))

reading_states_in_new_entitlements = []
books = changed_entries.limit(SYNC_ITEM_LIMIT)
if sqlalchemy_version2:
books = calibre_db.session.execute(changed_entries.limit(SYNC_ITEM_LIMIT))
else:
books = changed_entries.limit(SYNC_ITEM_LIMIT)
log.debug("Books to Sync: {}".format(len(books.all())))
for book in books:
formats = [data.format for data in book.Books.data]
if 'KEPUB' not in formats and config.config_kepubifypath and 'EPUB' in formats:
helper.convert_book_format(book.Books.id, config.get_book_path(), 'EPUB', 'KEPUB', current_user.name)
helper.convert_book_format(book.Books.id, config.config_calibre_dir, 'EPUB', 'KEPUB', current_user.name)

kobo_reading_state = get_or_create_reading_state(book.Books.id)
entitlement = {
"BookEntitlement": create_book_entitlement(book.Books, archived=(book.is_archived==True)),
"BookEntitlement": create_book_entitlement(book.Books, archived=(book.is_archived == True)),
"BookMetadata": get_metadata(book.Books),
}

@@ -225,7 +226,7 @@ def HandleSyncRequest():
new_reading_state_last_modified = max(new_reading_state_last_modified, kobo_reading_state.last_modified)
reading_states_in_new_entitlements.append(book.Books.id)

ts_created = book.Books.timestamp.replace(tzinfo=None)
ts_created = book.Books.timestamp

try:
ts_created = max(ts_created, book.date_added)
@@ -238,7 +239,7 @@ def HandleSyncRequest():
sync_results.append({"ChangedEntitlement": entitlement})

new_books_last_modified = max(
book.Books.last_modified.replace(tzinfo=None), new_books_last_modified
book.Books.last_modified, new_books_last_modified
)
try:
new_books_last_modified = max(
@@ -250,16 +251,27 @@ def HandleSyncRequest():
new_books_last_created = max(ts_created, new_books_last_created)
kobo_sync_status.add_synced_books(book.Books.id)

max_change = changed_entries.filter(ub.ArchivedBook.is_archived)\
.filter(ub.ArchivedBook.user_id == current_user.id) \
.order_by(func.datetime(ub.ArchivedBook.last_modified).desc()).first()
if sqlalchemy_version2:
max_change = calibre_db.session.execute(changed_entries
.filter(ub.ArchivedBook.is_archived)
.filter(ub.ArchivedBook.user_id == current_user.id)
.order_by(func.datetime(ub.ArchivedBook.last_modified).desc()))\
.columns(db.Books).first()
else:
max_change = changed_entries.from_self().filter(ub.ArchivedBook.is_archived)\
.filter(ub.ArchivedBook.user_id == current_user.id) \
.order_by(func.datetime(ub.ArchivedBook.last_modified).desc()).first()

max_change = max_change.last_modified if max_change else new_archived_last_modified

new_archived_last_modified = max(new_archived_last_modified, max_change)

# no. of books returned
book_count = changed_entries.count()
if sqlalchemy_version2:
entries = calibre_db.session.execute(changed_entries).all()
book_count = len(entries)
else:
book_count = changed_entries.count()
# last entry:
cont_sync = bool(book_count)
log.debug("Remaining books to Sync: {}".format(book_count))
@@ -322,13 +334,13 @@ def generate_sync_response(sync_token, sync_results, set_cont=False):
extra_headers["x-kobo-recent-reads"] = store_response.headers.get("x-kobo-recent-reads")

except Exception as ex:
log.error_or_exception("Failed to receive or parse response from Kobo's sync endpoint: {}".format(ex))
log.error("Failed to receive or parse response from Kobo's sync endpoint: {}".format(ex))
if set_cont:
extra_headers["x-kobo-sync"] = "continue"
sync_token.to_headers(extra_headers)

# log.debug("Kobo Sync Content: {}".format(sync_results))
# jsonify decodes the Unicode string different to what kobo expects
# jsonify decodes the unicode string different to what kobo expects
response = make_response(json.dumps(sync_results), extra_headers)
response.headers["Content-Type"] = "application/json; charset=utf-8"
return response
@@ -343,7 +355,7 @@ def HandleMetadataRequest(book_uuid):
log.info("Kobo library metadata request received for book %s" % book_uuid)
book = calibre_db.get_book_by_uuid(book_uuid)
if not book or not book.data:
log.info("Book %s not found in database", book_uuid)
log.info(u"Book %s not found in database", book_uuid)
return redirect_or_proxy_request()

metadata = get_metadata(book)
@@ -352,7 +364,7 @@ def HandleMetadataRequest(book_uuid):
return response


def get_download_url_for_book(book_id, book_format):
def get_download_url_for_book(book, book_format):
if not current_app.wsgi_app.is_proxied:
if ':' in request.host and not request.host.endswith(']'):
host = "".join(request.host.split(':')[:-1])
@@ -364,13 +376,13 @@ def get_download_url_for_book(book_id, book_format):
url_base=host,
url_port=config.config_external_port,
auth_token=get_auth_token(),
book_id=book_id,
book_id=book.id,
book_format=book_format.lower()
)
return url_for(
"kobo.download_book",
auth_token=kobo_auth.get_auth_token(),
book_id=book_id,
book_id=book.id,
book_format=book_format.lower(),
_external=True,
)
@@ -380,7 +392,7 @@ def create_book_entitlement(book, archived):
book_uuid = str(book.uuid)
return {
"Accessibility": "Full",
"ActivePeriod": {"From": convert_to_kobo_timestamp_string(datetime.now(timezone.utc))},
"ActivePeriod": {"From": convert_to_kobo_timestamp_string(datetime.datetime.utcnow())},
"Created": convert_to_kobo_timestamp_string(book.timestamp),
"CrossRevisionId": book_uuid,
"Id": book_uuid,
@@ -428,13 +440,7 @@ def get_series(book):


def get_seriesindex(book):
return book.series_index if isinstance(book.series_index, float) else 1


def get_language(book):
if not book.languages:
return 'en'
return isoLanguages.get(part3=book.languages[0].lang_code).part1
return book.series_index or 1


def get_metadata(book):
@@ -446,21 +452,16 @@ def get_metadata(book):
continue
for kobo_format in KOBO_FORMATS[book_data.format]:
# log.debug('Id: %s, Format: %s' % (book.id, kobo_format))
try:
if get_epub_layout(book, book_data) == 'pre-paginated':
kobo_format = 'EPUB3FL'
download_urls.append(
{
"Format": kobo_format,
"Size": book_data.uncompressed_size,
"Url": get_download_url_for_book(book.id, book_data.format),
# The Kobo forma accepts platforms: (Generic, Android)
"Platform": "Generic",
# "DrmType": "None", # Not required
}
)
except (zipfile.BadZipfile, FileNotFoundError) as e:
log.error(e)
download_urls.append(
{
"Format": kobo_format,
"Size": book_data.uncompressed_size,
"Url": get_download_url_for_book(book, book_data.format),
# The Kobo forma accepts platforms: (Generic, Android)
"Platform": "Generic",
# "DrmType": "None", # Not required
}
)

book_uuid = book.uuid
metadata = {
@@ -479,7 +480,7 @@ def get_metadata(book):
"IsInternetArchive": False,
"IsPreOrder": False,
"IsSocialEnabled": True,
"Language": get_language(book),
"Language": "en",
"PhoneticPronunciations": {},
"PublicationDate": convert_to_kobo_timestamp_string(book.pubdate),
"Publisher": {"Imprint": "", "Name": get_publisher(book), },
@@ -491,16 +492,14 @@ def get_metadata(book):

if get_series(book):
name = get_series(book)
try:
metadata["Series"] = {
"Name": get_series(book),
"Number": get_seriesindex(book),  # ToDo Check int() ?
"NumberFloat": float(get_seriesindex(book)),
# Get a deterministic id based on the series name.
"Id": str(uuid.uuid3(uuid.NAMESPACE_DNS, name)),
}
except Exception as e:
print(e)
metadata["Series"] = {
"Name": get_series(book),
"Number": get_seriesindex(book),  # ToDo Check int() ?
"NumberFloat": float(get_seriesindex(book)),
# Get a deterministic id based on the series name.
"Id": str(uuid.uuid3(uuid.NAMESPACE_DNS, name)),
}

return metadata


@@ -509,7 +508,7 @@ def get_metadata(book):
@requires_kobo_auth
# Creates a Shelf with the given items, and returns the shelf's uuid.
def HandleTagCreate():
# catch delete requests, otherwise they are handled in the book delete handler
# catch delete requests, otherwise the are handled in the book delete handler
if request.method == "DELETE":
abort(405)
name, items = None, None
@@ -703,12 +702,20 @@ def sync_shelves(sync_token, sync_results, only_kobo_shelves=False):
})
extra_filters.append(ub.Shelf.kobo_sync)

shelflist = ub.session.query(ub.Shelf).outerjoin(ub.BookShelf).filter(
or_(func.datetime(ub.Shelf.last_modified) > sync_token.tags_last_modified,
func.datetime(ub.BookShelf.date_added) > sync_token.tags_last_modified),
ub.Shelf.user_id == current_user.id,
*extra_filters
).distinct().order_by(func.datetime(ub.Shelf.last_modified).asc())
if sqlalchemy_version2:
shelflist = ub.session.execute(select(ub.Shelf).outerjoin(ub.BookShelf).filter(
or_(func.datetime(ub.Shelf.last_modified) > sync_token.tags_last_modified,
func.datetime(ub.BookShelf.date_added) > sync_token.tags_last_modified),
ub.Shelf.user_id == current_user.id,
*extra_filters
).distinct().order_by(func.datetime(ub.Shelf.last_modified).asc())).columns(ub.Shelf)
else:
shelflist = ub.session.query(ub.Shelf).outerjoin(ub.BookShelf).filter(
or_(func.datetime(ub.Shelf.last_modified) > sync_token.tags_last_modified,
func.datetime(ub.BookShelf.date_added) > sync_token.tags_last_modified),
ub.Shelf.user_id == current_user.id,
*extra_filters
).distinct().order_by(func.datetime(ub.Shelf.last_modified).asc())

for shelf in shelflist:
if not shelf_lib.check_shelf_view_permissions(shelf):
@@ -732,7 +739,7 @@ def sync_shelves(sync_token, sync_results, only_kobo_shelves=False):
ub.session_commit()


# Creates a Kobo "Tag" object from an ub.Shelf object
# Creates a Kobo "Tag" object from a ub.Shelf object
def create_kobo_tag(shelf):
tag = {
"Created": convert_to_kobo_timestamp_string(shelf.created),
@@ -745,7 +752,7 @@ def create_kobo_tag(shelf):
for book_shelf in shelf.books:
book = calibre_db.get_book(book_shelf.book_id)
if not book:
log.info("Book (id: %s) in BookShelf (id: %s) not found in book database", book_shelf.book_id, shelf.id)
log.info(u"Book (id: %s) in BookShelf (id: %s) not found in book database", book_shelf.book_id, shelf.id)
continue
tag["Items"].append(
{
@@ -762,7 +769,7 @@ def create_kobo_tag(shelf):
def HandleStateRequest(book_uuid):
book = calibre_db.get_book_by_uuid(book_uuid)
if not book or not book.data:
log.info("Book %s not found in database", book_uuid)
log.info(u"Book %s not found in database", book_uuid)
return redirect_or_proxy_request()

kobo_reading_state = get_or_create_reading_state(book.id)
@@ -802,7 +809,7 @@ def HandleStateRequest(book_uuid):
if new_book_read_status == ub.ReadBook.STATUS_IN_PROGRESS \
and new_book_read_status != book_read.read_status:
book_read.times_started_reading += 1
book_read.last_time_started_reading = datetime.now(timezone.utc)
book_read.last_time_started_reading = datetime.datetime.utcnow()
book_read.read_status = new_book_read_status
update_results_response["StatusInfoResult"] = {"Result": "Success"}
except (KeyError, TypeError, ValueError, StatementError):
@@ -909,31 +916,20 @@ def get_current_bookmark_response(current_bookmark):
@kobo.route("/<book_uuid>/<width>/<height>/<Quality>/<isGreyscale>/image.jpg")
@requires_kobo_auth
def HandleCoverImageRequest(book_uuid, width, height, Quality, isGreyscale):
try:
if int(height) > 1000:
resolution = COVER_THUMBNAIL_LARGE
elif int(height) > 500:
resolution = COVER_THUMBNAIL_MEDIUM
book_cover = helper.get_book_cover_with_uuid(book_uuid, resolution=COVER_THUMBNAIL_SMALL)
if not book_cover:
if config.config_kobo_proxy:
log.debug("Cover for unknown book: %s proxied to kobo" % book_uuid)
return redirect(KOBO_IMAGEHOST_URL +
"/{book_uuid}/{width}/{height}/false/image.jpg".format(book_uuid=book_uuid,
width=width,
height=height), 307)
else:
resolution = COVER_THUMBNAIL_SMALL
except ValueError:
log.error("Requested height %s of book %s is invalid" % (book_uuid, height))
resolution = COVER_THUMBNAIL_SMALL
book_cover = helper.get_book_cover_with_uuid(book_uuid, resolution=resolution)
if book_cover:
log.debug("Serving local cover image of book %s" % book_uuid)
return book_cover

if not config.config_kobo_proxy:
log.debug("Returning 404 for cover image of unknown book %s" % book_uuid)
# additional proxy request make no sense, -> direct return
return abort(404)

log.debug("Redirecting request for cover image of unknown book %s to Kobo" % book_uuid)
return redirect(KOBO_IMAGEHOST_URL +
"/{book_uuid}/{width}/{height}/false/image.jpg".format(book_uuid=book_uuid,
width=width,
height=height), 307)
log.debug("Cover for unknown book: %s requested" % book_uuid)
# additional proxy request make no sense, -> direct return
return make_response(jsonify({}))
log.debug("Cover request received for book %s" % book_uuid)
return book_cover

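In the `HandleCoverImageRequest` hunk above, master derives a thumbnail resolution from the requested cover height, falling back to the small size when the height is not a number. A standalone sketch of that selection logic (the constant values below are placeholders, not the ones defined in cps/constants.py):

```python
# Sketch of master's height -> thumbnail-resolution mapping. Placeholder
# constants; the real values live in cps/constants.py.
COVER_THUMBNAIL_SMALL, COVER_THUMBNAIL_MEDIUM, COVER_THUMBNAIL_LARGE = 1, 2, 3

def pick_resolution(height):
    try:
        if int(height) > 1000:
            return COVER_THUMBNAIL_LARGE
        elif int(height) > 500:
            return COVER_THUMBNAIL_MEDIUM
        return COVER_THUMBNAIL_SMALL
    except ValueError:
        # invalid height in the URL: fall back to the small thumbnail
        return COVER_THUMBNAIL_SMALL

print(pick_resolution("1200"))  # 3
print(pick_resolution("800"))   # 2
print(pick_resolution("abc"))   # 1
```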
@kobo.route("")
@@ -948,7 +944,7 @@ def HandleBookDeletionRequest(book_uuid):
log.info("Kobo book delete request received for book %s" % book_uuid)
book = calibre_db.get_book_by_uuid(book_uuid)
if not book:
log.info("Book %s not found in database", book_uuid)
log.info(u"Book %s not found in database", book_uuid)
return redirect_or_proxy_request()

book_id = book.id
@@ -960,11 +956,9 @@ def HandleBookDeletionRequest(book_uuid):

# TODO: Implement the following routes
@csrf.exempt
@kobo.route("/v1/library/<dummy>", methods=["DELETE", "GET", "POST"])
@kobo.route("/v1/library/<dummy>/preview", methods=["POST"])
@kobo.route("/v1/library/<dummy>", methods=["DELETE", "GET"])
def HandleUnimplementedRequest(dummy=None):
log.debug("Unimplemented Library Request received: %s (request is forwarded to kobo if configured)",
request.base_url)
log.debug("Unimplemented Library Request received: %s", request.base_url)
return redirect_or_proxy_request()


@@ -975,9 +969,8 @@ def HandleUnimplementedRequest(dummy=None):
@kobo.route("/v1/user/wishlist", methods=["GET", "POST"])
@kobo.route("/v1/user/recommendations", methods=["GET", "POST"])
@kobo.route("/v1/analytics/<dummy>", methods=["GET", "POST"])
@kobo.route("/v1/assets", methods=["GET"])
def HandleUserRequest(dummy=None):
log.debug("Unimplemented User Request received: %s (request is forwarded to kobo if configured)", request.base_url)
log.debug("Unimplemented User Request received: %s", request.base_url)
return redirect_or_proxy_request()


@@ -1017,8 +1010,7 @@ def handle_getests():
@kobo.route("/v1/affiliate", methods=["GET", "POST"])
@kobo.route("/v1/deals", methods=["GET", "POST"])
def HandleProductsRequest(dummy=None):
log.debug("Unimplemented Products Request received: %s (request is forwarded to kobo if configured)",
request.base_url)
log.debug("Unimplemented Products Request received: %s", request.base_url)
return redirect_or_proxy_request()


@@ -1035,7 +1027,7 @@ def make_calibre_web_auth_response():
"RefreshToken": RefreshToken,
"TokenType": "Bearer",
"TrackingId": str(uuid.uuid4()),
"UserKey": content.get('UserKey', ""),
"UserKey": content['UserKey'],
}
)
)
@@ -1048,7 +1040,7 @@ def HandleAuthRequest():
log.debug('Kobo Auth request')
if config.config_kobo_proxy:
try:
return redirect_or_proxy_request(auth=True)
return redirect_or_proxy_request()
except Exception:
log.error("Failed to receive or parse response from Kobo's auth endpoint. Falling back to un-proxied mode.")
return make_calibre_web_auth_response()
@@ -1132,37 +1124,25 @@ def download_book(book_id, book_format):

def NATIVE_KOBO_RESOURCES():
return {
"account_page": "https://www.kobo.com/account/settings",
"account_page": "https://secure.kobobooks.com/profile",
"account_page_rakuten": "https://my.rakuten.co.jp/",
"add_device": "https://storeapi.kobo.com/v1/user/add-device",
"add_entitlement": "https://storeapi.kobo.com/v1/library/{RevisionIds}",
"affiliaterequest": "https://storeapi.kobo.com/v1/affiliate",
"assets": "https://storeapi.kobo.com/v1/assets",
"audiobook": "https://storeapi.kobo.com/v1/products/audiobooks/{ProductId}",
"audiobook_detail_page": "https://www.kobo.com/{region}/{language}/audiobook/{slug}",
"audiobook_landing_page": "https://www.kobo.com/{region}/{language}/audiobooks",
"audiobook_preview": "https://storeapi.kobo.com/v1/products/audiobooks/{Id}/preview",
"audiobook_purchase_withcredit": "https://storeapi.kobo.com/v1/store/audiobook/{Id}",
"audiobook_subscription_orange_deal_inclusion_url": "https://authorize.kobo.com/inclusion",
"authorproduct_recommendations": "https://storeapi.kobo.com/v1/products/books/authors/recommendations",
"autocomplete": "https://storeapi.kobo.com/v1/products/autocomplete",
"blackstone_header": {
"key": "x-amz-request-payer",
"value": "requester"
},
"blackstone_header": {"key": "x-amz-request-payer", "value": "requester"},
"book": "https://storeapi.kobo.com/v1/products/books/{ProductId}",
"book_detail_page": "https://www.kobo.com/{region}/{language}/ebook/{slug}",
"book_detail_page_rakuten": "http://books.rakuten.co.jp/rk/{crossrevisionid}",
"book_landing_page": "https://www.kobo.com/ebooks",
"book_detail_page": "https://store.kobobooks.com/{culture}/ebook/{slug}",
"book_detail_page_rakuten": "https://books.rakuten.co.jp/rk/{crossrevisionid}",
"book_landing_page": "https://store.kobobooks.com/ebooks",
"book_subscription": "https://storeapi.kobo.com/v1/products/books/subscriptions",
"browse_history": "https://storeapi.kobo.com/v1/user/browsehistory",
"categories": "https://storeapi.kobo.com/v1/categories",
"categories_page": "https://www.kobo.com/ebooks/categories",
"categories_page": "https://store.kobobooks.com/ebooks/categories",
"category": "https://storeapi.kobo.com/v1/categories/{CategoryId}",
"category_featured_lists": "https://storeapi.kobo.com/v1/categories/{CategoryId}/featured",
"category_products": "https://storeapi.kobo.com/v1/categories/{CategoryId}/products",
"checkout_borrowed_book": "https://storeapi.kobo.com/v1/library/borrow",
"client_authd_referral": "https://authorize.kobo.com/api/AuthenticatedReferral/client/v1/getLink",
"configuration_data": "https://storeapi.kobo.com/v1/configuration",
"content_access_book": "https://storeapi.kobo.com/v1/products/books/{ProductId}/access",
"customer_care_live_chat": "https://v2.zopim.com/widget/livechat.html?key=Y6gwUmnu4OATxN3Tli4Av9bYN319BTdO",
@@ -1173,109 +1153,92 @@ def NATIVE_KOBO_RESOURCES():
"delete_tag_items": "https://storeapi.kobo.com/v1/library/tags/{TagId}/items/delete",
"device_auth": "https://storeapi.kobo.com/v1/auth/device",
"device_refresh": "https://storeapi.kobo.com/v1/auth/refresh",
"dictionary_host": "https://ereaderfiles.kobo.com",
"dictionary_host": "https://kbdownload1-a.akamaihd.net",
"discovery_host": "https://discovery.kobobooks.com",
"ereaderdevices": "https://storeapi.kobo.com/v2/products/EReaderDeviceFeeds",
"eula_page": "https://www.kobo.com/termsofuse?style=onestore",
"exchange_auth": "https://storeapi.kobo.com/v1/auth/exchange",
"external_book": "https://storeapi.kobo.com/v1/products/books/external/{Ids}",
"facebook_sso_page": "https://authorize.kobo.com/signin/provider/Facebook/login?returnUrl=http://kobo.com/",
"facebook_sso_page":
"https://authorize.kobo.com/signin/provider/Facebook/login?returnUrl=http://store.kobobooks.com/",
"featured_list": "https://storeapi.kobo.com/v1/products/featured/{FeaturedListId}",
"featured_lists": "https://storeapi.kobo.com/v1/products/featured",
"free_books_page": {
"EN": "https://www.kobo.com/{region}/{language}/p/free-ebooks",
"FR": "https://www.kobo.com/{region}/{language}/p/livres-gratuits",
"IT": "https://www.kobo.com/{region}/{language}/p/libri-gratuiti",
"NL": "https://www.kobo.com/{region}/{language}/List/bekijk-het-overzicht-van-gratis-ebooks/QpkkVWnUw8sxmgjSlCbJRg",
"PT": "https://www.kobo.com/{region}/{language}/p/livros-gratis"
"NL": "https://www.kobo.com/{region}/{language}/"
"List/bekijk-het-overzicht-van-gratis-ebooks/QpkkVWnUw8sxmgjSlCbJRg",
"PT": "https://www.kobo.com/{region}/{language}/p/livros-gratis",
|
||||
},
|
||||
"fte_feedback": "https://storeapi.kobo.com/v1/products/ftefeedback",
|
||||
"funnel_metrics": "https://storeapi.kobo.com/v1/funnelmetrics",
|
||||
"get_download_keys": "https://storeapi.kobo.com/v1/library/downloadkeys",
|
||||
"get_download_link": "https://storeapi.kobo.com/v1/library/downloadlink",
|
||||
"get_tests_request": "https://storeapi.kobo.com/v1/analytics/gettests",
|
||||
"giftcard_epd_redeem_url": "https://www.kobo.com/{storefront}/{language}/redeem-ereader",
|
||||
"giftcard_redeem_url": "https://www.kobo.com/{storefront}/{language}/redeem",
|
||||
"gpb_flow_enabled": "False",
|
||||
"help_page": "http://www.kobo.com/help",
|
||||
"image_host": "//cdn.kobo.com/book-images/",
|
||||
"image_url_quality_template": "https://cdn.kobo.com/book-images/{ImageId}/{Width}/{Height}/{Quality}/{IsGreyscale}/image.jpg",
|
||||
"image_url_template": "https://cdn.kobo.com/book-images/{ImageId}/{Width}/{Height}/false/image.jpg",
|
||||
"kobo_audiobooks_credit_redemption": "False",
|
||||
"kobo_audiobooks_enabled": "True",
|
||||
"help_page": "https://www.kobo.com/help",
|
||||
"kobo_audiobooks_enabled": "False",
|
||||
"kobo_audiobooks_orange_deal_enabled": "False",
|
||||
"kobo_audiobooks_subscriptions_enabled": "False",
|
||||
"kobo_display_price": "True",
|
||||
"kobo_dropbox_link_account_enabled": "False",
|
||||
"kobo_google_tax": "False",
|
||||
"kobo_googledrive_link_account_enabled": "False",
|
||||
"kobo_nativeborrow_enabled": "False",
|
||||
"kobo_onedrive_link_account_enabled": "False",
|
||||
"kobo_nativeborrow_enabled": "True",
|
||||
"kobo_onestorelibrary_enabled": "False",
|
||||
"kobo_privacyCentre_url": "https://www.kobo.com/privacy",
|
||||
"kobo_redeem_enabled": "True",
|
||||
"kobo_shelfie_enabled": "False",
|
||||
"kobo_subscriptions_enabled": "True",
|
||||
"kobo_superpoints_enabled": "True",
|
||||
"kobo_subscriptions_enabled": "False",
|
||||
"kobo_superpoints_enabled": "False",
|
||||
"kobo_wishlist_enabled": "True",
|
||||
"library_book": "https://storeapi.kobo.com/v1/user/library/books/{LibraryItemId}",
|
||||
"library_items": "https://storeapi.kobo.com/v1/user/library",
|
||||
"library_metadata": "https://storeapi.kobo.com/v1/library/{Ids}/metadata",
|
||||
"library_prices": "https://storeapi.kobo.com/v1/user/library/previews/prices",
|
||||
"library_search": "https://storeapi.kobo.com/v1/library/search",
|
||||
"library_stack": "https://storeapi.kobo.com/v1/user/library/stacks/{LibraryItemId}",
|
||||
"library_sync": "https://storeapi.kobo.com/v1/library/sync",
|
||||
"love_dashboard_page": "https://www.kobo.com/{region}/{language}/kobosuperpoints",
|
||||
"love_points_redemption_page": "https://www.kobo.com/{region}/{language}/KoboSuperPointsRedemption?productId={ProductId}",
|
||||
"magazine_landing_page": "https://www.kobo.com/emagazines",
|
||||
"more_sign_in_options": "https://authorize.kobo.com/signin?returnUrl=http://kobo.com/#allProviders",
|
||||
"notebooks": "https://storeapi.kobo.com/api/internal/notebooks",
|
||||
"love_dashboard_page": "https://store.kobobooks.com/{culture}/kobosuperpoints",
|
||||
"love_points_redemption_page":
|
||||
"https://store.kobobooks.com/{culture}/KoboSuperPointsRedemption?productId={ProductId}",
|
||||
"magazine_landing_page": "https://store.kobobooks.com/emagazines",
|
||||
"notifications_registration_issue": "https://storeapi.kobo.com/v1/notifications/registration",
|
||||
"oauth_host": "https://oauth.kobo.com",
|
||||
"password_retrieval_page": "https://www.kobo.com/passwordretrieval.html",
|
||||
"personalizedrecommendations": "https://storeapi.kobo.com/v2/users/personalizedrecommendations",
|
||||
"pocket_link_account_start": "https://authorize.kobo.com/{region}/{language}/linkpocket",
|
||||
"overdrive_account": "https://auth.overdrive.com/account",
|
||||
"overdrive_library": "https://{libraryKey}.auth.overdrive.com/library",
|
||||
"overdrive_library_finder_host": "https://libraryfinder.api.overdrive.com",
|
||||
"overdrive_thunder_host": "https://thunder.api.overdrive.com",
|
||||
"password_retrieval_page": "https://www.kobobooks.com/passwordretrieval.html",
|
||||
"post_analytics_event": "https://storeapi.kobo.com/v1/analytics/event",
|
||||
"ppx_purchasing_url": "https://purchasing.kobo.com",
|
||||
"privacy_page": "https://www.kobo.com/privacypolicy?style=onestore",
|
||||
"product_nextread": "https://storeapi.kobo.com/v1/products/{ProductIds}/nextread",
|
||||
"product_prices": "https://storeapi.kobo.com/v1/products/{ProductIds}/prices",
|
||||
"product_recommendations": "https://storeapi.kobo.com/v1/products/{ProductId}/recommendations",
|
||||
"product_reviews": "https://storeapi.kobo.com/v1/products/{ProductIds}/reviews",
|
||||
"products": "https://storeapi.kobo.com/v1/products",
|
||||
"productsv2": "https://storeapi.kobo.com/v2/products",
|
||||
"provider_external_sign_in_page": "https://authorize.kobo.com/ExternalSignIn/{providerName}?returnUrl=http://kobo.com/",
|
||||
"provider_external_sign_in_page":
|
||||
"https://authorize.kobo.com/ExternalSignIn/{providerName}?returnUrl=http://store.kobobooks.com/",
|
||||
"purchase_buy": "https://www.kobo.com/checkout/createpurchase/",
|
||||
"purchase_buy_templated": "https://www.kobo.com/{culture}/checkout/createpurchase/{ProductId}",
|
||||
"quickbuy_checkout": "https://storeapi.kobo.com/v1/store/quickbuy/{PurchaseId}/checkout",
|
||||
"quickbuy_create": "https://storeapi.kobo.com/v1/store/quickbuy/purchase",
|
||||
"rakuten_token_exchange": "https://storeapi.kobo.com/v1/auth/rakuten_token_exchange",
|
||||
"rating": "https://storeapi.kobo.com/v1/products/{ProductId}/rating/{Rating}",
|
||||
"reading_services_host": "https://readingservices.kobo.com",
|
||||
"reading_state": "https://storeapi.kobo.com/v1/library/{Ids}/state",
|
||||
"redeem_interstitial_page": "https://www.kobo.com",
|
||||
"registration_page": "https://authorize.kobo.com/signup?returnUrl=http://kobo.com/",
|
||||
"redeem_interstitial_page": "https://store.kobobooks.com",
|
||||
"registration_page": "https://authorize.kobo.com/signup?returnUrl=http://store.kobobooks.com/",
|
||||
"related_items": "https://storeapi.kobo.com/v1/products/{Id}/related",
|
||||
"remaining_book_series": "https://storeapi.kobo.com/v1/products/books/series/{SeriesId}",
|
||||
"rename_tag": "https://storeapi.kobo.com/v1/library/tags/{TagId}",
|
||||
"review": "https://storeapi.kobo.com/v1/products/reviews/{ReviewId}",
|
||||
"review_sentiment": "https://storeapi.kobo.com/v1/products/reviews/{ReviewId}/sentiment/{Sentiment}",
|
||||
"shelfie_recommendations": "https://storeapi.kobo.com/v1/user/recommendations/shelfie",
|
||||
"sign_in_page": "https://authorize.kobo.com/signin?returnUrl=http://kobo.com/",
|
||||
"sign_in_page": "https://authorize.kobo.com/signin?returnUrl=http://store.kobobooks.com/",
|
||||
"social_authorization_host": "https://social.kobobooks.com:8443",
|
||||
"social_host": "https://social.kobobooks.com",
|
||||
"stacks_host_productId": "https://store.kobobooks.com/collections/byproductid/",
|
||||
"store_home": "www.kobo.com/{region}/{language}",
|
||||
"store_host": "www.kobo.com",
|
||||
"store_newreleases": "https://www.kobo.com/{region}/{language}/List/new-releases/961XUjtsU0qxkFItWOutGA",
|
||||
"store_search": "https://www.kobo.com/{region}/{language}/Search?Query={query}",
|
||||
"store_top50": "https://www.kobo.com/{region}/{language}/ebooks/Top",
|
||||
"subs_landing_page": "https://www.kobo.com/{region}/{language}/plus",
|
||||
"subs_management_page": "https://www.kobo.com/{region}/{language}/account/subscriptions",
|
||||
"subs_plans_page": "https://www.kobo.com/{region}/{language}/plus/plans",
|
||||
"subs_purchase_buy_templated": "https://www.kobo.com/{region}/{language}/Checkoutoption/{ProductId}/{TierId}",
|
||||
"store_host": "store.kobobooks.com",
|
||||
"store_newreleases": "https://store.kobobooks.com/{culture}/List/new-releases/961XUjtsU0qxkFItWOutGA",
|
||||
"store_search": "https://store.kobobooks.com/{culture}/Search?Query={query}",
|
||||
"store_top50": "https://store.kobobooks.com/{culture}/ebooks/Top",
|
||||
"tag_items": "https://storeapi.kobo.com/v1/library/tags/{TagId}/Items",
|
||||
"tags": "https://storeapi.kobo.com/v1/library/tags",
|
||||
"taste_profile": "https://storeapi.kobo.com/v1/products/tasteprofile",
|
||||
"terms_of_sale_page": "https://authorize.kobo.com/{region}/{language}/terms/termsofsale",
|
||||
"update_accessibility_to_preview": "https://storeapi.kobo.com/v1/library/{EntitlementIds}/preview",
|
||||
"use_one_store": "True",
|
||||
"use_one_store": "False",
|
||||
"user_loyalty_benefits": "https://storeapi.kobo.com/v1/user/loyalty/benefits",
|
||||
"user_platform": "https://storeapi.kobo.com/v1/user/platform",
|
||||
"user_profile": "https://storeapi.kobo.com/v1/user/profile",
|
||||
@ -1283,6 +1246,6 @@ def NATIVE_KOBO_RESOURCES():
|
||||
"user_recommendations": "https://storeapi.kobo.com/v1/user/recommendations",
|
||||
"user_reviews": "https://storeapi.kobo.com/v1/user/reviews",
|
||||
"user_wishlist": "https://storeapi.kobo.com/v1/user/wishlist",
|
||||
"userguide_host": "https://ereaderfiles.kobo.com",
|
||||
"wishlist_page": "https://www.kobo.com/{region}/{language}/account/wishlist"
|
||||
"userguide_host": "https://kbdownload1-a.akamaihd.net",
|
||||
"wishlist_page": "https://store.kobobooks.com/{region}/{language}/account/wishlist",
|
||||
}
|
||||
|
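Many of the store URLs above are templates in which the device substitutes placeholders such as {region}, {language} and {slug} before making a request. With Python's str.format the substitution looks like this (the placeholder values below are invented for illustration):

```python
# One of the templated store URLs from the resource table above.
template = "https://www.kobo.com/{region}/{language}/ebook/{slug}"

# The device fills in its configured storefront values (sample values here).
url = template.format(region="us", language="en", slug="example-book")
print(url)
```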
@@ -22,7 +22,7 @@
This module also includes research notes into the auth protocol used by Kobo devices.

Log-in:
When first booting a Kobo device the user must log in to a Kobo (or affiliate) account.
When first booting a Kobo device the user must sign into a Kobo (or affiliate) account.
Upon successful sign-in, the user is redirected to
https://auth.kobobooks.com/CrossDomainSignIn?id=<some id>
which serves the following response:
@@ -41,7 +41,7 @@ issue for a few years now https://www.mobileread.com/forums/showpost.php?p=34768
will still grant access given the userkey.)

Official Kobo Store Api authorization:
* For most of the endpoints we care about (sync, metadata, tags, etc.), the userKey is
* For most of the endpoints we care about (sync, metadata, tags, etc), the userKey is
passed in the x-kobo-userkey header, and is sufficient to authorize the API call.
* Some endpoints (e.g: AnnotationService) instead make use of Bearer tokens pass through
an authorization header. To get a BearerToken, the device makes a POST request to the
@@ -64,15 +64,12 @@ from datetime import datetime
from os import urandom
from functools import wraps

from flask import g, Blueprint, abort, request
from .cw_login import login_user, current_user
from flask import g, Blueprint, url_for, abort, request
from flask_login import login_user, current_user, login_required
from flask_babel import gettext as _
from flask_limiter import RateLimitExceeded

from . import logger, config, calibre_db, db, helper, ub, lm, limiter
from . import logger, config, calibre_db, db, helper, ub, lm
from .render_template import render_title_template
from .usermanagement import user_login_required


log = logger.create()

@@ -80,7 +77,7 @@ kobo_auth = Blueprint("kobo_auth", __name__, url_prefix="/kobo_auth")


@kobo_auth.route("/generate_auth_token/<int:user_id>")
@user_login_required
@login_required
def generate_auth_token(user_id):
warning = False
host_list = request.host.rsplit(':')
@@ -115,14 +112,14 @@ def generate_auth_token(user_id):

return render_title_template(
"generate_kobo_auth_url.html",
title=_("Kobo Setup"),
title=_(u"Kobo Setup"),
auth_token=auth_token.auth_token,
warning=warning
warning = warning
)


@kobo_auth.route("/deleteauthtoken/<int:user_id>", methods=["POST"])
@user_login_required
@login_required
def delete_auth_token(user_id):
# Invalidate any previously generated Kobo Auth token for this user
ub.session.query(ub.RemoteAuthToken).filter(ub.RemoteAuthToken.user_id == user_id)\
@@ -154,13 +151,6 @@ def requires_kobo_auth(f):
def inner(*args, **kwargs):
auth_token = get_auth_token()
if auth_token is not None:
try:
limiter.check()
except RateLimitExceeded:
return abort(429)
except (ConnectionError, Exception) as e:
log.error("Connection error to limiter backend: %s", e)
return abort(429)
user = (
ub.session.query(ub.User)
.join(ub.RemoteAuthToken)
@@ -169,8 +159,7 @@ def requires_kobo_auth(f):
)
if user is not None:
login_user(user)
[limiter.limiter.storage.clear(k.key) for k in limiter.current_limits]
return f(*args, **kwargs)
log.debug("Received Kobo request without a recognizable auth token.")
return abort(401)
log.debug("Received Kobo request without a recognizable auth token.")
return abort(401)
return inner
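In the master branch, requires_kobo_auth first runs a flask-limiter check and then clears the rate-limit counters once a request authenticates, so only unauthenticated probing stays throttled. A rough, self-contained sketch of that idea follows; SimpleLimiter, TOKENS and requires_token_auth are invented stand-ins for illustration, not Calibre-Web or flask-limiter code:

```python
from functools import wraps

class SimpleLimiter:
    """Tiny stand-in for a rate limiter: counts hits per key, signals when over limit."""
    def __init__(self, limit):
        self.limit = limit
        self.hits = {}

    def check(self, key):
        self.hits[key] = self.hits.get(key, 0) + 1
        if self.hits[key] > self.limit:
            raise RuntimeError("rate limit exceeded")

    def clear(self, key):
        # Mirrors the diff's idea: wipe the counter once the caller proves legitimacy.
        self.hits.pop(key, None)

limiter = SimpleLimiter(limit=3)
TOKENS = {"secret-token": "alice"}  # fabricated token store

def requires_token_auth(f):
    @wraps(f)
    def inner(token):
        try:
            limiter.check(token)      # throttle before doing any real work
        except RuntimeError:
            return 429
        user = TOKENS.get(token)
        if user is None:
            return 401                # unknown token keeps its hit count
        limiter.clear(token)          # successful auth resets the counter
        return f(user)
    return inner

@requires_token_auth
def sync(user):
    return f"sync for {user}"
```

Repeated calls with a bad token eventually hit the 429 path, while a valid token never accumulates hits.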
@@ -17,12 +17,11 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.


from .cw_login import current_user
from flask_login import current_user
from . import ub
from datetime import datetime, timezone
import datetime
from sqlalchemy.sql.expression import or_, and_, true
# from sqlalchemy import exc

from sqlalchemy import exc

# Add the current book id to kobo_synced_books table for current user, if entry is already present,
# do nothing (safety precaution)
@@ -51,6 +50,7 @@ def remove_synced_book(book_id, all=False, session=None):
ub.session_commit(_session=session)



def change_archived_books(book_id, state=None, message=None):
archived_book = ub.session.query(ub.ArchivedBook).filter(and_(ub.ArchivedBook.user_id == int(current_user.id),
ub.ArchivedBook.book_id == book_id)).first()
@@ -58,7 +58,7 @@ def change_archived_books(book_id, state=None, message=None):
archived_book = ub.ArchivedBook(user_id=current_user.id, book_id=book_id)

archived_book.is_archived = state if state else not archived_book.is_archived
archived_book.last_modified = datetime.now(timezone.utc)  # toDo. Check utc timestamp
archived_book.last_modified = datetime.datetime.utcnow()  # toDo. Check utc timestamp

ub.session.merge(archived_book)
ub.session_commit(message)
@@ -71,7 +71,7 @@ def update_on_sync_shelfs(user_id):
books_to_archive = (ub.session.query(ub.KoboSyncedBooks)
.join(ub.BookShelf, ub.KoboSyncedBooks.book_id == ub.BookShelf.book_id, isouter=True)
.join(ub.Shelf, ub.Shelf.user_id == user_id, isouter=True)
.filter(or_(ub.Shelf.kobo_sync == 0, ub.Shelf.kobo_sync==None))
.filter(or_(ub.Shelf.kobo_sync == 0, ub.Shelf.kobo_sync == None))
.filter(ub.KoboSyncedBooks.user_id == user_id).all())
for b in books_to_archive:
change_archived_books(b.book_id, True)
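The change from datetime.datetime.utcnow() to datetime.now(timezone.utc) in change_archived_books swaps a naive UTC timestamp for a timezone-aware one; only the latter carries its offset, which is why the two are not interchangeable when compared or serialized:

```python
from datetime import datetime, timezone

naive = datetime.utcnow()            # naive: tzinfo is None (deprecated in Python 3.12+)
aware = datetime.now(timezone.utc)   # aware: tzinfo records the UTC offset
```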
@@ -29,7 +29,7 @@ from .constants import CONFIG_DIR as _CONFIG_DIR
ACCESS_FORMATTER_GEVENT = Formatter("%(message)s")
ACCESS_FORMATTER_TORNADO = Formatter("[%(asctime)s] %(message)s")

FORMATTER = Formatter("[%(asctime)s] %(levelname)5s {%(filename)s:%(lineno)d} %(message)s")
FORMATTER = Formatter("[%(asctime)s] %(levelname)5s {%(name)s:%(lineno)d} %(message)s")
DEFAULT_LOG_LEVEL = logging.INFO
DEFAULT_LOG_FILE = os.path.join(_CONFIG_DIR, "calibre-web.log")
DEFAULT_ACCESS_LOG = os.path.join(_CONFIG_DIR, "access.log")
@@ -42,12 +42,17 @@ logging.addLevelName(logging.CRITICAL, "CRIT")

class _Logger(logging.Logger):

def error_or_exception(self, message, stacklevel=1, *args, **kwargs):
is_debug = self.getEffectiveLevel() <= logging.DEBUG
if not is_debug:
self.exception(message, stacklevel=stacklevel, *args, **kwargs)
def error_or_exception(self, message, stacklevel=2, *args, **kwargs):
if sys.version_info > (3, 7):
if is_debug_enabled():
self.exception(message, stacklevel=stacklevel, *args, **kwargs)
else:
self.error(message, stacklevel=stacklevel, *args, **kwargs)
else:
self.error(message, stacklevel=stacklevel, *args, **kwargs)
if is_debug_enabled():
self.exception(message, stack_info=True, *args, **kwargs)
else:
self.error(message, *args, **kwargs)

def debug_no_auth(self, message, *args, **kwargs):
message = message.strip("\r\n")
@@ -145,7 +150,7 @@ def setup(log_file, log_level=None):
else:
try:
file_handler = RotatingFileHandler(log_file, maxBytes=100000, backupCount=2, encoding='utf-8')
except (IOError, PermissionError):
except IOError:
if log_file == DEFAULT_LOG_FILE:
raise
file_handler = RotatingFileHandler(DEFAULT_LOG_FILE, maxBytes=100000, backupCount=2, encoding='utf-8')
@@ -172,7 +177,7 @@ def create_access_log(log_file, log_name, formatter):
access_log.setLevel(logging.INFO)
try:
file_handler = RotatingFileHandler(log_file, maxBytes=50000, backupCount=2, encoding='utf-8')
except (IOError, PermissionError):
except IOError:
if log_file == DEFAULT_ACCESS_LOG:
raise
file_handler = RotatingFileHandler(DEFAULT_ACCESS_LOG, maxBytes=50000, backupCount=2, encoding='utf-8')
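Both setup() and create_access_log() use the same fallback pattern: try to open the configured log path, and on an I/O or permission error retry with the shipped default unless that default is what already failed. A condensed sketch of that pattern, assuming an unwritable path simply raises OSError (make_log_handler is an illustrative helper, not the module's API):

```python
import os
import tempfile
from logging.handlers import RotatingFileHandler

def make_log_handler(path, fallback_path):
    """Open a rotating log file, falling back to a default location on I/O errors."""
    try:
        return RotatingFileHandler(path, maxBytes=100000, backupCount=2, encoding='utf-8')
    except (IOError, PermissionError):
        if path == fallback_path:
            raise  # nothing left to fall back to, mirror the diff's re-raise
        return RotatingFileHandler(fallback_path, maxBytes=100000, backupCount=2, encoding='utf-8')

log_dir = tempfile.mkdtemp()
fallback = os.path.join(log_dir, "calibre-web.log")
# A path inside a directory that does not exist triggers the fallback branch.
handler = make_log_handler(os.path.join(log_dir, "missing", "x.log"), fallback)
handler.close()
```

Catching PermissionError alongside IOError is redundant on Python 3 (both are OSError), but it documents the intent and matches the diff.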
17
cps/main.py
@@ -18,20 +18,14 @@

import sys

from . import create_app, limiter
from . import create_app
from .jinjia import jinjia
from flask import request


def request_username():
return request.authorization.username

from .remotelogin import remotelogin

def main():
app = create_app()

from .web import web
from .basic import basic
from .opds import opds
from .admin import admi
from .gdrive import gdrive
@@ -42,22 +36,18 @@ def main():
from .shelf import shelf
from .tasks_status import tasks
from .error_handler import init_errorhandler
from .remotelogin import remotelogin
try:
from .kobo import kobo, get_kobo_activated
from .kobo_auth import kobo_auth
from flask_limiter.util import get_remote_address
kobo_available = get_kobo_activated()
except (ImportError, AttributeError):  # Catch also error for not installed flask-WTF (missing csrf decorator)
kobo_available = False
kobo = kobo_auth = get_remote_address = None

try:
from .oauth_bb import oauth
oauth_available = True
except ImportError:
oauth_available = False
oauth = None

from . import web_server
init_errorhandler()
@@ -65,9 +55,7 @@ def main():
app.register_blueprint(search)
app.register_blueprint(tasks)
app.register_blueprint(web)
app.register_blueprint(basic)
app.register_blueprint(opds)
limiter.limit("3/minute", key_func=request_username)(opds)
app.register_blueprint(jinjia)
app.register_blueprint(about)
app.register_blueprint(shelf)
@@ -79,7 +67,6 @@ def main():
if kobo_available:
app.register_blueprint(kobo)
app.register_blueprint(kobo_auth)
limiter.limit("3/minute", key_func=get_remote_address)(kobo)
if oauth_available:
app.register_blueprint(oauth)
success = web_server.start()
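main() treats Kobo and OAuth support as optional: each blueprint import is attempted, and on ImportError (or AttributeError, for a missing flask-WTF decorator) the corresponding feature flag is switched off and the names are set to None. That guard can be sketched as a small helper; load_optional is illustrative, not Calibre-Web code:

```python
def load_optional(module_name, attr):
    """Return (attribute, available) for an optional dependency, in the spirit
    of main()'s try/except around the kobo and oauth blueprint imports."""
    try:
        module = __import__(module_name, fromlist=[attr])
        return getattr(module, attr), True
    except (ImportError, AttributeError):
        return None, False

# stdlib module resolves; a made-up module name falls back cleanly
json_loads, json_available = load_optional("json", "loads")
kobo, kobo_available = load_optional("no_such_module_xyz", "kobo")
```

Later code then only registers what is actually available, e.g. `if kobo_available: ...`.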
@@ -25,7 +25,7 @@ try:
import cchardet #optional for better speed
except ImportError:
pass

from cps import logger
from cps.services.Metadata import MetaRecord, MetaSourceInfo, Metadata
import cps.logger as logger

@@ -33,21 +33,21 @@ import cps.logger as logger
from operator import itemgetter
log = logger.create()

log = logger.create()


class Amazon(Metadata):
__name__ = "Amazon"
__id__ = "amazon"
headers = {'upgrade-insecure-requests': '1',
'user-agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:130.0) Gecko/20100101 Firefox/130.0',
'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/png,image/svg+xml,*/*;q=0.8',
'Sec-Fetch-Site': 'same-origin',
'Sec-Fetch-Mode': 'navigate',
'Sec-Fetch-User': '?1',
'Sec-Fetch-Dest': 'document',
'Upgrade-Insecure-Requests': '1',
'Alt-Used' : 'www.amazon.com',
'Priority' : 'u=0, i',
'accept-encoding': 'gzip, deflate, br, zstd',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36',
'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'sec-gpc': '1',
'sec-fetch-site': 'none',
'sec-fetch-mode': 'navigate',
'sec-fetch-user': '?1',
'sec-fetch-dest': 'document',
'accept-encoding': 'gzip, deflate, br',
'accept-language': 'en-US,en;q=0.9'}
session = requests.Session()
session.headers=headers
@@ -55,6 +55,7 @@ class Amazon(Metadata):
def search(
self, query: str, generic_cover: str = "", locale: str = "en"
) -> Optional[List[MetaRecord]]:
#timer=time()
def inner(link, index) -> [dict, int]:
with self.session as session:
try:
@@ -62,11 +63,11 @@ class Amazon(Metadata):
r.raise_for_status()
except Exception as ex:
log.warning(ex)
return []
return
long_soup = BS(r.text, "lxml") #~4sec :/
soup2 = long_soup.find("div", attrs={"cel_widget_id": "dpx-ppd_csm_instrumentation_wrapper"})
soup2 = long_soup.find("div", attrs={"cel_widget_id": "dpx-books-ppd_csm_instrumentation_wrapper"})
if soup2 is None:
return []
return
try:
match = MetaRecord(
title = "",
@@ -89,7 +90,7 @@ class Amazon(Metadata):
soup2.find("div", attrs={"data-feature-name": "bookDescription"}).stripped_strings)\
.replace("\xa0"," ")[:-9].strip().strip("\n")
except (AttributeError, TypeError):
return []  # if there is no description it is not a book and therefore should be ignored
return None  # if there is no description it is not a book and therefore should be ignored
try:
match.title = soup2.find("span", attrs={"id": "productTitle"}).text
except (AttributeError, TypeError):
@@ -97,7 +98,7 @@ class Amazon(Metadata):
try:
match.authors = [next(
filter(lambda i: i != " " and i != "\n" and not i.startswith("{"),
x.findAll(string=True))).strip()
x.findAll(text=True))).strip()
for x in soup2.findAll("span", attrs={"class": "author"})]
except (AttributeError, TypeError, StopIteration):
match.authors = ""
@@ -108,13 +109,13 @@ class Amazon(Metadata):
except (AttributeError, ValueError):
match.rating = 0
try:
match.cover = soup2.find("img", attrs={"class": "a-dynamic-image"})["src"]
match.cover = soup2.find("img", attrs={"class": "a-dynamic-image frontImage"})["src"]
except (AttributeError, TypeError):
match.cover = ""
return match, index
except Exception as e:
log.error_or_exception(e)
return []
return

val = list()
if self.active:
@@ -126,15 +127,15 @@ class Amazon(Metadata):
results.raise_for_status()
except requests.exceptions.HTTPError as e:
log.error_or_exception(e)
return []
return None
except Exception as e:
log.warning(e)
return []
return None
soup = BS(results.text, 'html.parser')
links_list = [next(filter(lambda i: "digital-text" in i["href"], x.findAll("a")))["href"] for x in
soup.findAll("div", attrs={"data-component-type": "s-search-result"})]
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
fut = {executor.submit(inner, link, index) for index, link in enumerate(links_list[:3])}
val = list(map(lambda x : x.result(), concurrent.futures.as_completed(fut)))
fut = {executor.submit(inner, link, index) for index, link in enumerate(links_list[:5])}
val = list(map(lambda x : x.result() ,concurrent.futures.as_completed(fut)))
result = list(filter(lambda x: x, val))
return [x[0] for x in sorted(result, key=itemgetter(1))] #sort by amazons listing order for best relevance
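Both branches of the Amazon provider fan the per-link scrapes out over a ThreadPoolExecutor and then restore Amazon's listing order by sorting on the index each future was submitted with, since as_completed yields in completion order. A minimal, self-contained sketch of that pattern (the fetch stub and sample links are invented for illustration):

```python
import concurrent.futures
from operator import itemgetter

def fetch(item, index):
    # Stand-in for the per-link scrape in inner(); returns (result, original position).
    return f"meta:{item}", index

links = ["a", "b", "c"]
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    futs = {executor.submit(fetch, link, i) for i, link in enumerate(links)}
    # Completion order is arbitrary, so each result carries its submission index.
    results = [f.result() for f in concurrent.futures.as_completed(futs)]

# Sort by the recorded index to recover the original (relevance) ordering.
ordered = [r[0] for r in sorted(results, key=itemgetter(1))]
```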
@@ -43,8 +43,7 @@ class Douban(Metadata):
__id__ = "douban"
DESCRIPTION = "豆瓣"
META_URL = "https://book.douban.com/"
SEARCH_JSON_URL = "https://www.douban.com/j/search"
SEARCH_URL = "https://www.douban.com/search"
SEARCH_URL = "https://www.douban.com/j/search"

ID_PATTERN = re.compile(r"sid: (?P<id>\d+),")
AUTHORS_PATTERN = re.compile(r"作者|译者")
@@ -53,7 +52,6 @@ class Douban(Metadata):
PUBLISHED_DATE_PATTERN = re.compile(r"出版年")
SERIES_PATTERN = re.compile(r"丛书")
IDENTIFIERS_PATTERN = re.compile(r"ISBN|统一书号")
CRITERIA_PATTERN = re.compile("criteria = '(.+)'")

TITTLE_XPATH = "//span[@property='v:itemreviewed']"
COVER_XPATH = "//a[@class='nbg']"
@@ -65,90 +63,56 @@ class Douban(Metadata):
session = requests.Session()
session.headers = {
'user-agent':
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36 Edg/98.0.1108.56',
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36 Edg/98.0.1108.56',
}

def search(self,
query: str,
generic_cover: str = "",
locale: str = "en") -> List[MetaRecord]:
val = []
def search(
self, query: str, generic_cover: str = "", locale: str = "en"
) -> Optional[List[MetaRecord]]:
if self.active:
log.debug(f"start searching {query} on douban")
log.debug(f"starting search {query} on douban")
if title_tokens := list(
self.get_title_tokens(query, strip_joiners=False)):
self.get_title_tokens(query, strip_joiners=False)
):
query = "+".join(title_tokens)

book_id_list = self._get_book_id_list_from_html(query)
try:
r = self.session.get(
self.SEARCH_URL, params={"cat": 1001, "q": query}
)
r.raise_for_status()

if not book_id_list:
log.debug("No search results in Douban")
except Exception as e:
log.warning(e)
return None

results = r.json()
if results["total"] == 0:
return []

with futures.ThreadPoolExecutor(
max_workers=5, thread_name_prefix='douban') as executor:
book_id_list = [
self.ID_PATTERN.search(item).group("id")
for item in results["items"][:10] if self.ID_PATTERN.search(item)
]

with futures.ThreadPoolExecutor(max_workers=5) as executor:

fut = [
executor.submit(self._parse_single_book, book_id,
generic_cover) for book_id in book_id_list
executor.submit(self._parse_single_book, book_id, generic_cover)
for book_id in book_id_list
]


val = [
future.result() for future in futures.as_completed(fut)
if future.result()
future.result()
for future in futures.as_completed(fut) if future.result()
]

return val

def _get_book_id_list_from_html(self, query: str) -> List[str]:
try:
r = self.session.get(self.SEARCH_URL,
params={
"cat": 1001,
"q": query
})
r.raise_for_status()

except Exception as e:
log.warning(e)
return []

html = etree.HTML(r.content.decode("utf8"))
result_list = html.xpath(self.COVER_XPATH)

return [
self.ID_PATTERN.search(item.get("onclick")).group("id")
for item in result_list[:10]
if self.ID_PATTERN.search(item.get("onclick"))
]

def _get_book_id_list_from_json(self, query: str) -> List[str]:
try:
r = self.session.get(self.SEARCH_JSON_URL,
params={
"cat": 1001,
"q": query
})
r.raise_for_status()

except Exception as e:
log.warning(e)
return []

results = r.json()
if results["total"] == 0:
return []

return [
self.ID_PATTERN.search(item).group("id")
for item in results["items"][:10] if self.ID_PATTERN.search(item)
]

def _parse_single_book(self,
id: str,
generic_cover: str = "") -> Optional[MetaRecord]:
def _parse_single_book(
self, id: str, generic_cover: str = ""
) -> Optional[MetaRecord]:
url = f"https://book.douban.com/subject/{id}/"
log.debug(f"start parsing {url}")

try:
r = self.session.get(url)
@@ -169,12 +133,10 @@ class Douban(Metadata):
),
)

decode_content = r.content.decode("utf8")
html = etree.HTML(decode_content)
html = etree.HTML(r.content.decode("utf8"))

match.title = html.xpath(self.TITTLE_XPATH)[0].text
match.cover = html.xpath(
self.COVER_XPATH)[0].attrib["href"] or generic_cover
match.cover = html.xpath(self.COVER_XPATH)[0].attrib["href"] or generic_cover
try:
rating_num = float(html.xpath(self.RATING_XPATH)[0].text.strip())
except Exception:
@@ -184,41 +146,36 @@ class Douban(Metadata):
tag_elements = html.xpath(self.TAGS_XPATH)
if len(tag_elements):
match.tags = [tag_element.text for tag_element in tag_elements]
else:
match.tags = self._get_tags(decode_content)

description_element = html.xpath(self.DESCRIPTION_XPATH)
if len(description_element):
match.description = html2text(
etree.tostring(description_element[-1]).decode("utf8"))
match.description = html2text(etree.tostring(
description_element[-1], encoding="utf8").decode("utf8"))

info = html.xpath(self.INFO_XPATH)

for element in info:
text = element.text
if self.AUTHORS_PATTERN.search(text):
next_element = element.getnext()
while next_element is not None and next_element.tag != "br":
match.authors.append(next_element.text)
next_element = next_element.getnext()
next = element.getnext()
while next is not None and next.tag != "br":
match.authors.append(next.text)
next = next.getnext()
elif self.PUBLISHER_PATTERN.search(text):
if publisher := element.tail.strip():
match.publisher = publisher
else:
match.publisher = element.getnext().text
match.publisher = element.tail.strip()
elif self.SUBTITLE_PATTERN.search(text):
match.title = f'{match.title}:{element.tail.strip()}'
match.title = f'{match.title}:' + element.tail.strip()
elif self.PUBLISHED_DATE_PATTERN.search(text):
match.publishedDate = self._clean_date(element.tail.strip())
elif self.SERIES_PATTERN.search(text):
elif self.SUBTITLE_PATTERN.search(text):
match.series = element.getnext().text
|
||||
elif i_type := self.IDENTIFIERS_PATTERN.search(text):
|
||||
match.identifiers[i_type.group()] = element.tail.strip()
|
||||
|
||||
return match
|
||||
|
||||
@staticmethod
|
||||
def _clean_date(date: str) -> str:
|
||||
|
||||
def _clean_date(self, date: str) -> str:
|
||||
"""
|
||||
Clean up the date string to be in the format YYYY-MM-DD
|
||||
|
||||
@ -237,24 +194,13 @@ class Douban(Metadata):
|
||||
if date[i].isdigit():
|
||||
digit.append(date[i])
|
||||
elif digit:
|
||||
ls.append("".join(digit) if len(digit) ==
|
||||
2 else f"0{digit[0]}")
|
||||
ls.append("".join(digit) if len(digit)==2 else f"0{digit[0]}")
|
||||
digit = []
|
||||
if digit:
|
||||
ls.append("".join(digit) if len(digit) ==
|
||||
2 else f"0{digit[0]}")
|
||||
ls.append("".join(digit) if len(digit)==2 else f"0{digit[0]}")
|
||||
|
||||
moon = ls[0]
|
||||
if len(ls) > 1:
|
||||
day = ls[1]
|
||||
if len(ls)>1:
|
||||
day = ls[1]
|
||||
|
||||
return f"{year}-{moon}-{day}"
|
||||
|
||||
def _get_tags(self, text: str) -> List[str]:
|
||||
tags = []
|
||||
if criteria := self.CRITERIA_PATTERN.search(text):
|
||||
tags.extend(
|
||||
item.replace('7:', '') for item in criteria.group().split('|')
|
||||
if item.startswith('7:'))
|
||||
|
||||
return tags
|
||||
|
@@ -19,7 +19,6 @@
# Google Books api document: https://developers.google.com/books/docs/v1/using
from typing import Dict, List, Optional
from urllib.parse import quote
from datetime import datetime

import requests

@@ -54,7 +53,7 @@ class Google(Metadata):
            results.raise_for_status()
        except Exception as e:
            log.warning(e)
            return []
            return None
        for result in results.json().get("items", []):
            val.append(
                self._parse_search_result(
@@ -82,11 +81,7 @@ class Google(Metadata):
        match.description = result["volumeInfo"].get("description", "")
        match.languages = self._parse_languages(result=result, locale=locale)
        match.publisher = result["volumeInfo"].get("publisher", "")
        try:
            datetime.strptime(result["volumeInfo"].get("publishedDate", ""), "%Y-%m-%d")
            match.publishedDate = result["volumeInfo"].get("publishedDate", "")
        except ValueError:
            match.publishedDate = ""
        match.publishedDate = result["volumeInfo"].get("publishedDate", "")
        match.rating = result["volumeInfo"].get("averageRating", 0)
        match.series, match.series_index = "", 1
        match.tags = result["volumeInfo"].get("categories", [])
@@ -108,13 +103,6 @@ class Google(Metadata):
    def _parse_cover(result: Dict, generic_cover: str) -> str:
        if result["volumeInfo"].get("imageLinks"):
            cover_url = result["volumeInfo"]["imageLinks"]["thumbnail"]

            # strip curl in cover
            cover_url = cover_url.replace("&edge=curl", "")

            # request 800x900 cover image (higher resolution)
            cover_url += "&fife=w800-h900"

            return cover_url.replace("http://", "https://")
        return generic_cover

@@ -97,14 +97,12 @@ class LubimyCzytac(Metadata):
    LANGUAGES = f"{CONTAINER}//dt[contains(text(),'Język:')]{SIBLINGS}/text()"
    DESCRIPTION = f"{CONTAINER}//div[@class='collapse-content']"
    SERIES = f"{CONTAINER}//span/a[contains(@href,'/cykl/')]/text()"
    TRANSLATOR = f"{CONTAINER}//dt[contains(text(),'Tłumacz:')]{SIBLINGS}/a/text()"

    DETAILS = "//div[@id='book-details']"
    PUBLISH_DATE = "//dt[contains(@title,'Data pierwszego wydania"
    FIRST_PUBLISH_DATE = f"{DETAILS}{PUBLISH_DATE} oryginalnego')]{SIBLINGS}[1]/text()"
    FIRST_PUBLISH_DATE_PL = f"{DETAILS}{PUBLISH_DATE} polskiego')]{SIBLINGS}[1]/text()"
    TAGS = "//a[contains(@href,'/ksiazki/t/')]/text()"  # "//nav[@aria-label='breadcrumbs']//a[contains(@href,'/ksiazki/k/')]/span/text()"

    TAGS = "//nav[@aria-label='breadcrumb']//a[contains(@href,'/ksiazki/k/')]/text()"

    RATING = "//meta[@property='books:rating:value']/@content"
    COVER = "//meta[@property='og:image']/@content"
@@ -137,7 +135,7 @@ class LubimyCzytac(Metadata):

    def _prepare_query(self, title: str) -> str:
        query = ""
        characters_to_remove = r"\?()\/"
        characters_to_remove = "\?()\/"
        pattern = "[" + characters_to_remove + "]"
        title = re.sub(pattern, "", title)
        title = title.replace("_", " ")
@@ -160,7 +158,6 @@ class LubimyCzytac(Metadata):

class LubimyCzytacParser:
    PAGES_TEMPLATE = "<p id='strony'>Książka ma {0} stron(y).</p>"
    TRANSLATOR_TEMPLATE = "<p id='translator'>Tłumacz: {0}</p>"
    PUBLISH_DATE_TEMPLATE = "<p id='pierwsze_wydanie'>Data pierwszego wydania: {0}</p>"
    PUBLISH_DATE_PL_TEMPLATE = (
        "<p id='pierwsze_wydanie'>Data pierwszego wydania w Polsce: {0}</p>"
@@ -285,13 +282,11 @@ class LubimyCzytacParser:

    def _parse_tags(self) -> List[str]:
        tags = self._parse_xpath_node(xpath=LubimyCzytac.TAGS, take_first=False)
        if tags:
            return [
                strip_accents(w.replace(", itd.", " itd."))
                for w in tags
                if isinstance(w, str)
            ]
        return None
        return [
            strip_accents(w.replace(", itd.", " itd."))
            for w in tags
            if isinstance(w, str)
        ]

    def _parse_from_summary(self, attribute_name: str) -> Optional[str]:
        value = None
@@ -351,9 +346,5 @@ class LubimyCzytacParser:
            description += LubimyCzytacParser.PUBLISH_DATE_PL_TEMPLATE.format(
                first_publish_date_pl.strftime("%d.%m.%Y")
            )
        translator = self._parse_xpath_node(xpath=LubimyCzytac.TRANSLATOR)
        if translator:
            description += LubimyCzytacParser.TRANSLATOR_TEMPLATE.format(translator)


        return description

@@ -54,7 +54,7 @@ class scholar(Metadata):
            scholar_gen = itertools.islice(scholarly.search_pubs(query), 10)
        except Exception as e:
            log.warning(e)
            return list()
            return None
        for result in scholar_gen:
            match = self._parse_search_result(
                result=result, generic_cover="", locale=locale

@@ -32,7 +32,7 @@ class OAuthBackend(SQLAlchemyBackend):
    Stores and retrieves OAuth tokens using a relational database through
    the `SQLAlchemy`_ ORM.

    _SQLAlchemy: https://www.sqlalchemy.org/
    .. _SQLAlchemy: https://www.sqlalchemy.org/
    """
    def __init__(self, model, session, provider_id,
                 user=None, user_id=None, user_required=None, anon_user=None,

@@ -30,9 +30,8 @@ from flask_dance.consumer import oauth_authorized, oauth_error
from flask_dance.contrib.github import make_github_blueprint, github
from flask_dance.contrib.google import make_google_blueprint, google
from oauthlib.oauth2 import TokenExpiredError, InvalidGrantError
from .cw_login import login_user, current_user
from flask_login import login_user, current_user, login_required
from sqlalchemy.orm.exc import NoResultFound
from .usermanagement import user_login_required

from . import constants, logger, config, app, ub

@@ -75,7 +74,7 @@ def register_user_with_oauth(user=None):
    if len(all_oauth.keys()) == 0:
        return
    if user is None:
        flash(_("Register with %(provider)s", provider=", ".join(list(all_oauth.values()))), category="success")
        flash(_(u"Register with %(provider)s", provider=", ".join(list(all_oauth.values()))), category="success")
    else:
        for oauth_key in all_oauth.keys():
            # Find this OAuth token in the database, or create it
@@ -135,8 +134,8 @@ def bind_oauth_or_register(provider_id, provider_user_id, redirect_url, provider
        # already bind with user, just login
        if oauth_entry.user:
            login_user(oauth_entry.user)
            log.debug("You are now logged in as: '%s'", oauth_entry.user.name)
            flash(_("Success! You are now logged in as: %(nickname)s", nickname=oauth_entry.user.name),
            log.debug(u"You are now logged in as: '%s'", oauth_entry.user.name)
            flash(_(u"you are now logged in as: '%(nickname)s'", nickname= oauth_entry.user.name),
                  category="success")
            return redirect(url_for('web.index'))
        else:
@@ -146,21 +145,21 @@ def bind_oauth_or_register(provider_id, provider_user_id, redirect_url, provider
            try:
                ub.session.add(oauth_entry)
                ub.session.commit()
                flash(_("Link to %(oauth)s Succeeded", oauth=provider_name), category="success")
                flash(_(u"Link to %(oauth)s Succeeded", oauth=provider_name), category="success")
                log.info("Link to {} Succeeded".format(provider_name))
                return redirect(url_for('web.profile'))
            except Exception as ex:
                log.error_or_exception(ex)
                ub.session.rollback()
        else:
            flash(_("Login failed, No User Linked With OAuth Account"), category="error")
            flash(_(u"Login failed, No User Linked With OAuth Account"), category="error")
            log.info('Login failed, No User Linked With OAuth Account')
        return redirect(url_for('web.login'))
        # return redirect(url_for('web.login'))
        # if config.config_public_reg:
        #    return redirect(url_for('web.register'))
        # else:
        #    flash(_("Public registration is not enabled"), category="error")
        #    flash(_(u"Public registration is not enabled"), category="error")
        #    return redirect(url_for(redirect_url))
    except (NoResultFound, AttributeError):
        return redirect(url_for(redirect_url))
@@ -195,18 +194,17 @@ def unlink_oauth(provider):
        ub.session.delete(oauth_entry)
        ub.session.commit()
        logout_oauth_user()
        flash(_("Unlink to %(oauth)s Succeeded", oauth=oauth_check[provider]), category="success")
        flash(_(u"Unlink to %(oauth)s Succeeded", oauth=oauth_check[provider]), category="success")
        log.info("Unlink to {} Succeeded".format(oauth_check[provider]))
    except Exception as ex:
        log.error_or_exception(ex)
        ub.session.rollback()
        flash(_("Unlink to %(oauth)s Failed", oauth=oauth_check[provider]), category="error")
        flash(_(u"Unlink to %(oauth)s Failed", oauth=oauth_check[provider]), category="error")
    except NoResultFound:
        log.warning("oauth %s for user %d not found", provider, current_user.id)
        flash(_("Not Linked to %(oauth)s", oauth=provider), category="error")
        flash(_(u"Not Linked to %(oauth)s", oauth=provider), category="error")
    return redirect(url_for('web.profile'))


def generate_oauth_blueprints():
    if not ub.session.query(ub.OAuthProvider).count():
        for provider in ("github", "google"):
@@ -260,13 +258,13 @@ if ub.oauth_support:
    @oauth_authorized.connect_via(oauthblueprints[0]['blueprint'])
    def github_logged_in(blueprint, token):
        if not token:
            flash(_("Failed to log in with GitHub."), category="error")
            flash(_(u"Failed to log in with GitHub."), category="error")
            log.error("Failed to log in with GitHub")
            return False

        resp = blueprint.session.get("/user")
        if not resp.ok:
            flash(_("Failed to fetch user info from GitHub."), category="error")
            flash(_(u"Failed to fetch user info from GitHub."), category="error")
            log.error("Failed to fetch user info from GitHub")
            return False

@@ -278,13 +276,13 @@ if ub.oauth_support:
    @oauth_authorized.connect_via(oauthblueprints[1]['blueprint'])
    def google_logged_in(blueprint, token):
        if not token:
            flash(_("Failed to log in with Google."), category="error")
            flash(_(u"Failed to log in with Google."), category="error")
            log.error("Failed to log in with Google")
            return False

        resp = blueprint.session.get("/oauth2/v2/userinfo")
        if not resp.ok:
            flash(_("Failed to fetch user info from Google."), category="error")
            flash(_(u"Failed to fetch user info from Google."), category="error")
            log.error("Failed to fetch user info from Google")
            return False

@@ -293,13 +291,12 @@ if ub.oauth_support:
        return oauth_update_token(str(oauthblueprints[1]['id']), token, google_user_id)



    # notify on OAuth provider error
    @oauth_error.connect_via(oauthblueprints[0]['blueprint'])
    def github_error(blueprint, error, error_description=None, error_uri=None):
        msg = (
            "OAuth error from {name}! "
            "error={error} description={description} uri={uri}"
            u"OAuth error from {name}! "
            u"error={error} description={description} uri={uri}"
        ).format(
            name=blueprint.name,
            error=error,
@@ -311,8 +308,8 @@ if ub.oauth_support:
    @oauth_error.connect_via(oauthblueprints[1]['blueprint'])
    def google_error(blueprint, error, error_description=None, error_uri=None):
        msg = (
            "OAuth error from {name}! "
            "error={error} description={description} uri={uri}"
            u"OAuth error from {name}! "
            u"error={error} description={description} uri={uri}"
        ).format(
            name=blueprint.name,
            error=error,
@@ -332,16 +329,16 @@ def github_login():
        if account_info.ok:
            account_info_json = account_info.json()
            return bind_oauth_or_register(oauthblueprints[0]['id'], account_info_json['id'], 'github.login', 'github')
        flash(_("GitHub Oauth error, please retry later."), category="error")
        flash(_(u"GitHub Oauth error, please retry later."), category="error")
        log.error("GitHub Oauth error, please retry later")
    except (InvalidGrantError, TokenExpiredError) as e:
        flash(_("GitHub Oauth error: {}").format(e), category="error")
        flash(_(u"GitHub Oauth error: {}").format(e), category="error")
        log.error(e)
    return redirect(url_for('web.login'))


@oauth.route('/unlink/github', methods=["GET"])
@user_login_required
@login_required
def github_login_unlink():
    return unlink_oauth(oauthblueprints[0]['id'])

@@ -356,15 +353,15 @@ def google_login():
        if resp.ok:
            account_info_json = resp.json()
            return bind_oauth_or_register(oauthblueprints[1]['id'], account_info_json['id'], 'google.login', 'google')
        flash(_("Google Oauth error, please retry later."), category="error")
        flash(_(u"Google Oauth error, please retry later."), category="error")
        log.error("Google Oauth error, please retry later")
    except (InvalidGrantError, TokenExpiredError) as e:
        flash(_("Google Oauth error: {}").format(e), category="error")
        flash(_(u"Google Oauth error: {}").format(e), category="error")
        log.error(e)
    return redirect(url_for('web.login'))


@oauth.route('/unlink/google', methods=["GET"])
@user_login_required
@login_required
def google_login_unlink():
    return unlink_oauth(oauthblueprints[1]['id'])

128
cps/opds.py
@ -21,29 +21,41 @@
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
import datetime
|
||||
# import json
|
||||
from urllib.parse import unquote_plus
|
||||
from functools import wraps
|
||||
|
||||
from flask import Blueprint, request, render_template, make_response, abort, g, jsonify
|
||||
from flask import Blueprint, request, render_template, Response, g, make_response, abort
|
||||
from flask_login import current_user
|
||||
from flask_babel import get_locale
|
||||
from flask_babel import gettext as _
|
||||
|
||||
|
||||
from sqlalchemy.sql.expression import func, text, or_, and_, true
|
||||
from sqlalchemy.exc import InvalidRequestError, OperationalError
|
||||
from werkzeug.security import check_password_hash
|
||||
|
||||
from . import logger, config, db, calibre_db, ub, isoLanguages, constants
|
||||
from .usermanagement import requires_basic_auth_if_no_ano, auth
|
||||
from . import constants, logger, config, db, calibre_db, ub, services, isoLanguages
|
||||
from .helper import get_download_link, get_book_cover
|
||||
from .pagination import Pagination
|
||||
from .web import render_read_books
|
||||
|
||||
from .usermanagement import load_user_from_request
|
||||
from flask_babel import gettext as _
|
||||
|
||||
opds = Blueprint('opds', __name__)
|
||||
|
||||
log = logger.create()
|
||||
|
||||
|
||||
def requires_basic_auth_if_no_ano(f):
|
||||
@wraps(f)
|
||||
def decorated(*args, **kwargs):
|
||||
auth = request.authorization
|
||||
if config.config_anonbrowse != 1:
|
||||
if not auth or auth.type != 'basic' or not check_auth(auth.username, auth.password):
|
||||
return authenticate()
|
||||
return f(*args, **kwargs)
|
||||
if config.config_login_type == constants.LOGIN_LDAP and services.ldap and config.config_anonbrowse != 1:
|
||||
return services.ldap.basic_auth_required(f)
|
||||
return decorated
|
||||
|
||||
|
||||
@opds.route("/opds/")
|
||||
@opds.route("/opds")
|
||||
@requires_basic_auth_if_no_ano
|
||||
@ -57,7 +69,7 @@ def feed_osd():
|
||||
return render_xml_template('osd.xml', lang='en-EN')
|
||||
|
||||
|
||||
# @opds.route("/opds/search", defaults={'query': ""})
|
||||
@opds.route("/opds/search", defaults={'query': ""})
|
||||
@opds.route("/opds/search/<path:query>")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_cc_search(query):
|
||||
@ -95,8 +107,6 @@ def feed_letter_books(book_id):
|
||||
@opds.route("/opds/new")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_new():
|
||||
if not auth.current_user().check_visibility(constants.SIDEBAR_RECENT):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
|
||||
db.Books, True, [db.Books.timestamp.desc()],
|
||||
@ -107,8 +117,6 @@ def feed_new():
|
||||
@opds.route("/opds/discover")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_discover():
|
||||
if not auth.current_user().check_visibility(constants.SIDEBAR_RANDOM):
|
||||
abort(404)
|
||||
query = calibre_db.generate_linked_query(config.config_read_column, db.Books)
|
||||
entries = query.filter(calibre_db.common_filters()).order_by(func.random()).limit(config.config_books_per_page)
|
||||
pagination = Pagination(1, config.config_books_per_page, int(config.config_books_per_page))
|
||||
@ -118,8 +126,6 @@ def feed_discover():
|
||||
@opds.route("/opds/rated")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_best_rated():
|
||||
if not auth.current_user().check_visibility(constants.SIDEBAR_BEST_RATED):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
|
||||
db.Books, db.Books.ratings.any(db.Ratings.rating > 9),
|
||||
@ -131,8 +137,6 @@ def feed_best_rated():
|
||||
@opds.route("/opds/hot")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_hot():
|
||||
if not auth.current_user().check_visibility(constants.SIDEBAR_HOT):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
all_books = ub.session.query(ub.Downloads, func.count(ub.Downloads.book_id)).order_by(
|
||||
func.count(ub.Downloads.book_id).desc()).group_by(ub.Downloads.book_id)
|
||||
@ -155,16 +159,12 @@ def feed_hot():
|
||||
@opds.route("/opds/author")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_authorindex():
|
||||
if not auth.current_user().check_visibility(constants.SIDEBAR_AUTHOR):
|
||||
abort(404)
|
||||
return render_element_index(db.Authors.sort, db.books_authors_link, 'opds.feed_letter_author')
|
||||
|
||||
|
||||
@opds.route("/opds/author/letter/<book_id>")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_letter_author(book_id):
|
||||
if not auth.current_user().check_visibility(constants.SIDEBAR_AUTHOR):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
letter = true() if book_id == "00" else func.upper(db.Authors.sort).startswith(book_id)
|
||||
entries = calibre_db.session.query(db.Authors).join(db.books_authors_link).join(db.Books)\
|
||||
@ -186,8 +186,6 @@ def feed_author(book_id):
|
||||
@opds.route("/opds/publisher")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_publisherindex():
|
||||
if not auth.current_user().check_visibility(constants.SIDEBAR_PUBLISHER):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
entries = calibre_db.session.query(db.Publishers)\
|
||||
.join(db.books_publishers_link)\
|
||||
@ -209,16 +207,12 @@ def feed_publisher(book_id):
|
||||
@opds.route("/opds/category")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_categoryindex():
|
||||
if not auth.current_user().check_visibility(constants.SIDEBAR_CATEGORY):
|
||||
abort(404)
|
||||
return render_element_index(db.Tags.name, db.books_tags_link, 'opds.feed_letter_category')
|
||||
|
||||
|
||||
@opds.route("/opds/category/letter/<book_id>")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_letter_category(book_id):
|
||||
if not auth.current_user().check_visibility(constants.SIDEBAR_CATEGORY):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
letter = true() if book_id == "00" else func.upper(db.Tags.name).startswith(book_id)
|
||||
entries = calibre_db.session.query(db.Tags)\
|
||||
@ -242,16 +236,12 @@ def feed_category(book_id):
|
||||
@opds.route("/opds/series")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_seriesindex():
|
||||
if not auth.current_user().check_visibility(constants.SIDEBAR_SERIES):
|
||||
abort(404)
|
||||
return render_element_index(db.Series.sort, db.books_series_link, 'opds.feed_letter_series')
|
||||
|
||||
|
||||
@opds.route("/opds/series/letter/<book_id>")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_letter_series(book_id):
|
||||
if not auth.current_user().check_visibility(constants.SIDEBAR_SERIES):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
letter = true() if book_id == "00" else func.upper(db.Series.sort).startswith(book_id)
|
||||
entries = calibre_db.session.query(db.Series)\
|
||||
@ -281,8 +271,6 @@ def feed_series(book_id):
|
||||
@opds.route("/opds/ratings")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_ratingindex():
|
||||
if not auth.current_user().check_visibility(constants.SIDEBAR_RATING):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
entries = calibre_db.session.query(db.Ratings, func.count('books_ratings_link.book').label('count'),
|
||||
(db.Ratings.rating / 2).label('name')) \
|
||||
@ -309,8 +297,6 @@ def feed_ratings(book_id):
|
||||
@opds.route("/opds/formats")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_formatindex():
|
||||
if not auth.current_user().check_visibility(constants.SIDEBAR_FORMAT):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
entries = calibre_db.session.query(db.Data).join(db.Books)\
|
||||
.filter(calibre_db.common_filters()) \
|
||||
@ -318,6 +304,7 @@ def feed_formatindex():
|
||||
.order_by(db.Data.format).all()
|
||||
pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
|
||||
len(entries))
|
||||
|
||||
element = list()
|
||||
for entry in entries:
|
||||
element.append(FeedObject(entry.format, entry.format))
|
||||
@ -340,14 +327,12 @@ def feed_format(book_id):
|
||||
@opds.route("/opds/language/")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_languagesindex():
|
||||
if not auth.current_user().check_visibility(constants.SIDEBAR_LANGUAGE):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
if auth.current_user().filter_language() == "all":
|
||||
if current_user.filter_language() == u"all":
|
||||
languages = calibre_db.speaking_language()
|
||||
else:
|
||||
languages = calibre_db.session.query(db.Languages).filter(
|
||||
db.Languages.lang_code == auth.current_user().filter_language()).all()
|
||||
db.Languages.lang_code == current_user.filter_language()).all()
|
||||
languages[0].name = isoLanguages.get_language_name(get_locale(), languages[0].lang_code)
|
||||
pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
|
||||
len(languages))
|
||||
@ -369,11 +354,8 @@ def feed_languages(book_id):
|
||||
@opds.route("/opds/shelfindex")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_shelfindex():
|
||||
if not (auth.current_user().is_authenticated or g.allow_anonymous):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
shelf = ub.session.query(ub.Shelf).filter(
|
||||
or_(ub.Shelf.is_public == 1, ub.Shelf.user_id == auth.current_user().id)).order_by(ub.Shelf.name).all()
|
||||
shelf = g.shelves_access
|
||||
number = len(shelf)
|
||||
pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
|
||||
number)
|
||||
@ -383,19 +365,16 @@ def feed_shelfindex():
|
||||
@opds.route("/opds/shelf/<int:book_id>")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_shelf(book_id):
|
||||
if not (auth.current_user().is_authenticated or g.allow_anonymous):
|
||||
abort(404)
|
||||
off = request.args.get("offset") or 0
|
||||
if auth.current_user().is_anonymous:
|
||||
if current_user.is_anonymous:
|
||||
shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.is_public == 1,
|
||||
ub.Shelf.id == book_id).first()
|
||||
else:
|
||||
shelf = ub.session.query(ub.Shelf).filter(or_(and_(ub.Shelf.user_id == int(auth.current_user().id),
|
||||
shelf = ub.session.query(ub.Shelf).filter(or_(and_(ub.Shelf.user_id == int(current_user.id),
|
||||
ub.Shelf.id == book_id),
|
||||
and_(ub.Shelf.is_public == 1,
|
||||
ub.Shelf.id == book_id))).first()
|
||||
result = list()
|
||||
pagination = list()
|
||||
# user is allowed to access shelf
|
||||
if shelf:
|
||||
result, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1),
|
||||
@ -423,9 +402,16 @@ def feed_shelf(book_id):
|
||||
@opds.route("/opds/download/<book_id>/<book_format>/")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def opds_download_link(book_id, book_format):
|
||||
if not auth.current_user().role_download():
|
||||
return abort(401)
|
||||
client = "kobo" if "Kobo" in request.headers.get('User-Agent') else ""
|
||||
# I gave up with this: With enabled ldap login, the user doesn't get logged in, therefore it's always guest
|
||||
# workaround, loading the user from the request and checking its download rights here
|
||||
# in case of anonymous browsing user is None
|
||||
user = load_user_from_request(request) or current_user
|
||||
if not user.role_download():
|
||||
return abort(403)
|
||||
if "Kobo" in request.headers.get('User-Agent'):
|
||||
client = "kobo"
|
||||
else:
|
||||
client = ""
|
||||
return get_download_link(book_id, book_format.lower(), client)
|
||||
|
||||
|
||||
@ -443,17 +429,6 @@ def get_metadata_calibre_companion(uuid, library):
|
||||
return ""
|
||||
|
||||
|
||||
@opds.route("/opds/stats")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def get_database_stats():
|
||||
stat = dict()
|
||||
stat['books'] = calibre_db.session.query(db.Books).count()
|
||||
stat['authors'] = calibre_db.session.query(db.Authors).count()
|
||||
stat['categories'] = calibre_db.session.query(db.Tags).count()
|
||||
stat['series'] = calibre_db.session.query(db.Series).count()
|
||||
return make_response(jsonify(stat))
|
||||
|
||||
|
||||
@opds.route("/opds/thumb_240_240/<book_id>")
|
||||
@opds.route("/opds/cover_240_240/<book_id>")
|
||||
@opds.route("/opds/cover_90_90/<book_id>")
|
||||
@ -466,8 +441,6 @@ def feed_get_cover(book_id):
|
||||
@opds.route("/opds/readbooks")
|
||||
@requires_basic_auth_if_no_ano
|
||||
def feed_read_books():
|
||||
-    if not (auth.current_user().check_visibility(constants.SIDEBAR_READ_AND_UNREAD) and not auth.current_user().is_anonymous):
-        return abort(403)
     off = request.args.get("offset") or 0
     result, pagination = render_read_books(int(off) / (int(config.config_books_per_page)) + 1, True, True)
     return render_xml_template('feed.xml', entries=result, pagination=pagination)
@@ -476,8 +449,6 @@ def feed_read_books():
 @opds.route("/opds/unreadbooks")
 @requires_basic_auth_if_no_ano
 def feed_unread_books():
-    if not (auth.current_user().check_visibility(constants.SIDEBAR_READ_AND_UNREAD) and not auth.current_user().is_anonymous):
-        return abort(403)
     off = request.args.get("offset") or 0
     result, pagination = render_read_books(int(off) / (int(config.config_books_per_page)) + 1, False, True)
     return render_xml_template('feed.xml', entries=result, pagination=pagination)
@@ -507,11 +478,32 @@ def feed_search(term):
         return render_xml_template('feed.xml', searchterm="")
 
 
+def check_auth(username, password):
+    try:
+        username = username.encode('windows-1252')
+    except UnicodeEncodeError:
+        username = username.encode('utf-8')
+    user = ub.session.query(ub.User).filter(func.lower(ub.User.name) ==
+                                            username.decode('utf-8').lower()).first()
+    if bool(user and check_password_hash(str(user.password), password)):
+        return True
+    else:
+        ip_address = request.headers.get('X-Forwarded-For', request.remote_addr)
+        log.warning('OPDS Login failed for user "%s" IP-address: %s', username.decode('utf-8'), ip_address)
+        return False
+
+
+def authenticate():
+    return Response(
+        'Could not verify your access level for that URL.\n'
+        'You have to login with proper credentials', 401,
+        {'WWW-Authenticate': 'Basic realm="Login Required"'})
+
+
 def render_xml_template(*args, **kwargs):
     # ToDo: return time in current timezone similar to %z
     currtime = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S+00:00")
-    xml = render_template(current_time=currtime, instance=config.config_calibre_web_title, constants=constants.sidebar_settings, *args, **kwargs)
+    xml = render_template(current_time=currtime, instance=config.config_calibre_web_title, *args, **kwargs)
     response = make_response(xml)
     response.headers["Content-Type"] = "application/atom+xml; charset=utf-8"
     return response
@@ -536,7 +528,7 @@ def render_element_index(database_column, linked_table, folder):
     entries = entries.join(linked_table).join(db.Books)
     entries = entries.filter(calibre_db.common_filters()).group_by(func.upper(func.substr(database_column, 1, 1))).all()
     elements = []
-    if off == 0 and entries:
+    if off == 0:
         elements.append({'id': "00", 'name': _("All")})
         shift = 1
     for entry in entries[
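The `check_auth`/`authenticate` pair above implements HTTP Basic authentication by hand: decode the `Authorization` header, look the user up, and answer 401 with a `WWW-Authenticate` challenge on failure. A minimal standalone sketch of the same flow — the dict-based user store is an assumption for illustration; Calibre-Web queries `ub.User` and verifies with `check_password_hash`:

```python
import base64

USERS = {"alice": "secret"}  # hypothetical in-memory store, not the real ub.User table


def parse_authorization(header):
    # Split an "Authorization: Basic <base64>" header into (user, password).
    scheme, _, payload = header.partition(" ")
    if scheme.lower() != "basic":
        return None
    decoded = base64.b64decode(payload).decode("utf-8")
    user, _, password = decoded.partition(":")
    return user, password


def check_auth(username, password):
    # Stand-in for the database lookup + password-hash check above.
    return USERS.get(username) == password
```

On failure the real handler returns the 401 `Response` shown in `authenticate()`, which prompts the OPDS client to retry with credentials.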
@@ -29,7 +29,7 @@
 
 from urllib.parse import urlparse, urljoin
 
-from flask import request, url_for, redirect, current_app
+from flask import request, url_for, redirect
 
 
 def is_safe_url(target):
@@ -38,15 +38,16 @@ def is_safe_url(target):
     return test_url.scheme in ('http', 'https') and ref_url.netloc == test_url.netloc
 
 
-def remove_prefix(text, prefix):
-    if text.startswith(prefix):
-        return text[len(prefix):]
-    return ""
-
-
 def get_redirect_target():
     for target in request.values.get('next'), request.referrer:
         if not target:
             continue
         if is_safe_url(target):
             return target
 
 
-def get_redirect_location(next, endpoint, **values):
-    target = next or url_for(endpoint, **values)
-    adapter = current_app.url_map.bind(urlparse(request.host_url).netloc)
-    if not len(adapter.allowed_methods(remove_prefix(target, request.environ.get('HTTP_X_SCRIPT_NAME',"")))):
+def redirect_back(endpoint, **values):
+    target = request.form['next']
+    if not target or not is_safe_url(target):
         target = url_for(endpoint, **values)
-    return target
+    return redirect(target)
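Both `redirect_back` and `get_redirect_target` lean on `is_safe_url` to block open redirects. A self-contained sketch of that check — the request's `host_url` is passed in explicitly here instead of being read from Flask's request context:

```python
from urllib.parse import urlparse, urljoin


def is_safe_url(host_url, target):
    # "Safe" means: resolves to the same host as the current request and
    # uses a plain http(s) scheme, so tricks like "//evil.example" or
    # "javascript:..." are rejected.
    ref_url = urlparse(host_url)
    test_url = urlparse(urljoin(host_url, target))
    return test_url.scheme in ('http', 'https') and ref_url.netloc == test_url.netloc
```

`urljoin` resolves relative targets against the current host first, which is what lets plain paths like `/shelf/5` pass while absolute foreign URLs fail.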
@@ -25,13 +25,12 @@ from datetime import datetime
 from functools import wraps
 
 from flask import Blueprint, request, make_response, abort, url_for, flash, redirect
-from .cw_login import login_user, current_user
+from flask_login import login_required, current_user, login_user
 from flask_babel import gettext as _
 from sqlalchemy.sql.expression import true
 
 from . import config, logger, ub
 from .render_template import render_title_template
-from .usermanagement import user_login_required
 
 
 remotelogin = Blueprint('remotelogin', __name__)
@@ -59,21 +58,21 @@ def remote_login():
     ub.session.add(auth_token)
     ub.session_commit()
     verify_url = url_for('remotelogin.verify_token', token=auth_token.auth_token, _external=true)
-    log.debug("Remot Login request with token: %s", auth_token.auth_token)
-    return render_title_template('remote_login.html', title=_("Login"), token=auth_token.auth_token,
+    log.debug(u"Remot Login request with token: %s", auth_token.auth_token)
+    return render_title_template('remote_login.html', title=_(u"Login"), token=auth_token.auth_token,
                                  verify_url=verify_url, page="remotelogin")
 
 
 @remotelogin.route('/verify/<token>')
 @remote_login_required
-@user_login_required
+@login_required
 def verify_token(token):
     auth_token = ub.session.query(ub.RemoteAuthToken).filter(ub.RemoteAuthToken.auth_token == token).first()
 
     # Token not found
     if auth_token is None:
-        flash(_("Token not found"), category="error")
-        log.error("Remote Login token not found")
+        flash(_(u"Token not found"), category="error")
+        log.error(u"Remote Login token not found")
         return redirect(url_for('web.index'))
 
     # Token expired
@@ -81,8 +80,8 @@ def verify_token(token):
         ub.session.delete(auth_token)
         ub.session_commit()
 
-        flash(_("Token has expired"), category="error")
-        log.error("Remote Login token expired")
+        flash(_(u"Token has expired"), category="error")
+        log.error(u"Remote Login token expired")
         return redirect(url_for('web.index'))
 
     # Update token with user information
@@ -90,8 +89,8 @@ def verify_token(token):
     auth_token.verified = True
     ub.session_commit()
 
-    flash(_("Success! Please return to your device"), category="success")
-    log.debug("Remote Login token for userid %s verified", auth_token.user_id)
+    flash(_(u"Success! Please return to your device"), category="success")
+    log.debug(u"Remote Login token for userid %s verified", auth_token.user_id)
     return redirect(url_for('web.index'))
 
 
@@ -106,7 +105,7 @@ def token_verified():
     # Token not found
     if auth_token is None:
         data['status'] = 'error'
-        data['message'] = _("Token not found")
+        data['message'] = _(u"Token not found")
 
     # Token expired
     elif datetime.now() > auth_token.expiration:
@@ -114,7 +113,7 @@ def token_verified():
         ub.session_commit()
 
         data['status'] = 'error'
-        data['message'] = _("Token has expired")
+        data['message'] = _(u"Token has expired")
 
     elif not auth_token.verified:
         data['status'] = 'not_verified'
@@ -127,8 +126,8 @@ def token_verified():
         ub.session_commit("User {} logged in via remotelogin, token deleted".format(user.name))
 
         data['status'] = 'success'
-        log.debug("Remote Login for userid %s succeeded", user.id)
-        flash(_("Success! You are now logged in as: %(nickname)s", nickname=user.name), category="success")
+        log.debug(u"Remote Login for userid %s succeeded", user.id)
+        flash(_(u"you are now logged in as: '%(nickname)s'", nickname=user.name), category="success")
 
     response = make_response(json.dumps(data, ensure_ascii=False))
     response.headers["Content-Type"] = "application/json; charset=utf-8"
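The `verify_token`/`token_verified` handlers above walk a remote-login token through four states: missing, expired, pending, verified. That state machine can be sketched without the database layer — the 10-minute default lifetime is an assumption for the example; Calibre-Web stores the real expiration on `ub.RemoteAuthToken`:

```python
from datetime import datetime, timedelta


class RemoteAuthToken:
    # Minimal stand-in for ub.RemoteAuthToken: just expiration + verified flag.
    def __init__(self, lifetime=timedelta(minutes=10)):
        self.expiration = datetime.now() + lifetime
        self.verified = False


def token_status(token):
    # Mirrors the branch order in token_verified() above.
    if token is None:
        return 'error'          # "Token not found"
    if datetime.now() > token.expiration:
        return 'error'          # "Token has expired"
    if not token.verified:
        return 'not_verified'   # device still waiting for the browser click
    return 'success'
```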
@@ -19,10 +19,9 @@
 from flask import render_template, g, abort, request
 from flask_babel import gettext as _
 from werkzeug.local import LocalProxy
-from .cw_login import current_user
-from sqlalchemy.sql.expression import or_
+from flask_login import current_user
 
-from . import config, constants, logger, ub
+from . import config, constants, logger
+from .ub import User
 
 
@@ -46,12 +45,12 @@ def get_sidebar_config(kwargs=None):
                        "show_text": _('Show Hot Books'), "config_show": True})
     if current_user.role_admin():
         sidebar.append({"glyph": "glyphicon-download", "text": _('Downloaded Books'), "link": 'web.download_list',
-                        "id": "download", "visibility": constants.SIDEBAR_DOWNLOAD, 'public': (not current_user.is_anonymous),
+                        "id": "download", "visibility": constants.SIDEBAR_DOWNLOAD, 'public': (not g.user.is_anonymous),
                         "page": "download", "show_text": _('Show Downloaded Books'),
                         "config_show": content})
     else:
         sidebar.append({"glyph": "glyphicon-download", "text": _('Downloaded Books'), "link": 'web.books_list',
-                        "id": "download", "visibility": constants.SIDEBAR_DOWNLOAD, 'public': (not current_user.is_anonymous),
+                        "id": "download", "visibility": constants.SIDEBAR_DOWNLOAD, 'public': (not g.user.is_anonymous),
                         "page": "download", "show_text": _('Show Downloaded Books'),
                         "config_show": content})
     sidebar.append(
@@ -59,50 +58,47 @@ def get_sidebar_config(kwargs=None):
                    "visibility": constants.SIDEBAR_BEST_RATED, 'public': True, "page": "rated",
                    "show_text": _('Show Top Rated Books'), "config_show": True})
     sidebar.append({"glyph": "glyphicon-eye-open", "text": _('Read Books'), "link": 'web.books_list', "id": "read",
-                    "visibility": constants.SIDEBAR_READ_AND_UNREAD, 'public': (not current_user.is_anonymous),
-                    "page": "read", "show_text": _('Show Read and Unread'), "config_show": content})
+                    "visibility": constants.SIDEBAR_READ_AND_UNREAD, 'public': (not g.user.is_anonymous),
+                    "page": "read", "show_text": _('Show read and unread'), "config_show": content})
     sidebar.append(
         {"glyph": "glyphicon-eye-close", "text": _('Unread Books'), "link": 'web.books_list', "id": "unread",
-         "visibility": constants.SIDEBAR_READ_AND_UNREAD, 'public': (not current_user.is_anonymous), "page": "unread",
+         "visibility": constants.SIDEBAR_READ_AND_UNREAD, 'public': (not g.user.is_anonymous), "page": "unread",
          "show_text": _('Show unread'), "config_show": False})
     sidebar.append({"glyph": "glyphicon-random", "text": _('Discover'), "link": 'web.books_list', "id": "rand",
                     "visibility": constants.SIDEBAR_RANDOM, 'public': True, "page": "discover",
                     "show_text": _('Show Random Books'), "config_show": True})
     sidebar.append({"glyph": "glyphicon-inbox", "text": _('Categories'), "link": 'web.category_list', "id": "cat",
                     "visibility": constants.SIDEBAR_CATEGORY, 'public': True, "page": "category",
-                    "show_text": _('Show Category Section'), "config_show": True})
+                    "show_text": _('Show category selection'), "config_show": True})
     sidebar.append({"glyph": "glyphicon-bookmark", "text": _('Series'), "link": 'web.series_list', "id": "serie",
                     "visibility": constants.SIDEBAR_SERIES, 'public': True, "page": "series",
-                    "show_text": _('Show Series Section'), "config_show": True})
+                    "show_text": _('Show series selection'), "config_show": True})
     sidebar.append({"glyph": "glyphicon-user", "text": _('Authors'), "link": 'web.author_list', "id": "author",
                     "visibility": constants.SIDEBAR_AUTHOR, 'public': True, "page": "author",
-                    "show_text": _('Show Author Section'), "config_show": True})
+                    "show_text": _('Show author selection'), "config_show": True})
     sidebar.append(
         {"glyph": "glyphicon-text-size", "text": _('Publishers'), "link": 'web.publisher_list', "id": "publisher",
          "visibility": constants.SIDEBAR_PUBLISHER, 'public': True, "page": "publisher",
-         "show_text": _('Show Publisher Section'), "config_show":True})
+         "show_text": _('Show publisher selection'), "config_show":True})
     sidebar.append({"glyph": "glyphicon-flag", "text": _('Languages'), "link": 'web.language_overview', "id": "lang",
-                    "visibility": constants.SIDEBAR_LANGUAGE, 'public': (current_user.filter_language() == 'all'),
+                    "visibility": constants.SIDEBAR_LANGUAGE, 'public': (g.user.filter_language() == 'all'),
                     "page": "language",
-                    "show_text": _('Show Language Section'), "config_show": True})
+                    "show_text": _('Show language selection'), "config_show": True})
     sidebar.append({"glyph": "glyphicon-star-empty", "text": _('Ratings'), "link": 'web.ratings_list', "id": "rate",
                     "visibility": constants.SIDEBAR_RATING, 'public': True,
-                    "page": "rating", "show_text": _('Show Ratings Section'), "config_show": True})
+                    "page": "rating", "show_text": _('Show ratings selection'), "config_show": True})
     sidebar.append({"glyph": "glyphicon-file", "text": _('File formats'), "link": 'web.formats_list', "id": "format",
                     "visibility": constants.SIDEBAR_FORMAT, 'public': True,
-                    "page": "format", "show_text": _('Show File Formats Section'), "config_show": True})
+                    "page": "format", "show_text": _('Show file formats selection'), "config_show": True})
     sidebar.append(
         {"glyph": "glyphicon-trash", "text": _('Archived Books'), "link": 'web.books_list', "id": "archived",
-         "visibility": constants.SIDEBAR_ARCHIVED, 'public': (not current_user.is_anonymous), "page": "archived",
-         "show_text": _('Show Archived Books'), "config_show": content})
+         "visibility": constants.SIDEBAR_ARCHIVED, 'public': (not g.user.is_anonymous), "page": "archived",
+         "show_text": _('Show archived books'), "config_show": content})
     if not simple:
         sidebar.append(
             {"glyph": "glyphicon-th-list", "text": _('Books List'), "link": 'web.books_table', "id": "list",
-             "visibility": constants.SIDEBAR_LIST, 'public': (not current_user.is_anonymous), "page": "list",
+             "visibility": constants.SIDEBAR_LIST, 'public': (not g.user.is_anonymous), "page": "list",
              "show_text": _('Show Books List'), "config_show": content})
-    g.shelves_access = ub.session.query(ub.Shelf).filter(
-        or_(ub.Shelf.is_public == 1, ub.Shelf.user_id == current_user.id)).order_by(ub.Shelf.name).all()
 
     return sidebar, simple
 
 
@@ -111,7 +107,7 @@ def render_title_template(*args, **kwargs):
     sidebar, simple = get_sidebar_config(kwargs)
     try:
         return render_template(instance=config.config_calibre_web_title, sidebar=sidebar, simple=simple,
-                               accept=config.config_upload_formats.split(','),
+                               accept=constants.EXTENSIONS_UPLOAD,
                                *args, **kwargs)
     except PermissionError:
         log.error("No permission to access {} file.".format(args[0]))
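Each sidebar entry above carries a `visibility` bitmask that is tested against the user's view settings to decide what is shown. A toy illustration of how such entries are filtered — the bit values here are invented for the example; the real constants live in `cps/constants.py`:

```python
# Hypothetical bit values for illustration; see cps/constants.py for the real ones.
SIDEBAR_DOWNLOAD = 1 << 0
SIDEBAR_RANDOM = 1 << 1
SIDEBAR_SERIES = 1 << 2

sidebar = [
    {"id": "download", "visibility": SIDEBAR_DOWNLOAD},
    {"id": "rand", "visibility": SIDEBAR_RANDOM},
    {"id": "serie", "visibility": SIDEBAR_SERIES},
]


def visible_ids(entries, view_settings):
    # An entry is shown when its visibility bit is set in the user's settings.
    return [e["id"] for e in entries if e["visibility"] & view_settings]
```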
@@ -41,9 +41,9 @@ class ReverseProxied(object):
     """Wrap the application in this middleware and configure the
     front-end server to add these headers, to let you quietly bind
     this to a URL other than / and to an HTTP scheme that is
-    different from what is used locally.
+    different than what is used locally.
 
-    Code courtesy of: https://flask.pocoo.org/snippets/35/
+    Code courtesy of: http://flask.pocoo.org/snippets/35/
 
     In nginx:
     location /myprefix {
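The hunk above only touches the docstring, but the middleware it describes is short enough to sketch in full, following the well-known Flask snippet it credits. The header names `X-Script-Name` and `X-Scheme` are the ones that snippet uses; this is a sketch, not Calibre-Web's exact implementation:

```python
class ReverseProxied:
    """WSGI middleware that trusts prefix/scheme headers set by a front-end
    proxy, so the app can live under /myprefix behind nginx."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        script_name = environ.get('HTTP_X_SCRIPT_NAME', '')
        if script_name:
            # Move the proxy prefix out of PATH_INFO into SCRIPT_NAME so
            # url_for() generates links that include the prefix.
            environ['SCRIPT_NAME'] = script_name
            path = environ.get('PATH_INFO', '')
            if path.startswith(script_name):
                environ['PATH_INFO'] = path[len(script_name):]
        scheme = environ.get('HTTP_X_SCHEME', '')
        if scheme:
            environ['wsgi.url_scheme'] = scheme
        return self.app(environ, start_response)
```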
@@ -19,26 +19,19 @@
 import datetime
 
 from . import config, constants
-from .services.background_scheduler import BackgroundScheduler, CronTrigger, use_APScheduler
+from .services.background_scheduler import BackgroundScheduler, use_APScheduler
 from .tasks.database import TaskReconnectDatabase
-from .tasks.clean import TaskClean
 from .tasks.thumbnail import TaskGenerateCoverThumbnails, TaskGenerateSeriesThumbnails, TaskClearCoverThumbnailCache
 from .services.worker import WorkerThread
 from .tasks.metadata_backup import TaskBackupMetadata
 
 
 def get_scheduled_tasks(reconnect=True):
     tasks = list()
-    # Reconnect Calibre database (metadata.db) based on config.schedule_reconnect
+    # config.schedule_reconnect or
+    # Reconnect Calibre database (metadata.db)
     if reconnect:
         tasks.append([lambda: TaskReconnectDatabase(), 'reconnect', False])
 
-    # Delete temp folder
-    tasks.append([lambda: TaskClean(), 'delete temp', True])
-
     # Generate metadata.opf file for each changed book
     if config.schedule_metadata_backup:
         tasks.append([lambda: TaskBackupMetadata("en"), 'backup metadata', False])
 
     # Generate all missing book cover thumbnails
     if config.schedule_generate_book_covers:
         tasks.append([lambda: TaskClearCoverThumbnailCache(0), 'delete superfluous book covers', True])
@@ -69,13 +62,10 @@ def register_scheduled_tasks(reconnect=True):
     duration = config.schedule_duration
 
     # Register scheduled tasks
-    timezone_info = datetime.datetime.now(datetime.timezone.utc).astimezone().tzinfo
-    scheduler.schedule_tasks(tasks=get_scheduled_tasks(reconnect), trigger=CronTrigger(hour=start,
-                                                                                      timezone=timezone_info))
+    scheduler.schedule_tasks(tasks=get_scheduled_tasks(reconnect), trigger='cron', hour=start)
     end_time = calclulate_end_time(start, duration)
-    scheduler.schedule(func=end_scheduled_tasks, trigger=CronTrigger(hour=end_time.hour, minute=end_time.minute,
-                                                                     timezone=timezone_info),
-                       name="end scheduled task")
+    scheduler.schedule(func=end_scheduled_tasks, trigger='cron', name="end scheduled task", hour=end_time.hour,
+                       minute=end_time.minute)
 
     # Kick-off tasks, if they should currently be running
     if should_task_be_running(start, duration):
@@ -93,8 +83,6 @@ def register_startup_tasks():
     # Ignore tasks that should currently be running, as these will be added when registering scheduled tasks
     if constants.APP_MODE in ['development', 'test'] and not should_task_be_running(start, duration):
         scheduler.schedule_tasks_immediately(tasks=get_scheduled_tasks(False))
-    else:
-        scheduler.schedule_tasks_immediately(tasks=[[lambda: TaskClean(), 'delete temp', True]])
 
 
 def should_task_be_running(start, duration):
@@ -103,7 +91,6 @@ def should_task_be_running(start, duration):
     end_time = start_time + datetime.timedelta(hours=duration // 60, minutes=duration % 60)
     return start_time < now < end_time
 
-
 def calclulate_end_time(start, duration):
     start_time = datetime.datetime.now().replace(hour=start, minute=0)
     return start_time + datetime.timedelta(hours=duration // 60, minutes=duration % 60)
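`should_task_be_running` and `calclulate_end_time` (the repository's own spelling) both derive the maintenance window from a start hour and a duration in minutes. The arithmetic in isolation, with `now` injectable so it can be tested deterministically:

```python
import datetime


def task_window(start, duration, now=None):
    # start: hour of day (0-23); duration: minutes. Returns (start_time, end_time).
    now = now or datetime.datetime.now()
    start_time = now.replace(hour=start, minute=0, second=0, microsecond=0)
    end_time = start_time + datetime.timedelta(hours=duration // 60, minutes=duration % 60)
    return start_time, end_time


def should_task_be_running(start, duration, now=None):
    # True while "now" falls strictly inside today's maintenance window.
    now = now or datetime.datetime.now()
    start_time, end_time = task_window(start, duration, now)
    return start_time < now < end_time
```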
124
cps/search.py
@@ -19,19 +19,17 @@ from datetime import datetime
 
 from flask import Blueprint, request, redirect, url_for, flash
 from flask import session as flask_session
-from .cw_login import current_user
+from flask_login import current_user
 from flask_babel import format_date
 from flask_babel import gettext as _
 from sqlalchemy.sql.expression import func, not_, and_, or_, text, true
 from sqlalchemy.sql.functions import coalesce
 
 from . import logger, db, calibre_db, config, ub
-from .string_helper import strip_whitespaces
 from .usermanagement import login_required_if_no_ano
 from .render_template import render_title_template
 from .pagination import Pagination
 
 
 search = Blueprint('search', __name__)
 
 log = logger.create()
@@ -47,7 +45,7 @@ def simple_search():
         return render_title_template('search.html',
                                      searchterm="",
                                      result_count=0,
-                                     title=_("Search"),
+                                     title=_(u"Search"),
                                      page="search")
@@ -82,27 +80,16 @@ def adv_search_custom_columns(cc, term, q):
             if custom_end:
                 q = q.filter(getattr(db.Books, 'custom_column_' + str(c.id)).any(
                     func.datetime(db.cc_classes[c.id].value) <= func.datetime(custom_end)))
-        elif c.datatype in ["int", "float"]:
-            custom_low = term.get('custom_column_' + str(c.id) + '_low')
-            custom_high = term.get('custom_column_' + str(c.id) + '_high')
-            if custom_low:
-                q = q.filter(getattr(db.Books, 'custom_column_' + str(c.id)).any(
-                    db.cc_classes[c.id].value >= custom_low))
-            if custom_high:
-                q = q.filter(getattr(db.Books, 'custom_column_' + str(c.id)).any(
-                    db.cc_classes[c.id].value <= custom_high))
         else:
             custom_query = term.get('custom_column_' + str(c.id))
-            if c.datatype == 'bool':
-                if custom_query != "Any":
-                    if custom_query == "":
-                        q = q.filter(~getattr(db.Books, 'custom_column_' + str(c.id)).
-                                     any(db.cc_classes[c.id].value >= 0))
-                    else:
-                        q = q.filter(getattr(db.Books, 'custom_column_' + str(c.id)).any(
-                            db.cc_classes[c.id].value == bool(custom_query == "True")))
-            elif custom_query != '' and custom_query is not None:
-                if c.datatype == 'rating':
+            if custom_query != '' and custom_query is not None:
+                if c.datatype == 'bool':
+                    q = q.filter(getattr(db.Books, 'custom_column_' + str(c.id)).any(
+                        db.cc_classes[c.id].value == (custom_query == "True")))
+                elif c.datatype == 'int' or c.datatype == 'float':
+                    q = q.filter(getattr(db.Books, 'custom_column_' + str(c.id)).any(
+                        db.cc_classes[c.id].value == custom_query))
+                elif c.datatype == 'rating':
                     q = q.filter(getattr(db.Books, 'custom_column_' + str(c.id)).any(
                         db.cc_classes[c.id].value == int(float(custom_query) * 2)))
                 else:
@@ -141,10 +128,10 @@ def adv_search_read_status(read_status):
         db_filter = coalesce(ub.ReadBook.read_status, 0) != ub.ReadBook.STATUS_FINISHED
     else:
         try:
-            if read_status == "":
-                db_filter = coalesce(db.cc_classes[config.config_read_column].value, 2) == 2
+            if read_status == "True":
+                db_filter = db.cc_classes[config.config_read_column].value == True
             else:
-                db_filter = db.cc_classes[config.config_read_column].value == bool(read_status == "True")
+                db_filter = coalesce(db.cc_classes[config.config_read_column].value, False) != True
         except (KeyError, AttributeError, IndexError):
             log.error("Custom Column No.{} does not exist in calibre database".format(config.config_read_column))
             flash(_("Custom Column No.%(column)d does not exist in calibre database",
@@ -198,18 +185,18 @@ def extend_search_term(searchterm,
     searchterm.extend((author_name.replace('|', ','), book_title, publisher))
     if pub_start:
         try:
-            searchterm.extend([_("Published after ") +
+            searchterm.extend([_(u"Published after ") +
                                format_date(datetime.strptime(pub_start, "%Y-%m-%d"),
                                            format='medium')])
         except ValueError:
-            pub_start = ""
+            pub_start = u""
     if pub_end:
         try:
-            searchterm.extend([_("Published before ") +
+            searchterm.extend([_(u"Published before ") +
                                format_date(datetime.strptime(pub_end, "%Y-%m-%d"),
                                            format='medium')])
         except ValueError:
-            pub_end = ""
+            pub_end = u""
     elements = {'tag': db.Tags, 'serie':db.Series, 'shelf':ub.Shelf}
     for key, db_element in elements.items():
         tag_names = calibre_db.session.query(db_element).filter(db_element.id.in_(tags['include_' + key])).all()
@@ -227,11 +214,11 @@ def extend_search_term(searchterm,
     language_names = calibre_db.speaking_language(language_names)
     searchterm.extend(language.name for language in language_names)
     if rating_high:
-        searchterm.extend([_("Rating <= %(rating)s", rating=rating_high)])
+        searchterm.extend([_(u"Rating <= %(rating)s", rating=rating_high)])
     if rating_low:
-        searchterm.extend([_("Rating >= %(rating)s", rating=rating_low)])
-    if read_status != "Any":
-        searchterm.extend([_("Read Status = '%(status)s'", status=read_status)])
+        searchterm.extend([_(u"Rating >= %(rating)s", rating=rating_low)])
+    if read_status:
+        searchterm.extend([_(u"Read Status = %(status)s", status=read_status)])
     searchterm.extend(ext for ext in tags['include_extension'])
     searchterm.extend(ext for ext in tags['exclude_extension'])
     # handle custom columns
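`adv_search_read_status` above uses `coalesce(..., 0)` so that books with no read-status row at all still count as unread. The same SQL semantics can be demonstrated with plain `sqlite3` instead of SQLAlchemy:

```python
import sqlite3

STATUS_UNREAD, STATUS_FINISHED = 0, 1  # mirrors the ub.ReadBook status values

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE books (id INTEGER)")
con.execute("CREATE TABLE book_read_link (book_id INTEGER, read_status INTEGER)")
con.executemany("INSERT INTO books VALUES (?)", [(1,), (2,), (3,)])
# book 1 is finished, book 2 is explicitly unread, book 3 has no row at all
con.executemany("INSERT INTO book_read_link VALUES (?, ?)",
                [(1, STATUS_FINISHED), (2, STATUS_UNREAD)])

# COALESCE maps the missing status (NULL from the LEFT JOIN) to 0,
# so book 3 lands in the unread list alongside book 2.
unread = sorted(r[0] for r in con.execute(
    "SELECT b.id FROM books b "
    "LEFT JOIN book_read_link l ON l.book_id = b.id "
    "WHERE COALESCE(l.read_status, 0) != ?", (STATUS_FINISHED,)))
```

Without the `COALESCE`, `NULL != 1` evaluates to `NULL` in SQL and book 3 would silently drop out of both lists.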
@@ -244,8 +231,7 @@ def render_adv_search_results(term, offset=None, order=None, limit=None):
     pagination = None
 
     cc = calibre_db.get_cc_columns(config, filter_config_custom_read=True)
-    calibre_db.create_functions()
-    # calibre_db.session.connection().connection.connection.create_function("lower", 1, db.lcase)
+    calibre_db.session.connection().connection.connection.create_function("lower", 1, db.lcase)
     query = calibre_db.generate_linked_query(config.config_read_column, db.Books)
     q = query.outerjoin(db.books_series_link, db.Books.id == db.books_series_link.c.book)\
         .outerjoin(db.Series)\
@@ -258,21 +244,21 @@ def render_adv_search_results(term, offset=None, order=None, limit=None):
         tags['include_' + element] = term.get('include_' + element)
         tags['exclude_' + element] = term.get('exclude_' + element)
 
-    author_name = term.get("authors")
-    book_title = term.get("title")
+    author_name = term.get("author_name")
+    book_title = term.get("book_title")
     publisher = term.get("publisher")
     pub_start = term.get("publishstart")
     pub_end = term.get("publishend")
     rating_low = term.get("ratinghigh")
     rating_high = term.get("ratinglow")
-    description = term.get("comments")
+    description = term.get("comment")
     read_status = term.get("read_status")
     if author_name:
-        author_name = strip_whitespaces(author_name).lower().replace(',', '|')
+        author_name = author_name.strip().lower().replace(',', '|')
     if book_title:
-        book_title = strip_whitespaces(book_title).lower()
+        book_title = book_title.strip().lower()
     if publisher:
-        publisher = strip_whitespaces(publisher).lower()
+        publisher = publisher.strip().lower()
 
     search_term = []
     cc_present = False
@@ -281,36 +267,23 @@ def render_adv_search_results(term, offset=None, order=None, limit=None):
             column_start = term.get('custom_column_' + str(c.id) + '_start')
             column_end = term.get('custom_column_' + str(c.id) + '_end')
             if column_start:
-                search_term.extend(["{} >= {}".format(c.name,
+                search_term.extend([u"{} >= {}".format(c.name,
                                                       format_date(datetime.strptime(column_start, "%Y-%m-%d").date(),
                                                                   format='medium')
                                                       )])
                 cc_present = True
             if column_end:
-                search_term.extend(["{} <= {}".format(c.name,
-                                                      format_date(datetime.strptime(column_end, "%Y-%m-%d").date(),
+                search_term.extend([u"{} <= {}".format(c.name,
+                                                       format_date(datetime.strptime(column_end, "%Y-%m-%d").date(),
                                                                   format='medium')
                                                       )])
                 cc_present = True
-        elif c.datatype in ["int", "float"]:
-            column_low = term.get('custom_column_' + str(c.id) + '_low')
-            column_high = term.get('custom_column_' + str(c.id) + '_high')
-            if column_low:
-                search_term.extend(["{} >= {}".format(c.name, column_low)])
-                cc_present = True
-            if column_high:
-                search_term.extend(["{} <= {}".format(c.name,column_high)])
-                cc_present = True
-        elif c.datatype == "bool":
-            if term.get('custom_column_' + str(c.id)) != "Any":
-                search_term.extend([("{}: {}".format(c.name, term.get('custom_column_' + str(c.id))))])
-                cc_present = True
         elif term.get('custom_column_' + str(c.id)):
-            search_term.extend([("{}: {}".format(c.name, term.get('custom_column_' + str(c.id))))])
+            search_term.extend([(u"{}: {}".format(c.name, term.get('custom_column_' + str(c.id))))])
             cc_present = True
 
     if any(tags.values()) or author_name or book_title or publisher or pub_start or pub_end or rating_low \
-            or rating_high or description or cc_present or read_status != "Any":
+            or rating_high or description or cc_present or read_status:
         search_term, pub_start, pub_end = extend_search_term(search_term,
                                                              author_name,
                                                              book_title,
@@ -329,8 +302,7 @@ def render_adv_search_results(term, offset=None, order=None, limit=None):
             q = q.filter(func.datetime(db.Books.pubdate) > func.datetime(pub_start))
         if pub_end:
             q = q.filter(func.datetime(db.Books.pubdate) < func.datetime(pub_end))
-        if read_status != "Any":
-            q = q.filter(adv_search_read_status(read_status))
+        q = q.filter(adv_search_read_status(read_status))
         if publisher:
             q = q.filter(db.Books.publishers.any(func.lower(db.Publishers.name).ilike("%" + publisher + "%")))
         q = adv_search_tag(q, tags['include_tag'], tags['exclude_tag'])
@@ -367,7 +339,7 @@ def render_adv_search_results(term, offset=None, order=None, limit=None):
                                 pagination=pagination,
                                 entries=entries,
                                 result_count=result_count,
-                                title=_("Advanced Search"), page="advsearch",
+                                title=_(u"Advanced Search"), page="advsearch",
                                 order=order[1])
 
 
@@ -394,28 +366,22 @@ def render_prepare_search_form(cc):
         .filter(calibre_db.common_filters()) \
         .group_by(db.Data.format)\
         .order_by(db.Data.format).all()
-    if current_user.filter_language() == "all":
+    if current_user.filter_language() == u"all":
         languages = calibre_db.speaking_language()
     else:
         languages = None
     return render_title_template('search_form.html', tags=tags, languages=languages, extensions=extensions,
-                                 series=series,shelves=shelves, title=_("Advanced Search"), cc=cc, page="advsearch")
+                                 series=series,shelves=shelves, title=_(u"Advanced Search"), cc=cc, page="advsearch")
 
 
 def render_search_results(term, offset=None, order=None, limit=None):
-    if term:
-        join = db.books_series_link, db.Books.id == db.books_series_link.c.book, db.Series
-        entries, result_count, pagination = calibre_db.get_search_results(term,
-                                                                          config,
-                                                                          offset,
-                                                                          order,
-                                                                          limit,
-                                                                          *join)
-    else:
-        entries = list()
-        order = [None, None]
-        pagination = result_count = None
-
+    join = db.books_series_link, db.Books.id == db.books_series_link.c.book, db.Series
+    entries, result_count, pagination = calibre_db.get_search_results(term,
+                                                                      config,
+                                                                      offset,
+                                                                      order,
+                                                                      limit,
+                                                                      *join)
     return render_title_template('search.html',
                                  searchterm=term,
                                  pagination=pagination,
@@ -423,7 +389,7 @@ def render_search_results(term, offset=None, order=None, limit=None):
                                  adv_searchterm=term,
                                  entries=entries,
                                  result_count=result_count,
-                                 title=_("Search"),
+                                 title=_(u"Search"),
                                  page="search",
                                  order=order[1])
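The `int(float(custom_query) * 2)` conversion in the rating branch above reflects how Calibre stores ratings: at half-star resolution, i.e. doubled. The conversion in isolation:

```python
def rating_to_db(stars):
    # Calibre persists ratings doubled so half stars fit in an integer:
    # 4.5 stars -> 9, 3 stars -> 6. Input arrives as a string from the form.
    return int(float(stars) * 2)


def rating_from_db(value):
    # Inverse mapping, back to a (possibly fractional) star count.
    return value / 2
```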
@ -23,16 +23,17 @@ import json
|
||||
import os
|
||||
import sys
|
||||
|
||||
from flask import Blueprint, request, url_for, make_response, jsonify
|
||||
from .cw_login import current_user
|
||||
from flask import Blueprint, Response, request, url_for
|
||||
from flask_login import current_user
|
||||
from flask_login import login_required
|
from flask_babel import get_locale
from sqlalchemy.exc import InvalidRequestError, OperationalError
from sqlalchemy.orm.attributes import flag_modified

from cps.services.Metadata import Metadata
from . import constants, logger, ub, web_server
from .usermanagement import user_login_required

# current_milli_time = lambda: int(round(time() * 1000))

meta = Blueprint("metadata", __name__)

@@ -80,7 +81,7 @@ cl = list_classes(new_list)


@meta.route("/metadata/provider")
@user_login_required
@login_required
def metadata_provider():
    active = current_user.view_settings.get("metadata", {})
    provider = list()
@@ -89,12 +90,12 @@ def metadata_provider():
        provider.append(
            {"name": c.__name__, "active": ac, "initial": ac, "id": c.__id__}
        )
    return make_response(jsonify(provider))
    return Response(json.dumps(provider), mimetype="application/json")


@meta.route("/metadata/provider", methods=["POST"])
@meta.route("/metadata/provider/<prov_name>", methods=["POST"])
@user_login_required
@login_required
def metadata_change_active_provider(prov_name):
    new_state = request.get_json()
    active = current_user.view_settings.get("metadata", {})
@@ -114,12 +115,14 @@ def metadata_change_active_provider(prov_name):
    provider = next((c for c in cl if c.__id__ == prov_name), None)
    if provider is not None:
        data = provider.search(new_state.get("query", ""))
        return make_response(jsonify([asdict(x) for x in data]))
        return Response(
            json.dumps([asdict(x) for x in data]), mimetype="application/json"
        )
    return ""


@meta.route("/metadata/search", methods=["POST"])
@user_login_required
@login_required
def metadata_search():
    query = request.form.to_dict().get("query")
    data = list()
@@ -127,7 +130,7 @@ def metadata_search():
    locale = get_locale()
    if query:
        static_cover = url_for("static", filename="generic_cover.jpg")
        # ret = cl[0].search(query, static_cover, locale)
        # start = current_milli_time()
        with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
            meta = {
                executor.submit(c.search, query, static_cover, locale): c
@@ -136,4 +139,5 @@ def metadata_search():
            }
            for future in concurrent.futures.as_completed(meta):
                data.extend([asdict(x) for x in future.result() if x])
        return make_response(jsonify(data))
    # log.info({'Time elapsed {}'.format(current_milli_time()-start)})
    return Response(json.dumps(data), mimetype="application/json")
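The hunks above swap 0.6.19's manual `Response(json.dumps(...), mimetype="application/json")` for `make_response(jsonify(...))` on master. For a list payload the two produce the same body; `jsonify` just sets the mimetype itself. A minimal stdlib sketch of the manual construction, using a hypothetical provider entry shaped like the one `metadata_provider()` builds:

```python
import json

# Hypothetical provider record; field names mirror the dict built above
provider = [{"name": "Google", "active": True, "initial": True, "id": "google"}]

# What Response(json.dumps(...), mimetype=...) does by hand: serialize the
# payload and label it application/json yourself
body = json.dumps(provider)
mimetype = "application/json"

# The serialized body round-trips to the original structure
assert json.loads(body) == provider
```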
cps/server.py (115 changed lines)
@@ -21,12 +21,12 @@ import os
import errno
import signal
import socket
import subprocess # nosec

try:
    from gevent.pywsgi import WSGIServer
    from .gevent_wsgi import MyWSGIHandler
    from gevent.pool import Pool
    from gevent.socket import socket as GeventSocket
    from gevent import __version__ as _version
    from greenlet import GreenletExit
    import ssl
@@ -36,7 +36,6 @@ except ImportError:
    from .tornado_wsgi import MyWSGIContainer
    from tornado.httpserver import HTTPServer
    from tornado.ioloop import IOLoop
    from tornado import netutil
    from tornado import version as _version
    VERSION = 'Tornado ' + _version
    _GEVENT = False
@@ -96,13 +95,7 @@ class WebServer(object):
        log.warning('Cert path: %s', certfile_path)
        log.warning('Key path: %s', keyfile_path)

    @staticmethod
    def _make_gevent_socket_activated():
        # Reuse an already open socket on fd=SD_LISTEN_FDS_START
        SD_LISTEN_FDS_START = 3
        return GeventSocket(fileno=SD_LISTEN_FDS_START)

    def _prepare_unix_socket(self, socket_file):
    def _make_gevent_unix_socket(self, socket_file):
        # the socket file must not exist prior to bind()
        if os.path.exists(socket_file):
            # avoid nuking regular files and symbolic links (could be a mistype or security issue)
@@ -110,41 +103,35 @@ class WebServer(object):
                raise OSError(errno.EEXIST, os.strerror(errno.EEXIST), socket_file)
            os.remove(socket_file)

        unix_sock = WSGIServer.get_listener(socket_file, family=socket.AF_UNIX)
        self.unix_socket_file = socket_file

    def _make_gevent_listener(self):
        # ensure current user and group have r/w permissions, no permissions for other users
        # this way the socket can be shared in a semi-secure manner
        # between the user running calibre-web and the user running the fronting webserver
        os.chmod(socket_file, 0o660)

        return unix_sock

    def _make_gevent_socket(self):
        if os.name != 'nt':
            socket_activated = os.environ.get("LISTEN_FDS")
            if socket_activated:
                sock = self._make_gevent_socket_activated()
                sock_info = sock.getsockname()
                return sock, "systemd-socket:" + _readable_listen_address(sock_info[0], sock_info[1])
            unix_socket_file = os.environ.get("CALIBRE_UNIX_SOCKET")
            if unix_socket_file:
                self._prepare_unix_socket(unix_socket_file)
                unix_sock = WSGIServer.get_listener(unix_socket_file, family=socket.AF_UNIX)
                # ensure current user and group have r/w permissions, no permissions for other users
                # this way the socket can be shared in a semi-secure manner
                # between the user running calibre-web and the user running the fronting webserver
                os.chmod(unix_socket_file, 0o660)

                return unix_sock, "unix:" + unix_socket_file
                return self._make_gevent_unix_socket(unix_socket_file), "unix:" + unix_socket_file

        if self.listen_address:
            return ((self.listen_address, self.listen_port),
                    _readable_listen_address(self.listen_address, self.listen_port))
            return (self.listen_address, self.listen_port), None

        if os.name == 'nt':
            self.listen_address = '0.0.0.0'
            return ((self.listen_address, self.listen_port),
                    _readable_listen_address(self.listen_address, self.listen_port))
            return (self.listen_address, self.listen_port), None

        address = ('::', self.listen_port)
        try:
            address = ('::', self.listen_port)
            sock = WSGIServer.get_listener(address, family=socket.AF_INET6)
        except socket.error as ex:
            log.error('%s', ex)
            log.warning('Unable to listen on {}, trying on IPv4 only...'.format(address))
            log.warning('Unable to listen on "", trying on IPv4 only...')
            address = ('', self.listen_port)
            sock = WSGIServer.get_listener(address, family=socket.AF_INET)

@@ -165,7 +152,7 @@ class WebServer(object):
        # The value of __package__ indicates how Python was called. It may
        # not exist if a setuptools script is installed as an egg. It may be
        # set incorrectly for entry points created with pip on Windows.
        if getattr(__main__, "__package__", "") in ["", None] or (
        if getattr(__main__, "__package__", None) is None or (
            os.name == "nt"
            and __main__.__package__ == ""
            and not os.path.exists(py_script)
@@ -206,15 +193,15 @@ class WebServer(object):
            rv.extend(("-m", py_module.lstrip(".")))

        rv.extend(args)
        if os.name == 'nt':
            rv = ['"{}"'.format(a) for a in rv]
        return rv

    def _start_gevent(self):
        ssl_args = self.ssl_args or {}

        try:
            sock, output = self._make_gevent_listener()
            sock, output = self._make_gevent_socket()
            if output is None:
                output = _readable_listen_address(self.listen_address, self.listen_port)
            log.info('Starting Gevent server on %s', output)
            self.wsgiserver = WSGIServer(sock, self.app, log=self.access_logger, handler_class=MyWSGIHandler,
                                         error_log=log,
@@ -239,42 +226,17 @@ class WebServer(object):
        if os.name == 'nt' and sys.version_info > (3, 7):
            import asyncio
            asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
        try:
            # Max Buffersize set to 200MB
            http_server = HTTPServer(MyWSGIContainer(self.app),
                                     max_buffer_size=209700000,
                                     ssl_options=self.ssl_args)
            log.info('Starting Tornado server on %s', _readable_listen_address(self.listen_address, self.listen_port))

            unix_socket_file = os.environ.get("CALIBRE_UNIX_SOCKET")
            if os.environ.get("LISTEN_FDS") and os.name != 'nt':
                SD_LISTEN_FDS_START = 3
                sock = socket.socket(fileno=SD_LISTEN_FDS_START)
                http_server.add_socket(sock)
                sock.setblocking(0)
                socket_name = sock.getsockname()
                output = "systemd-socket:" + _readable_listen_address(socket_name[0], socket_name[1])
            elif unix_socket_file and os.name != 'nt':
                self._prepare_unix_socket(unix_socket_file)
                output = "unix:" + unix_socket_file
                unix_socket = netutil.bind_unix_socket(self.unix_socket_file)
                http_server.add_socket(unix_socket)
                # ensure current user and group have r/w permissions, no permissions for other users
                # this way the socket can be shared in a semi-secure manner
                # between the user running calibre-web and the user running the fronting webserver
                os.chmod(self.unix_socket_file, 0o660)
            else:
                output = _readable_listen_address(self.listen_address, self.listen_port)
                http_server.listen(self.listen_port, self.listen_address)
            log.info('Starting Tornado server on %s', output)

            self.wsgiserver = IOLoop.current()
            self.wsgiserver.start()
            # wait for stop signal
            self.wsgiserver.close(True)
        finally:
            if self.unix_socket_file:
                os.remove(self.unix_socket_file)
                self.unix_socket_file = None
        # Max Buffersize set to 200MB
        http_server = HTTPServer(MyWSGIContainer(self.app),
                                 max_buffer_size=209700000,
                                 ssl_options=self.ssl_args)
        http_server.listen(self.listen_port, self.listen_address)
        self.wsgiserver = IOLoop.current()
        self.wsgiserver.start()
        # wait for stop signal
        self.wsgiserver.close(True)

    def start(self):
        try:
@@ -300,14 +262,8 @@ class WebServer(object):

        log.info("Performing restart of Calibre-Web")
        args = self._get_args_for_reloading()
        os.execv(args[0].lstrip('"').rstrip('"'), args)

    @staticmethod
    def shutdown_scheduler():
        from .services.background_scheduler import BackgroundScheduler
        scheduler = BackgroundScheduler()
        if scheduler:
            scheduler.scheduler.shutdown()
        subprocess.call(args, close_fds=True)  # nosec
        return True

    def _killServer(self, __, ___):
        self.stop()
@@ -317,14 +273,9 @@ class WebServer(object):
        updater_thread.stop()

        log.info("webserver stop (restart=%s)", restart)
        self.shutdown_scheduler()
        self.restart = restart
        if self.wsgiserver:
            if _GEVENT:
                self.wsgiserver.close()
            else:
                if restart:
                    self.wsgiserver.call_later(1.0, self.wsgiserver.stop)
                else:
                    self.wsgiserver.asyncio_loop.call_soon_threadsafe(self.wsgiserver.stop)

                self.wsgiserver.add_callback_from_signal(self.wsgiserver.stop)
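The server.py hunks above add systemd socket activation: instead of binding a port itself, the server adopts an already-open listener that systemd passes in at fd 3 (`SD_LISTEN_FDS_START`), signalled by the `LISTEN_FDS` environment variable. A minimal sketch of the adoption step; since no real systemd is involved here, we duplicate our own socket's fd to simulate the inherited one:

```python
import os
import socket

# Stand-in for a socket inherited from systemd: with socket activation the
# daemon finds an already-bound listener at fd 3 and LISTEN_FDS set in the
# environment; here we dup our own fd to simulate that inheritance.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)

inherited_fd = os.dup(listener.fileno())
adopted = socket.socket(fileno=inherited_fd)  # adopt without bind()/listen()

# The adopted object wraps the same underlying listener
same_address = adopted.getsockname() == listener.getsockname()
adopted.close()
listener.close()
```

`GeventSocket(fileno=...)` in the diff does the same thing with gevent's cooperative socket class.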
@@ -19,9 +19,11 @@

import sys
from base64 import b64decode, b64encode
from jsonschema import validate, exceptions
from jsonschema import validate, exceptions, __version__
from datetime import datetime

from urllib.parse import unquote

from flask import json
from .. import logger

@@ -30,10 +32,10 @@ log = logger.create()


def b64encode_json(json_data):
    return b64encode(json.dumps(json_data).encode()).decode("utf-8")
    return b64encode(json.dumps(json_data).encode())


# Python3 has a timestamp() method we could be calling, however it's not available in python2.
# Python3 has a timestamp() method we could be calling, however it's not avaiable in python2.
def to_epoch_timestamp(datetime_object):
    return (datetime_object - datetime(1970, 1, 1)).total_seconds()

@@ -47,7 +49,7 @@ def get_datetime_from_json(json_object, field_name):


class SyncToken:
    """ The SyncToken is used to persist state across requests.
    """ The SyncToken is used to persist state accross requests.
        When serialized over the response headers, the Kobo device will propagate the token onto following
        requests to the service. As an example use-case, the SyncToken is used to detect books that have been added
        to the library since the last time the device synced to the server.
@@ -173,8 +175,8 @@ class SyncToken:

    def __str__(self):
        return "{},{},{},{},{},{}".format(self.books_last_created,
                                          self.books_last_modified,
                                          self.archive_last_modified,
                                          self.reading_state_last_modified,
                                          self.tags_last_modified,
                                          self.raw_kobo_store_token)
            self.books_last_modified,
            self.archive_last_modified,
            self.reading_state_last_modified,
            self.tags_last_modified,
            self.raw_kobo_store_token)
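The `b64encode_json` change above is subtle: `b64encode` returns `bytes`, so master appends `.decode("utf-8")` to get a `str` that can go straight into an HTTP header. A small sketch of master's version with a made-up payload:

```python
import json
from base64 import b64decode, b64encode


def b64encode_json(json_data):
    # master's version: decoding yields a str suitable for a response header;
    # without .decode() the caller would receive raw bytes
    return b64encode(json.dumps(json_data).encode()).decode("utf-8")


token = b64encode_json({"ver": "1-1-0"})  # hypothetical token content
assert isinstance(token, str)
# b64decode accepts the str and the JSON round-trips
assert json.loads(b64decode(token)) == {"ver": "1-1-0"}
```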
@@ -23,8 +23,6 @@ from .worker import WorkerThread

try:
    from apscheduler.schedulers.background import BackgroundScheduler as BScheduler
    from apscheduler.triggers.cron import CronTrigger
    from apscheduler.triggers.date import DateTrigger
    use_APScheduler = True
except (ImportError, RuntimeError) as e:
    use_APScheduler = False
@@ -42,37 +40,38 @@ class BackgroundScheduler:
        if cls._instance is None:
            cls._instance = super(BackgroundScheduler, cls).__new__(cls)
            cls.log = logger.create()
            logger.logging.getLogger('tzlocal').setLevel(logger.logging.WARNING)
            cls.scheduler = BScheduler()
            cls.scheduler.start()

            atexit.register(lambda: cls.scheduler.shutdown())

        return cls._instance

    def schedule(self, func, trigger, name=None):
    def schedule(self, func, trigger, name=None, **trigger_args):
        if use_APScheduler:
            return self.scheduler.add_job(func=func, trigger=trigger, name=name)
            return self.scheduler.add_job(func=func, trigger=trigger, name=name, **trigger_args)

    # Expects a lambda expression for the task
    def schedule_task(self, task, user=None, name=None, hidden=False, trigger=None):
    def schedule_task(self, task, user=None, name=None, hidden=False, trigger='cron', **trigger_args):
        if use_APScheduler:
            def scheduled_task():
                worker_task = task()
                worker_task.scheduled = True
                WorkerThread.add(user, worker_task, hidden=hidden)
            return self.schedule(func=scheduled_task, trigger=trigger, name=name)
            return self.schedule(func=scheduled_task, trigger=trigger, name=name, **trigger_args)

    # Expects a list of lambda expressions for the tasks
    def schedule_tasks(self, tasks, user=None, trigger=None):
    def schedule_tasks(self, tasks, user=None, trigger='cron', **trigger_args):
        if use_APScheduler:
            for task in tasks:
                self.schedule_task(task[0], user=user, trigger=trigger, name=task[1], hidden=task[2])
                self.schedule_task(task[0], user=user, trigger=trigger, name=task[1], hidden=task[2], **trigger_args)

    # Expects a lambda expression for the task
    def schedule_task_immediately(self, task, user=None, name=None, hidden=False):
        if use_APScheduler:
            def immediate_task():
                WorkerThread.add(user, task(), hidden)
            return self.schedule(func=immediate_task, trigger=DateTrigger(), name=name)
            return self.schedule(func=immediate_task, trigger='date', name=name)

    # Expects a list of lambda expressions for the tasks
    def schedule_tasks_immediately(self, tasks, user=None):
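The scheduler hunks above move from 0.6.19's string trigger aliases plus `**trigger_args` (e.g. `trigger='cron', hour=2`) to master's prebuilt trigger objects (`CronTrigger`, `DateTrigger`). The kwargs-forwarding pattern is easy to see without APScheduler installed; the sketch below uses a stub in place of APScheduler's `BackgroundScheduler`, so the class and its behavior here are illustrative only:

```python
class StubScheduler:
    """Stands in for APScheduler's BackgroundScheduler (assumption: only the
    add_job signature matters for this illustration)."""

    def add_job(self, func, trigger, name=None, **trigger_args):
        # With a string alias like 'cron', APScheduler builds the trigger from
        # the leftover keyword arguments; here we just surface them
        return (trigger, trigger_args)


sched = StubScheduler()
# 'hour' and 'minute' fall through **trigger_args all the way to add_job
job = sched.add_job(lambda: None, trigger='cron', name='backup', hour=2, minute=30)
assert job == ('cron', {'hour': 2, 'minute': 30})
```

Passing trigger objects instead, as master does, keeps the trigger's configuration explicit at the call site rather than smeared across `**kwargs`.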
@@ -36,7 +36,6 @@ SCOPES = ['openid', 'https://www.googleapis.com/auth/gmail.send', 'https://www.g
def setup_gmail(token):
    # If there are no (valid) credentials available, let the user log in.
    creds = None
    user_info = None
    if "token" in token:
        creds = Credentials(
            token=token['token'],
@@ -92,7 +91,7 @@ def send_messsage(token, msg):
    if creds and creds.expired and creds.refresh_token:
        creds.refresh(Request())
    service = build('gmail', 'v1', credentials=creds)
    message_as_bytes = msg.as_bytes()  # the message should be converted from string to bytes.
    message_as_bytes = msg.as_bytes()  # the message should converted from string to bytes.
    message_as_base64 = base64.urlsafe_b64encode(message_as_bytes)  # encode in base64 (printable letters coding)
    raw = message_as_base64.decode()  # convert to something JSON serializable
    body = {'raw': raw}
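The `send_messsage` hunk above builds the `{'raw': ...}` body the Gmail API expects: the MIME message serialized to bytes, URL-safe base64 encoded, then decoded back to a JSON-serializable string. A self-contained sketch of just that encoding step, with a hypothetical recipient and subject:

```python
import base64
from email.message import EmailMessage

# Hypothetical message; only the encoding pipeline matters here
msg = EmailMessage()
msg["To"] = "reader@example.com"
msg["Subject"] = "Your book"
msg.set_content("Enjoy!")

# bytes -> urlsafe base64 -> str, exactly the shape users.messages.send wants
raw = base64.urlsafe_b64encode(msg.as_bytes()).decode()
body = {"raw": raw}
```

URL-safe base64 (with `-` and `_` instead of `+` and `/`) is required because the value travels inside a JSON request body.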
@@ -18,51 +18,16 @@

import time
from functools import reduce
import requests

from goodreads.client import GoodreadsClient
from goodreads.request import GoodreadsRequest
import xmltodict

try:
    import Levenshtein
    from goodreads.client import GoodreadsClient
except ImportError:
    Levenshtein = False
    from betterreads.client import GoodreadsClient

try: import Levenshtein
except ImportError: Levenshtein = False

from .. import logger
from ..clean_html import clean_string


class my_GoodreadsClient(GoodreadsClient):

    def request(self, *args, **kwargs):
        """Create a GoodreadsRequest object and make that request"""
        req = my_GoodreadsRequest(self, *args, **kwargs)
        return req.request()


class GoodreadsRequestException(Exception):
    def __init__(self, error_msg, url):
        self.error_msg = error_msg
        self.url = url

    def __str__(self):
        return self.url, ':', self.error_msg


class my_GoodreadsRequest(GoodreadsRequest):

    def request(self):
        resp = requests.get(self.host+self.path, params=self.params,
                            headers={"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:125.0) "
                                                   "Gecko/20100101 Firefox/125.0"})
        if resp.status_code != 200:
            raise GoodreadsRequestException(resp.reason, self.path)
        if self.req_format == 'xml':
            data_dict = xmltodict.parse(resp.content)
            return data_dict['GoodreadsResponse']
        else:
            raise Exception("Invalid format")


log = logger.create()
@@ -73,20 +38,20 @@ _CACHE_TIMEOUT = 23 * 60 * 60  # 23 hours (in seconds)
_AUTHORS_CACHE = {}


def connect(key=None, enabled=True):
def connect(key=None, secret=None, enabled=True):
    global _client

    if not enabled or not key:
    if not enabled or not key or not secret:
        _client = None
        return

    if _client:
        # make sure the configuration has not changed since last we used the client
        if _client.client_key != key:
        if _client.client_key != key or _client.client_secret != secret:
            _client = None

    if not _client:
        _client = my_GoodreadsClient(key, None)
        _client = GoodreadsClient(key, secret)


def get_author_info(author_name):
@@ -111,13 +76,12 @@ def get_author_info(author_name):

    if author_info:
        author_info._timestamp = now
        author_info.safe_about = clean_string(author_info.about)
        _AUTHORS_CACHE[author_name] = author_info
    return author_info


def get_other_books(author_info, library_books=None):
    # Get all identifiers (ISBN, Goodreads, etc.) and filter author's books by that list so we show fewer duplicates
    # Get all identifiers (ISBN, Goodreads, etc) and filter author's books by that list so we show fewer duplicates
    # Note: Not all images will be shown, even though they're available on Goodreads.com.
    # See https://www.goodreads.com/topic/show/18213769-goodreads-book-images

@@ -127,8 +91,7 @@ def get_other_books(author_info, library_books=None):
    identifiers = []
    library_titles = []
    if library_books:
        identifiers = list(
            reduce(lambda acc, book: acc + [i.val for i in book.identifiers if i.val], library_books, []))
        identifiers = list(reduce(lambda acc, book: acc + [i.val for i in book.identifiers if i.val], library_books, []))
        library_titles = [book.title for book in library_books]

    for book in author_info.books:
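The `reduce` expression in `get_other_books` above flattens every book's identifier values into one list, skipping falsy ones. A runnable sketch with stub book objects (the `identifiers`/`val` shape mirrors the attributes the lambda touches; the values are made up):

```python
from functools import reduce
from types import SimpleNamespace

# Stub books shaped like Calibre rows: each has .identifiers, each of those .val
book1 = SimpleNamespace(identifiers=[SimpleNamespace(val="9780131103627"),
                                     SimpleNamespace(val=None)])   # None is filtered out
book2 = SimpleNamespace(identifiers=[SimpleNamespace(val="goodreads:12345")])

identifiers = list(reduce(
    lambda acc, book: acc + [i.val for i in book.identifiers if i.val],
    [book1, book2], []))

assert identifiers == ["9780131103627", "goodreads:12345"]
```

A nested list comprehension (`[i.val for b in books for i in b.identifiers if i.val]`) would do the same job with less machinery; the diff only reflows the existing `reduce` across two lines.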
@@ -20,7 +20,6 @@ import base64

from flask_simpleldap import LDAP, LDAPException
from flask_simpleldap import ldap as pyLDAP
from flask import current_app
from .. import constants, logger

try:
@@ -29,49 +28,7 @@ except ImportError:
    pass

log = logger.create()


class LDAPLogger(object):

    @staticmethod
    def write(message):
        try:
            log.debug(message.strip("\n").replace("\n", ""))
        except Exception:
            log.debug("Logging Error")


class mySimpleLDap(LDAP):

    @staticmethod
    def init_app(app):
        super(mySimpleLDap, mySimpleLDap).init_app(app)
        app.config.setdefault('LDAP_LOGLEVEL', 0)

    @property
    def initialize(self):
        """Initialize a connection to the LDAP server.

        :return: LDAP connection object.
        """
        try:
            log_level = 2 if current_app.config['LDAP_LOGLEVEL'] == logger.logging.DEBUG else 0
            conn = pyLDAP.initialize('{0}://{1}:{2}'.format(
                current_app.config['LDAP_SCHEMA'],
                current_app.config['LDAP_HOST'],
                current_app.config['LDAP_PORT']), trace_level=log_level, trace_file=LDAPLogger())
            conn.set_option(pyLDAP.OPT_NETWORK_TIMEOUT,
                            current_app.config['LDAP_TIMEOUT'])
            conn = self._set_custom_options(conn)
            conn.protocol_version = pyLDAP.VERSION3
            if current_app.config['LDAP_USE_TLS']:
                conn.start_tls_s()
            return conn
        except pyLDAP.LDAPError as e:
            raise LDAPException(self.error(e.args))


_ldap = mySimpleLDap()
_ldap = LDAP()


def init_app(app, config):
@@ -87,15 +44,15 @@ def init_app(app, config):
        app.config['LDAP_SCHEMA'] = 'ldap'
    if config.config_ldap_authentication > constants.LDAP_AUTH_ANONYMOUS:
    if config.config_ldap_authentication > constants.LDAP_AUTH_UNAUTHENTICATE:
        if config.config_ldap_serv_password_e is None:
            config.config_ldap_serv_password_e = ''
        app.config['LDAP_PASSWORD'] = config.config_ldap_serv_password_e
        if config.config_ldap_serv_password is None:
            config.config_ldap_serv_password = ''
        app.config['LDAP_PASSWORD'] = base64.b64decode(config.config_ldap_serv_password)
    else:
        app.config['LDAP_PASSWORD'] = ""
        app.config['LDAP_PASSWORD'] = base64.b64decode("")
        app.config['LDAP_USERNAME'] = config.config_ldap_serv_username
    else:
        app.config['LDAP_USERNAME'] = ""
        app.config['LDAP_PASSWORD'] = ""
        app.config['LDAP_PASSWORD'] = base64.b64decode("")
    if bool(config.config_ldap_cert_path):
        app.config['LDAP_CUSTOM_OPTIONS'].update({
            pyLDAP.OPT_X_TLS_REQUIRE_CERT: pyLDAP.OPT_X_TLS_DEMAND,
@@ -113,7 +70,7 @@ def init_app(app, config):
    app.config['LDAP_OPENLDAP'] = bool(config.config_ldap_openldap)
    app.config['LDAP_GROUP_OBJECT_FILTER'] = config.config_ldap_group_object_filter
    app.config['LDAP_GROUP_MEMBERS_FIELD'] = config.config_ldap_group_members_field
    app.config['LDAP_LOGLEVEL'] = config.config_log_level

    try:
        _ldap.init_app(app)
    except ValueError:
@@ -127,7 +84,7 @@ def init_app(app, config):
    log.error(e)


def get_object_details(user=None, query_filter=None):
def get_object_details(user=None,query_filter=None):
    return _ldap.get_object_details(user, query_filter=query_filter)
@@ -199,7 +199,7 @@ class CalibreTask:
            self.run(*args)
        except Exception as ex:
            self._handleError(str(ex))
            log.exception(ex)
            log.error_or_exception(ex)

        self.end_time = datetime.now()

@@ -235,7 +235,7 @@ class CalibreTask:

    @property
    def dead(self):
        """Determines whether this task can be garbage collected
        """Determines whether or not this task can be garbage collected

        We have a separate dictating this because there may be certain tasks that want to override this
        """
@@ -266,6 +266,3 @@ class CalibreTask:
    def _handleSuccess(self):
        self.stat = STAT_FINISH_SUCCESS
        self.progress = 1

    def __str__(self):
        return self.name
cps/shelf.py (167 changed lines)
@@ -21,17 +21,17 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import sys
from datetime import datetime, timezone
from datetime import datetime

from flask import Blueprint, flash, redirect, request, url_for, abort
from flask_babel import gettext as _
from .cw_login import current_user
from flask_login import current_user, login_required
from sqlalchemy.exc import InvalidRequestError, OperationalError
from sqlalchemy.sql.expression import func, true

from . import calibre_db, config, db, logger, ub
from .render_template import render_title_template
from .usermanagement import login_required_if_no_ano, user_login_required
from .usermanagement import login_required_if_no_ano

log = logger.create()

@@ -39,20 +39,20 @@ shelf = Blueprint('shelf', __name__)


@shelf.route("/shelf/add/<int:shelf_id>/<int:book_id>", methods=["POST"])
@user_login_required
@login_required
def add_to_shelf(shelf_id, book_id):
    xhr = request.headers.get('X-Requested-With') == 'XMLHttpRequest'
    shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
    if shelf is None:
        log.error("Invalid shelf specified: %s", shelf_id)
        if not xhr:
            flash(_("Invalid shelf specified"), category="error")
            flash(_(u"Invalid shelf specified"), category="error")
            return redirect(url_for('web.index'))
        return "Invalid shelf specified", 400

    if not check_shelf_edit_permissions(shelf):
        if not xhr:
            flash(_("Sorry you are not allowed to add a book to that shelf"), category="error")
            flash(_(u"Sorry you are not allowed to add a book to that shelf"), category="error")
            return redirect(url_for('web.index'))
        return "Sorry you are not allowed to add a book to the that shelf", 403

@@ -61,7 +61,7 @@ def add_to_shelf(shelf_id, book_id):
    if book_in_shelf:
        log.error("Book %s is already part of %s", book_id, shelf)
        if not xhr:
            flash(_("Book is already part of the shelf: %(shelfname)s", shelfname=shelf.name), category="error")
            flash(_(u"Book is already part of the shelf: %(shelfname)s", shelfname=shelf.name), category="error")
            return redirect(url_for('web.index'))
        return "Book is already part of the shelf: %s" % shelf.name, 400

@@ -71,30 +71,22 @@ def add_to_shelf(shelf_id, book_id):
    else:
        maxOrder = maxOrder[0]

    if not calibre_db.session.query(db.Books).filter(db.Books.id == book_id).one_or_none():
        log.error("Invalid Book Id: %s. Could not be added to shelf %s", book_id, shelf.name)
        if not xhr:
            flash(_("%(book_id)s is a invalid Book Id. Could not be added to Shelf", book_id=book_id),
                  category="error")
            return redirect(url_for('web.index'))
        return "%s is a invalid Book Id. Could not be added to Shelf" % book_id, 400

    shelf.books.append(ub.BookShelf(shelf=shelf.id, book_id=book_id, order=maxOrder + 1))
    shelf.last_modified = datetime.now(timezone.utc)
    shelf.last_modified = datetime.utcnow()
    try:
        ub.session.merge(shelf)
        ub.session.commit()
    except (OperationalError, InvalidRequestError) as e:
        ub.session.rollback()
        log.error_or_exception("Settings Database error: {}".format(e))
        flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
        flash(_(u"Database error: %(error)s.", error=e.orig), category="error")
        if "HTTP_REFERER" in request.environ:
            return redirect(request.environ["HTTP_REFERER"])
        else:
            return redirect(url_for('web.index'))
    if not xhr:
        log.debug("Book has been added to shelf: {}".format(shelf.name))
        flash(_("Book has been added to shelf: %(sname)s", sname=shelf.name), category="success")
        flash(_(u"Book has been added to shelf: %(sname)s", sname=shelf.name), category="success")
        if "HTTP_REFERER" in request.environ:
            return redirect(request.environ["HTTP_REFERER"])
        else:
@@ -103,17 +95,17 @@ def add_to_shelf(shelf_id, book_id):


@shelf.route("/shelf/massadd/<int:shelf_id>", methods=["POST"])
@user_login_required
@login_required
def search_to_shelf(shelf_id):
    shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
    if shelf is None:
        log.error("Invalid shelf specified: {}".format(shelf_id))
        flash(_("Invalid shelf specified"), category="error")
        flash(_(u"Invalid shelf specified"), category="error")
        return redirect(url_for('web.index'))

    if not check_shelf_edit_permissions(shelf):
        log.warning("You are not allowed to add a book to the shelf".format(shelf.name))
        flash(_("You are not allowed to add a book to the shelf"), category="error")
        flash(_(u"You are not allowed to add a book to the shelf"), category="error")
        return redirect(url_for('web.index'))

    if current_user.id in ub.searched_ids and ub.searched_ids[current_user.id]:
@@ -131,7 +123,7 @@ def search_to_shelf(shelf_id):

    if not books_for_shelf:
        log.error("Books are already part of {}".format(shelf.name))
        flash(_("Books are already part of the shelf: %(name)s", name=shelf.name), category="error")
        flash(_(u"Books are already part of the shelf: %(name)s", name=shelf.name), category="error")
        return redirect(url_for('web.index'))

    maxOrder = ub.session.query(func.max(ub.BookShelf.order)).filter(ub.BookShelf.shelf == shelf_id).first()[0] or 0
@@ -139,23 +131,23 @@ def search_to_shelf(shelf_id):
    for book in books_for_shelf:
        maxOrder += 1
        shelf.books.append(ub.BookShelf(shelf=shelf.id, book_id=book, order=maxOrder))
    shelf.last_modified = datetime.now(timezone.utc)
    shelf.last_modified = datetime.utcnow()
    try:
        ub.session.merge(shelf)
        ub.session.commit()
        flash(_("Books have been added to shelf: %(sname)s", sname=shelf.name), category="success")
        flash(_(u"Books have been added to shelf: %(sname)s", sname=shelf.name), category="success")
    except (OperationalError, InvalidRequestError) as e:
        ub.session.rollback()
        log.error_or_exception("Settings Database error: {}".format(e))
        flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
        flash(_(u"Database error: %(error)s.", error=e.orig), category="error")
    else:
        log.error("Could not add books to shelf: {}".format(shelf.name))
        flash(_("Could not add books to shelf: %(sname)s", sname=shelf.name), category="error")
        flash(_(u"Could not add books to shelf: %(sname)s", sname=shelf.name), category="error")
    return redirect(url_for('web.index'))


@shelf.route("/shelf/remove/<int:shelf_id>/<int:book_id>", methods=["POST"])
@user_login_required
@login_required
def remove_from_shelf(shelf_id, book_id):
    xhr = request.headers.get('X-Requested-With') == 'XMLHttpRequest'
    shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
@@ -185,18 +177,18 @@ def remove_from_shelf(shelf_id, book_id):

    try:
        ub.session.delete(book_shelf)
        shelf.last_modified = datetime.now(timezone.utc)
        shelf.last_modified = datetime.utcnow()
        ub.session.commit()
    except (OperationalError, InvalidRequestError) as e:
        ub.session.rollback()
        log.error_or_exception("Settings Database error: {}".format(e))
        flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
        flash(_(u"Database error: %(error)s.", error=e.orig), category="error")
        if "HTTP_REFERER" in request.environ:
            return redirect(request.environ["HTTP_REFERER"])
        else:
            return redirect(url_for('web.index'))
    if not xhr:
        flash(_("Book has been removed from shelf: %(sname)s", sname=shelf.name), category="success")
        flash(_(u"Book has been removed from shelf: %(sname)s", sname=shelf.name), category="success")
        if "HTTP_REFERER" in request.environ:
            return redirect(request.environ["HTTP_REFERER"])
        else:
@@ -205,31 +197,31 @@ def remove_from_shelf(shelf_id, book_id):
    else:
        if not xhr:
            log.warning("You are not allowed to remove a book from shelf: {}".format(shelf.name))
            flash(_("Sorry you are not allowed to remove a book from this shelf"),
            flash(_(u"Sorry you are not allowed to remove a book from this shelf"),
                  category="error")
            return redirect(url_for('web.index'))
        return "Sorry you are not allowed to remove a book from this shelf", 403


@shelf.route("/shelf/create", methods=["GET", "POST"])
@user_login_required
@login_required
def create_shelf():
    shelf = ub.Shelf()
    return create_edit_shelf(shelf, page_title=_("Create a Shelf"), page="shelfcreate")
    return create_edit_shelf(shelf, page_title=_(u"Create a Shelf"), page="shelfcreate")


@shelf.route("/shelf/edit/<int:shelf_id>", methods=["GET", "POST"])
@user_login_required
@login_required
def edit_shelf(shelf_id):
    shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
    if not check_shelf_edit_permissions(shelf):
        flash(_("Sorry you are not allowed to edit this shelf"), category="error")
        flash(_(u"Sorry you are not allowed to edit this shelf"), category="error")
        return redirect(url_for('web.index'))
    return create_edit_shelf(shelf, page_title=_("Edit a shelf"), page="shelfedit", shelf_id=shelf_id)
    return create_edit_shelf(shelf, page_title=_(u"Edit a shelf"), page="shelfedit", shelf_id=shelf_id)


@shelf.route("/shelf/delete/<int:shelf_id>", methods=["POST"])
@user_login_required
@login_required
def delete_shelf(shelf_id):
    cur_shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
    try:
@@ -240,7 +232,7 @@ def delete_shelf(shelf_id):
    except InvalidRequestError as e:
        ub.session.rollback()
        log.error_or_exception("Settings Database error: {}".format(e))
        flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
|
||||
flash(_(u"Database error: %(error)s.", error=e.orig), category="error")
|
||||
return redirect(url_for('web.index'))
|
||||
|
||||
|
||||
@ -250,7 +242,7 @@ def show_simpleshelf(shelf_id):
|
||||
return render_show_shelf(2, shelf_id, 1, None)
|
||||
|
||||
|
||||
@shelf.route("/shelf/<int:shelf_id>", defaults={"sort_param": "stored", 'page': 1})
|
||||
@shelf.route("/shelf/<int:shelf_id>", defaults={"sort_param": "order", 'page': 1})
|
||||
@shelf.route("/shelf/<int:shelf_id>/<sort_param>", defaults={'page': 1})
|
||||
@shelf.route("/shelf/<int:shelf_id>/<sort_param>/<int:page>")
|
||||
@login_required_if_no_ano
|
||||
@ -259,7 +251,7 @@ def show_shelf(shelf_id, sort_param, page):
|
||||
|
||||
|
||||
@shelf.route("/shelf/order/<int:shelf_id>", methods=["GET", "POST"])
|
||||
@user_login_required
|
||||
@login_required
|
||||
def order_shelf(shelf_id):
|
||||
shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
|
||||
if shelf and check_shelf_view_permissions(shelf):
|
||||
@ -271,13 +263,13 @@ def order_shelf(shelf_id):
|
||||
for book in books_in_shelf:
|
||||
setattr(book, 'order', to_save[str(book.book_id)])
|
||||
counter += 1
|
||||
# if order different from before -> shelf.last_modified = datetime.now(timezone.utc)
|
||||
# if order different from before -> shelf.last_modified = datetime.utcnow()
|
||||
try:
|
||||
ub.session.commit()
|
||||
except (OperationalError, InvalidRequestError) as e:
|
||||
ub.session.rollback()
|
||||
log.error_or_exception("Settings Database error: {}".format(e))
|
||||
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
|
||||
flash(_(u"Database error: %(error)s.", error=e.orig), category="error")
|
||||
|
||||
result = list()
|
||||
if shelf:
|
||||
@ -286,7 +278,7 @@ def order_shelf(shelf_id):
|
||||
.add_columns(calibre_db.common_filters().label("visible")) \
|
||||
.filter(ub.BookShelf.shelf == shelf_id).order_by(ub.BookShelf.order.asc()).all()
|
||||
return render_title_template('shelf_order.html', entries=result,
|
||||
title=_("Change order of Shelf: '%(name)s'", name=shelf.name),
|
||||
title=_(u"Change order of Shelf: '%(name)s'", name=shelf.name),
|
||||
shelf=shelf, page="shelforder")
|
||||
else:
|
||||
abort(404)
|
||||
@ -303,14 +295,11 @@ def check_shelf_edit_permissions(cur_shelf):
|
||||
|
||||
|
||||
def check_shelf_view_permissions(cur_shelf):
|
||||
try:
|
||||
if cur_shelf.is_public:
|
||||
return True
|
||||
if current_user.is_anonymous or cur_shelf.user_id != current_user.id:
|
||||
log.error("User is unauthorized to view non-public shelf: {}".format(cur_shelf.name))
|
||||
return False
|
||||
except Exception as e:
|
||||
log.error(e)
|
||||
if cur_shelf.is_public:
|
||||
return True
|
||||
if current_user.is_anonymous or cur_shelf.user_id != current_user.id:
|
||||
log.error("User is unauthorized to view non-public shelf: {}".format(cur_shelf.name))
|
||||
return False
|
||||
return True
|
||||
|
||||
|
||||
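The hunk above shows the two variants of `check_shelf_view_permissions`: one wraps the attribute checks in a broad `try/except`, the other checks the same conditions directly. A minimal, self-contained sketch of the direct-check logic follows; the `Shelf` and `User` dataclasses here are hypothetical stand-ins for `ub.Shelf` and Flask-Login's `current_user`, not the Calibre-Web models:

```python
from dataclasses import dataclass


# Hypothetical stand-ins for ub.Shelf and Flask-Login's current_user.
@dataclass
class Shelf:
    name: str
    is_public: bool
    user_id: int


@dataclass
class User:
    id: int
    is_anonymous: bool = False


def check_shelf_view_permissions(cur_shelf: Shelf, current_user: User) -> bool:
    # A public shelf is visible to everyone.
    if cur_shelf.is_public:
        return True
    # A private shelf is visible only to its (non-anonymous) owner.
    if current_user.is_anonymous or cur_shelf.user_id != current_user.id:
        return False
    return True


assert check_shelf_view_permissions(Shelf("all", True, 1), User(id=2)) is True
assert check_shelf_view_permissions(Shelf("mine", False, 1), User(id=2)) is False
assert check_shelf_view_permissions(Shelf("mine", False, 1), User(id=1)) is True
```

Dropping the `try/except` trades defensive error swallowing for a clear failure if a shelf object is ever malformed; both variants return the same result for well-formed shelves.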
@@ -321,7 +310,7 @@ def create_edit_shelf(shelf, page_title, page, shelf_id=False):
if request.method == "POST":
to_save = request.form.to_dict()
if not current_user.role_edit_shelfs() and to_save.get("is_public") == "on":
flash(_("Sorry you are not allowed to create a public shelf"), category="error")
flash(_(u"Sorry you are not allowed to create a public shelf"), category="error")
return redirect(url_for('web.index'))
is_public = 1 if to_save.get("is_public") == "on" else 0
if config.config_kobo_sync:
@@ -338,24 +327,24 @@ def create_edit_shelf(shelf, page_title, page, shelf_id=False):
shelf.user_id = int(current_user.id)
ub.session.add(shelf)
shelf_action = "created"
flash_text = _("Shelf %(title)s created", title=shelf_title)
flash_text = _(u"Shelf %(title)s created", title=shelf_title)
else:
shelf_action = "changed"
flash_text = _("Shelf %(title)s changed", title=shelf_title)
flash_text = _(u"Shelf %(title)s changed", title=shelf_title)
try:
ub.session.commit()
log.info("Shelf {} {}".format(shelf_title, shelf_action))
log.info(u"Shelf {} {}".format(shelf_title, shelf_action))
flash(flash_text, category="success")
return redirect(url_for('shelf.show_shelf', shelf_id=shelf.id))
except (OperationalError, InvalidRequestError) as ex:
ub.session.rollback()
log.error_or_exception(ex)
log.error_or_exception("Settings Database error: {}".format(ex))
flash(_("Oops! Database Error: %(error)s.", error=ex.orig), category="error")
flash(_(u"Database error: %(error)s.", error=ex.orig), category="error")
except Exception as ex:
ub.session.rollback()
log.error_or_exception(ex)
flash(_("There was an error"), category="error")
flash(_(u"There was an error"), category="error")
return render_title_template('shelf_edit.html',
shelf=shelf,
title=page_title,
@@ -377,7 +366,7 @@ def check_shelf_is_unique(title, is_public, shelf_id=False):

if not is_shelf_name_unique:
log.error("A public shelf with the name '{}' already exists.".format(title))
flash(_("A public shelf with the name '%(title)s' already exists.", title=title),
flash(_(u"A public shelf with the name '%(title)s' already exists.", title=title),
category="error")
else:
is_shelf_name_unique = ub.session.query(ub.Shelf) \
@@ -388,7 +377,7 @@ def check_shelf_is_unique(title, is_public, shelf_id=False):

if not is_shelf_name_unique:
log.error("A private shelf with the name '{}' already exists.".format(title))
flash(_("A private shelf with the name '%(title)s' already exists.", title=title),
flash(_(u"A private shelf with the name '%(title)s' already exists.", title=title),
category="error")
return is_shelf_name_unique

@@ -418,37 +407,29 @@ def change_shelf_order(shelf_id, order):

def render_show_shelf(shelf_type, shelf_id, page_no, sort_param):
shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
status = current_user.get_view_property("shelf", 'man')

# check user is allowed to access shelf
if shelf and check_shelf_view_permissions(shelf):
if shelf_type == 1:
if status != 'on':
if sort_param == 'stored':
sort_param = current_user.get_view_property("shelf", 'stored')
else:
current_user.set_view_property("shelf", 'stored', sort_param)
if sort_param == 'pubnew':
change_shelf_order(shelf_id, [db.Books.pubdate.desc()])
if sort_param == 'pubold':
change_shelf_order(shelf_id, [db.Books.pubdate])
if sort_param == 'shelfnew':
change_shelf_order(shelf_id, [ub.BookShelf.date_added.desc()])
if sort_param == 'shelfold':
change_shelf_order(shelf_id, [ub.BookShelf.date_added])
if sort_param == 'abc':
change_shelf_order(shelf_id, [db.Books.sort])
if sort_param == 'zyx':
change_shelf_order(shelf_id, [db.Books.sort.desc()])
if sort_param == 'new':
change_shelf_order(shelf_id, [db.Books.timestamp.desc()])
if sort_param == 'old':
change_shelf_order(shelf_id, [db.Books.timestamp])
if sort_param == 'authaz':
change_shelf_order(shelf_id, [db.Books.author_sort.asc(), db.Series.name, db.Books.series_index])
if sort_param == 'authza':
change_shelf_order(shelf_id, [db.Books.author_sort.desc(),
db.Series.name.desc(),
db.Books.series_index.desc()])
# order = [ub.BookShelf.order.asc()]
if sort_param == 'pubnew':
change_shelf_order(shelf_id, [db.Books.pubdate.desc()])
if sort_param == 'pubold':
change_shelf_order(shelf_id, [db.Books.pubdate])
if sort_param == 'abc':
change_shelf_order(shelf_id, [db.Books.sort])
if sort_param == 'zyx':
change_shelf_order(shelf_id, [db.Books.sort.desc()])
if sort_param == 'new':
change_shelf_order(shelf_id, [db.Books.timestamp.desc()])
if sort_param == 'old':
change_shelf_order(shelf_id, [db.Books.timestamp])
if sort_param == 'authaz':
change_shelf_order(shelf_id, [db.Books.author_sort.asc(), db.Series.name, db.Books.series_index])
if sort_param == 'authza':
change_shelf_order(shelf_id, [db.Books.author_sort.desc(),
db.Series.name.desc(),
db.Books.series_index.desc()])
page = "shelf.html"
pagesize = 0
else:
@@ -461,7 +442,7 @@ def render_show_shelf(shelf_type, shelf_id, page_no, sort_param):
[ub.BookShelf.order.asc()],
True, config.config_read_column,
ub.BookShelf, ub.BookShelf.book_id == db.Books.id)
# delete shelf entries where book is not existent anymore, can happen if book is deleted outside calibre-web
# delete chelf entries where book is not existent anymore, can happen if book is deleted outside calibre-web
wrong_entries = calibre_db.session.query(ub.BookShelf) \
.join(db.Books, ub.BookShelf.book_id == db.Books.id, isouter=True) \
.filter(db.Books.id == None).all()
@@ -473,16 +454,14 @@ def render_show_shelf(shelf_type, shelf_id, page_no, sort_param):
except (OperationalError, InvalidRequestError) as e:
ub.session.rollback()
log.error_or_exception("Settings Database error: {}".format(e))
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
flash(_(u"Database error: %(error)s.", error=e.orig), category="error")

return render_title_template(page,
entries=result,
pagination=pagination,
title=_("Shelf: '%(name)s'", name=shelf.name),
title=_(u"Shelf: '%(name)s'", name=shelf.name),
shelf=shelf,
page="shelf",
status=status,
order=sort_param)
page="shelf")
else:
flash(_("Error opening shelf. Shelf does not exist or is not accessible"), category="error")
flash(_(u"Error opening shelf. Shelf does not exist or is not accessible"), category="error")
return redirect(url_for("web.index"))
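A change repeated throughout the hunks above swaps `datetime.utcnow()` for `datetime.now(timezone.utc)` when stamping `shelf.last_modified`. The difference is naive versus timezone-aware timestamps: `utcnow()` returns the UTC wall-clock time with no `tzinfo` attached (and is deprecated since Python 3.12), while `now(timezone.utc)` attaches the UTC offset. A minimal sketch of the behavioral difference:

```python
from datetime import datetime, timezone

naive = datetime.utcnow()           # naive UTC: tzinfo is None (deprecated since Python 3.12)
aware = datetime.now(timezone.utc)  # aware UTC: tzinfo carries the offset

assert naive.tzinfo is None
assert aware.tzinfo is timezone.utc

# Mixing the two styles fails at comparison time:
try:
    _ = naive < aware
except TypeError as err:
    print(err)  # can't compare offset-naive and offset-aware datetimes
```

Storing aware timestamps keeps later comparisons and Kobo-sync serialization unambiguous, since every stored value carries its UTC offset.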
@@ -1,108 +0,0 @@
body {
margin: 0;
}

nav {
height: 75px;
padding: 5px 20px;
width: 100%;
display: table;
box-sizing: border-box;
border-bottom: 1px solid black;
}

nav > * {
display: inline-block;
display: table-cell;
vertical-align: middle;
float: none;
text-align: center;
width: auto;
}

nav > *:first-child {
text-align: left;
width: 1%;
}

.theme{
text-align: center;
margin: 10px;
}
nav > *:last-child {
text-align: right;
width: 1%;
}

nav > a {
color: black;
margin: 0 20px;
}

.search {
margin: auto auto;
}

form > input {
width: 18ch;
padding-left: 4px;
}

form > * {
height: 50px;
background-color: white;
border-radius: 0;
border: 1px solid #ccc;
padding: 0;
margin: 0;
display: inline-block;
vertical-align: top;
}

form > span {
margin-left: -5px;
}

button {
border: none;
padding: 0 10px;
margin: 0;
width: 160px;
height: 100%;
background-color: white;
}

.body {
padding: 5px 20px;
}

a {
color: black;
}

img {
width: 150px;
height: 250px;
object-fit: cover;
}

.listing {
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
margin-right: 20px;
}

.pagination {
padding: 10px 0;
height: 20px;
font-weight: 700;
}

.pagination > div {
float: left;
}

.pagination > div:last-child {
float: right;
}
@@ -3268,10 +3268,6 @@ div.btn-group[role=group][aria-label="Download, send to Kindle, reading"] > div.
left: auto !important
}

ul.dropdown-menu.offscreen {
margin: 2px -160px 0
}

div.btn-group[role=group][aria-label="Download, send to Kindle, reading"] .dropdown-menu.offscreen {
position: fixed;
top: 120px;
@@ -3294,13 +3290,10 @@ div.btn-group[role=group][aria-label="Download, send to Kindle, reading"] .dropd
-ms-transform-origin: center top;
transform-origin: center top;
border: 0;
left: 0 !important;
overflow-y: auto;
}
.dropdown-menu:not(.datepicker-dropdown):not(.profileDropli) {
left: 0 !important;
}
#add-to-shelves {
min-height: 48px;
max-height: calc(100% - 120px);
overflow-y: auto;
}
@@ -4337,7 +4330,6 @@ body.advanced_search > div.container-fluid > div.row-fluid > div.col-sm-10 > div

.navbar-right > li > ul.dropdown-menu.offscreen {
right: -10px

}

.login .plexBack, body.login > div.container-fluid > div.row-fluid > div.col-sm-2, body.login > div.navbar.navbar-default.navbar-static-top > div > form {
@@ -4431,6 +4423,38 @@ body.advanced_search > div.container-fluid > div.row-fluid > div.col-sm-10 > div
left: 49px;
margin-top: 5px
}

body:not(.blur) > .navbar > .container-fluid > .navbar-header:after, body:not(.blur) > .navbar > .container-fluid > .navbar-header:before {
color: hsla(0, 0%, 100%, .7);
cursor: pointer;
display: block;
font-family: plex-icons-new, serif;
font-size: 20px;
font-stretch: 100%;
font-style: normal;
font-variant-caps: normal;
font-variant-east-asian: normal;
font-variant-numeric: normal;
font-weight: 400;
height: 60px;
letter-spacing: normal;
line-height: 60px;
position: absolute
}

body:not(.blur) > .navbar > .container-fluid > .navbar-header:before {
content: "\EA30";
-webkit-font-variant-ligatures: normal;
font-variant-ligatures: normal;
left: 20px
}

body:not(.blur) > .navbar > .container-fluid > .navbar-header:after {
content: "\EA2F";
-webkit-font-variant-ligatures: normal;
font-variant-ligatures: normal;
left: 60px
}
}

body.admin > div.container-fluid > div > div.col-sm-10 > div.container-fluid > div.row:first-of-type > div.col > h2:before, body.admin > div.container-fluid > div > div.col-sm-10 > div.discover > h2:first-of-type:before, body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div.discover > h1:before, body.newuser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div.discover > h1:before {
@@ -4818,14 +4842,8 @@ body.advsearch:not(.blur) > div.container-fluid > div.row-fluid > div.col-sm-10
z-index: 999999999999999999999999999999999999
}

body.search #shelf-actions button#add-to-shelf {
height: 40px;
}
@media screen and (max-width: 767px) {
body.search .discover, body.advsearch .discover {
display: flex;
flex-direction: column;
}
.search #shelf-actions, body.login .home-btn {
display: none
}

body.read:not(.blur) a[href*=readbooks] {
@@ -5146,7 +5164,7 @@ body.login > div.navbar.navbar-default.navbar-static-top > div > div.navbar-head
right: 5px
}

body:not(.search) #shelf-actions > .btn-group.open, .downloadBtn.open, .profileDrop[aria-expanded=true] {
#shelf-actions > .btn-group.open, .downloadBtn.open, .profileDrop[aria-expanded=true] {
pointer-events: none
}

@@ -5163,7 +5181,7 @@ body:not(.search) #shelf-actions > .btn-group.open, .downloadBtn.open, .profileD
color: var(--color-primary)
}

body:not(.search) #shelf-actions, body:not(.search) #shelf-actions > .btn-group, body:not(.search) #shelf-actions > .btn-group > .empty-ul {
#shelf-actions, #shelf-actions > .btn-group, #shelf-actions > .btn-group > .empty-ul {
pointer-events: none
}

@@ -7153,11 +7171,12 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div.
}

body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm-3, body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm-3 {
position: relative;
max-width: unset;
width: 100%;
height: unset;
max-width: 130px;
width: 130px;
height: 180px;
margin: 0;
padding: 15px;
position: absolute
}

body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9, body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9 {
@@ -7166,6 +7185,10 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div.
width: 100%
}

body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9 > .form-group:nth-child(1), body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9 > .form-group:nth-child(2), body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9 > .form-group:nth-child(1), body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9 > .form-group:nth-child(2) {
padding-left: 120px
}

#deleteButton, body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm-3 > div.text-center > #delete, body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm-3 > div.text-center > #delete {
top: 48px;
height: 42px
@@ -7286,11 +7309,6 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div.
float: right
}

body.blur #main-nav + #scnd-nav .create-shelf, body.blur #main-nav + .col-sm-2 #scnd-nav .create-shelf {
float: none;
margin: 5px 0 10px -10px;
}

#main-nav + #scnd-nav .nav-head.hidden-xs {
display: list-item !important;
width: 225px
@@ -7951,5 +7969,3 @@ div.comments[data-readmore] {
transition: height 300ms;
overflow: hidden
}

.dropdown-menu > .offscreen
@@ -22,7 +22,3 @@ body.serieslist.grid-view div.container-fluid > div > div.col-sm-10::before {
padding: 0 0;
line-height: 15px;
}

input.datepicker {color: transparent}
input.datepicker:focus {color: transparent}
input.datepicker:focus + input {color: #555}
@@ -15,5 +15,5 @@

.blackTheme {
background: #000;
color: #fff;
color: #fff
}
@@ -149,20 +149,6 @@ body {
word-wrap: break-word;
}

#mainContent > canvas {
display: block;
margin-left: auto;
margin-right: auto;
}

.long-strip > .mainImage {
margin-bottom: 4px;
}

.long-strip > .mainImage:last-child {
margin-bottom: 0px !important;
}

#titlebar {
min-height: 25px;
height: auto;
@ -1,3 +0,0 @@
|
||||
<svg width="12" height="13" viewBox="0 0 12 13" fill="none" xmlns="http://www.w3.org/2000/svg">
|
||||
<path d="M5.375 7.625V11.875C5.375 12.0408 5.44085 12.1997 5.55806 12.3169C5.67527 12.4342 5.83424 12.5 6 12.5C6.16576 12.5 6.32473 12.4342 6.44194 12.3169C6.55915 12.1997 6.625 12.0408 6.625 11.875V7.625L7.125 7.125H11.375C11.5408 7.125 11.6997 7.05915 11.8169 6.94194C11.9342 6.82473 12 6.66576 12 6.5C12 6.33424 11.9342 6.17527 11.8169 6.05806C11.6997 5.94085 11.5408 5.875 11.375 5.875H7.125L6.625 5.375V1.125C6.625 0.95924 6.55915 0.800269 6.44194 0.683058C6.32473 0.565848 6.16576 0.5 6 0.5C5.83424 0.5 5.67527 0.565848 5.55806 0.683058C5.44085 0.800269 5.375 0.95924 5.375 1.125V5.375L4.875 5.875H0.625C0.45924 5.875 0.300269 5.94085 0.183058 6.05806C0.065848 6.17527 0 6.33424 0 6.5C0 6.66576 0.065848 6.82473 0.183058 6.94194C0.300269 7.05915 0.45924 7.125 0.625 7.125H4.762L5.375 7.625Z" fill="black"/>
|
||||
</svg>
|
Before Width: | Height: | Size: 920 B |
@ -1,3 +0,0 @@
|
||||
<svg width="12" height="13" viewBox="0 0 12 13" fill="none" xmlns="http://www.w3.org/2000/svg">
|
||||
<path d="M6 0.5C5.21207 0.5 4.43185 0.655195 3.7039 0.956723C2.97595 1.25825 2.31451 1.70021 1.75736 2.25736C1.20021 2.81451 0.758251 3.47595 0.456723 4.2039C0.155195 4.93185 0 5.71207 0 6.5C0 7.28793 0.155195 8.06815 0.456723 8.7961C0.758251 9.52405 1.20021 10.1855 1.75736 10.7426C2.31451 11.2998 2.97595 11.7417 3.7039 12.0433C4.43185 12.3448 5.21207 12.5 6 12.5C7.5913 12.5 9.11742 11.8679 10.2426 10.7426C11.3679 9.61742 12 8.0913 12 6.5C12 4.9087 11.3679 3.38258 10.2426 2.25736C9.11742 1.13214 7.5913 0.5 6 0.5ZM5.06 8.9L2.9464 6.7856C2.85273 6.69171 2.80018 6.56446 2.80033 6.43183C2.80048 6.29921 2.85331 6.17207 2.9472 6.0784C3.04109 5.98473 3.16834 5.93218 3.30097 5.93233C3.43359 5.93248 3.56073 5.98531 3.6544 6.0792L5.3112 7.7368L8.3464 4.7008C8.44109 4.6109 8.56715 4.56153 8.69771 4.56322C8.82827 4.56492 8.95301 4.61754 9.04534 4.70986C9.13766 4.80219 9.19028 4.92693 9.19198 5.05749C9.19367 5.18805 9.1443 5.31411 9.0544 5.4088L5.5624 8.9H5.06Z" fill="#FBFBFE"/>
|
||||
</svg>
|
Before Width: | Height: | Size: 1.1 KiB |
@ -1,6 +0,0 @@
|
||||
<!-- This Source Code Form is subject to the terms of the Mozilla Public
|
||||
- License, v. 2.0. If a copy of the MPL was not distributed with this
|
||||
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
|
||||
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" height="40" width="40">
|
||||
<path d="M9 3.5a1.5 1.5 0 0 0-3-.001v7.95C6 12.83 7.12 14 8.5 14s2.5-1.17 2.5-2.55V5.5a.5.5 0 0 1 1 0v6.03C11.955 13.427 10.405 15 8.5 15S5.044 13.426 5 11.53V3.5a2.5 2.5 0 0 1 5 0v7.003a1.5 1.5 0 0 1-3-.003v-5a.5.5 0 0 1 1 0v5a.5.5 0 0 0 1 0Z"/>
|
||||
</svg>
|
Before Width: | Height: | Size: 552 B |
@ -1,7 +0,0 @@
|
||||
<!-- This Source Code Form is subject to the terms of the Mozilla Public
|
||||
- License, v. 2.0. If a copy of the MPL was not distributed with this
|
||||
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
|
||||
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" height="40" width="40">
|
||||
<path d="M8.156 12.5a.99.99 0 0 0 .707-.294l.523-2.574L10.5 8.499l1.058-1.04 2.65-.601a.996.996 0 0 0 0-1.414l-3.657-3.658a.996.996 0 0 0-1.414 0l-.523 2.576L7.5 5.499 6.442 6.535l-2.65.6a.996.996 0 0 0 0 1.413l3.657 3.658a.999.999 0 0 0 .707.295z"/>
|
||||
<path d="M9.842.996c-.386 0-.77.146-1.06.44a.5.5 0 0 0-.136.251l-.492 2.43-1.008 1.03-.953.933-2.511.566a.5.5 0 0 0-.243.133 1.505 1.505 0 0 0-.002 2.123l1.477 1.477-2.768 2.767a.5.5 0 0 0 0 .707.5.5 0 0 0 .708 0l2.767-2.767 1.475 1.474a1.494 1.494 0 0 0 2.123-.002.5.5 0 0 0 .135-.254l.492-2.427 1.008-1.024.953-.937 2.511-.57a.5.5 0 0 0 .243-.132c.586-.58.583-1.543.002-2.125l-3.659-3.656A1.501 1.501 0 0 0 9.842.996Zm.05 1.025a.394.394 0 0 1 .305.12l3.658 3.657c.18.18.141.432.002.627l-2.41.545a.5.5 0 0 0-.24.131L10.15 8.142a.5.5 0 0 0-.007.006L9.029 9.283a.5.5 0 0 0-.133.25l-.48 2.36c-.082.053-.165.109-.26.109a.492.492 0 0 1-.353-.149L4.145 8.195c-.18-.18-.141-.432-.002-.627l2.41-.545a.5.5 0 0 0 .238-.13L7.85 5.857a.5.5 0 0 0 .007-.008l1.114-1.138a.5.5 0 0 0 .133-.25l.472-2.323a.619.619 0 0 1 .317-.117Z"/>
|
||||
</svg>
|
Before Width: | Height: | Size: 1.3 KiB |
@ -1,6 +0,0 @@
|
||||
<svg width="18" height="19" viewBox="0 0 18 19" fill="none" xmlns="http://www.w3.org/2000/svg">
|
||||
<path fill-rule="evenodd" clip-rule="evenodd" d="M12.2 3.09C12.28 3.01 12.43 3 12.43 3C12.48 3 12.58 3.02 12.66 3.1L14.45 4.89C14.58 5.02 14.58 5.22 14.45 5.35L11.7713 8.02872L9.51628 5.77372L12.2 3.09ZM13.2658 5.12L11.7713 6.6145L10.9305 5.77372L12.425 4.27921L13.2658 5.12Z" fill="#FBFBFE"/>
|
||||
<path fill-rule="evenodd" clip-rule="evenodd" d="M5.98 9.32L8.23 11.57L10.7106 9.08938L8.45562 6.83438L5.98 9.31V9.32ZM8.23 10.1558L9.29641 9.08938L8.45562 8.24859L7.38921 9.315L8.23 10.1558Z" fill="#FBFBFE"/>
|
||||
<path fill-rule="evenodd" clip-rule="evenodd" d="M10.1526 13.1816L16.2125 7.1217C16.7576 6.58919 17.05 5.8707 17.05 5.12C17.05 4.36931 16.7576 3.65084 16.2126 3.11834L14.4317 1.33747C13.8992 0.79242 13.1807 0.5 12.43 0.5C11.6643 0.5 10.9529 0.812929 10.4329 1.33289L3.68289 8.08289C3.04127 8.72452 3.00459 9.75075 3.57288 10.4363L1.29187 12.7239C1.09186 12.9245 0.990263 13.1957 1.0007 13.4685L1 14.5C0.447715 14.5 0 14.9477 0 15.5V17.5C0 18.0523 0.447715 18.5 1 18.5H16C16.5523 18.5 17 18.0523 17 17.5V15.5C17 14.9477 16.5523 14.5 16 14.5H10.2325C9.83594 14.5 9.39953 13.9347 10.1526 13.1816ZM4.39 9.85L4.9807 10.4407L2.39762 13.0312H6.63877L7.10501 12.565L7.57125 13.0312H8.88875L15.51 6.41C15.86 6.07 16.05 5.61 16.05 5.12C16.05 4.63 15.86 4.17 15.51 3.83L13.72 2.04C13.38 1.69 12.92 1.5 12.43 1.5C11.94 1.5 11.48 1.7 11.14 2.04L4.39 8.79C4.1 9.08 4.1 9.56 4.39 9.85ZM16 17.5V15.5H1V17.5H16Z" fill="#FBFBFE"/>
|
||||
<path d="M15.1616 6.05136L15.1616 6.05132L15.1564 6.05645L8.40645 12.8064C8.35915 12.8537 8.29589 12.88 8.23 12.88C8.16411 12.88 8.10085 12.8537 8.05355 12.8064L7.45857 12.2115L7.10501 11.8579L6.75146 12.2115L6.03289 12.93H3.20465L5.33477 10.7937L5.6873 10.4402L5.33426 10.0871L4.74355 9.49645C4.64882 9.40171 4.64882 9.23829 4.74355 9.14355L11.4936 2.39355C11.7436 2.14354 12.0779 2 12.43 2C12.7883 2 13.1179 2.13776 13.3614 2.38839L13.3613 2.38843L13.3664 2.39355L15.1564 4.18355L15.1564 4.18359L15.1616 4.18864C15.4122 4.43211 15.55 4.76166 15.55 5.12C15.55 5.47834 15.4122 5.80789 15.1616 6.05136ZM7.87645 11.9236L8.23 12.2771L8.58355 11.9236L11.0642 9.44293L11.4177 9.08938L11.0642 8.73582L8.80918 6.48082L8.45562 6.12727L8.10207 6.48082L5.62645 8.95645L5.48 9.10289V9.31V9.32V9.52711L5.62645 9.67355L7.87645 11.9236ZM11.4177 8.38227L11.7713 8.73582L12.1248 8.38227L14.8036 5.70355C15.1288 5.37829 15.1288 4.86171 14.8036 4.53645L13.0136 2.74645C12.8186 2.55146 12.5792 2.5 12.43 2.5H12.4134L12.3967 2.50111L12.43 3C12.3967 2.50111 12.3966 2.50112 12.3965 2.50112L12.3963 2.50114L12.3957 2.50117L12.3947 2.50125L12.3924 2.50142L12.387 2.50184L12.3732 2.50311C12.3628 2.50416 12.3498 2.50567 12.3346 2.50784C12.3049 2.51208 12.2642 2.51925 12.2178 2.53146C12.1396 2.55202 11.9797 2.60317 11.8464 2.73645L9.16273 5.42016L8.80918 5.77372L9.16273 6.12727L11.4177 8.38227ZM1.5 16H15.5V17H1.5V16Z" stroke="#15141A"/>
|
||||
</svg>
|
Before Width: | Height: | Size: 2.9 KiB |
@ -1,3 +0,0 @@
|
||||
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
|
||||
<path d="M12 2.75H12.5V2.25V1V0.5H12H10.358C9.91165 0.5 9.47731 0.625661 9.09989 0.860442L9.09886 0.861087L8 1.54837L6.89997 0.860979L6.89911 0.860443C6.5218 0.625734 6.08748 0.5 5.642 0.5H4H3.5V1V2.25V2.75H4H5.642C5.66478 2.75 5.6885 2.75641 5.71008 2.76968C5.71023 2.76977 5.71038 2.76986 5.71053 2.76995L6.817 3.461C6.81704 3.46103 6.81709 3.46105 6.81713 3.46108C6.81713 3.46108 6.81713 3.46108 6.81714 3.46109C6.8552 3.48494 6.876 3.52285 6.876 3.567V8V12.433C6.876 12.4771 6.85523 12.515 6.81722 12.5389C6.81715 12.5389 6.81707 12.539 6.817 12.539L5.70953 13.23C5.70941 13.2301 5.70929 13.2302 5.70917 13.2303C5.68723 13.2438 5.6644 13.25 5.641 13.25H4H3.5V13.75V15V15.5H4H5.642C6.08835 15.5 6.52269 15.3743 6.90011 15.1396L6.90086 15.1391L8 14.4526L9.10003 15.14L9.10089 15.1406C9.47831 15.3753 9.91265 15.501 10.359 15.501H12H12.5V15.001V13.751V13.251H12H10.358C10.3352 13.251 10.3115 13.2446 10.2899 13.2313C10.2897 13.2312 10.2896 13.2311 10.2895 13.231L9.183 12.54C9.18298 12.54 9.18295 12.54 9.18293 12.54C9.18291 12.5399 9.18288 12.5399 9.18286 12.5399C9.14615 12.5169 9.125 12.4797 9.125 12.434V8V3.567C9.125 3.52266 9.14603 3.48441 9.18364 3.4606C9.18377 3.46052 9.1839 3.46043 9.18404 3.46035L10.2895 2.76995C10.2896 2.76985 10.2898 2.76975 10.2899 2.76966C10.3119 2.75619 10.3346 2.75 10.358 2.75H12Z" fill="black" stroke="white"/>
|
||||
</svg>
|
Before Width: | Height: | Size: 1.4 KiB |
@ -1,4 +0,0 @@
|
||||
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
|
||||
<path d="M0.0189877 13.6645L0.612989 10.4635C0.687989 10.0545 0.884989 9.6805 1.18099 9.3825L9.98199 0.5805C10.756 -0.1925 12.015 -0.1945 12.792 0.5805L14.42 2.2085C15.194 2.9835 15.194 4.2435 14.42 5.0185L5.61599 13.8215C5.31999 14.1165 4.94599 14.3125 4.53799 14.3875L1.33599 14.9815C1.26599 14.9935 1.19799 15.0005 1.12999 15.0005C0.832989 15.0005 0.544988 14.8835 0.330988 14.6695C0.0679874 14.4055 -0.0490122 14.0305 0.0189877 13.6645Z" fill="white"/>
|
||||
<path d="M0.0189877 13.6645L0.612989 10.4635C0.687989 10.0545 0.884989 9.6805 1.18099 9.3825L9.98199 0.5805C10.756 -0.1925 12.015 -0.1945 12.792 0.5805L14.42 2.2085C15.194 2.9835 15.194 4.2435 14.42 5.0185L5.61599 13.8215C5.31999 14.1165 4.94599 14.3125 4.53799 14.3875L1.33599 14.9815C1.26599 14.9935 1.19799 15.0005 1.12999 15.0005C0.832989 15.0005 0.544988 14.8835 0.330988 14.6695C0.0679874 14.4055 -0.0490122 14.0305 0.0189877 13.6645ZM12.472 5.1965L13.632 4.0365L13.631 3.1885L11.811 1.3675L10.963 1.3685L9.80299 2.5285L12.472 5.1965ZM4.31099 13.1585C4.47099 13.1285 4.61799 13.0515 4.73399 12.9345L11.587 6.0815L8.91899 3.4135L2.06599 10.2655C1.94899 10.3835 1.87199 10.5305 1.84099 10.6915L1.36699 13.2485L1.75199 13.6335L4.31099 13.1585Z" fill="black"/>
|
||||
</svg>
|
Before Width: | Height: | Size: 1.3 KiB |
@ -1,8 +0,0 @@
<svg width="29" height="32" viewBox="0 0 29 32" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M28 16.75C28.2761 16.75 28.5 16.5261 28.5 16.25V15C28.5 14.7239 28.2761 14.5 28 14.5H26.358C25.9117 14.5 25.4773 14.6257 25.0999 14.8604L25.0989 14.8611L24 15.5484L22.9 14.861L22.8991 14.8604C22.5218 14.6257 22.0875 14.5 21.642 14.5H20C19.7239 14.5 19.5 14.7239 19.5 15V16.25C19.5 16.5261 19.7239 16.75 20 16.75H21.642C21.6648 16.75 21.6885 16.7564 21.7101 16.7697C21.7102 16.7698 21.7104 16.7699 21.7105 16.77L22.817 17.461C22.817 17.461 22.8171 17.4611 22.8171 17.4611C22.8171 17.4611 22.8171 17.4611 22.8171 17.4611C22.8552 17.4849 22.876 17.5229 22.876 17.567V22.625V27.683C22.876 27.7271 22.8552 27.765 22.8172 27.7889C22.8171 27.7889 22.8171 27.789 22.817 27.789L21.7095 28.48C21.7094 28.4801 21.7093 28.4802 21.7092 28.4803C21.6872 28.4938 21.6644 28.5 21.641 28.5H20C19.7239 28.5 19.5 28.7239 19.5 29V30.25C19.5 30.5261 19.7239 30.75 20 30.75H21.642C22.0883 30.75 22.5227 30.6243 22.9001 30.3896L22.9009 30.3891L24 29.7026L25.1 30.39L25.1009 30.3906C25.4783 30.6253 25.9127 30.751 26.359 30.751H28C28.2761 30.751 28.5 30.5271 28.5 30.251V29.001C28.5 28.7249 28.2761 28.501 28 28.501H26.358C26.3352 28.501 26.3115 28.4946 26.2899 28.4813C26.2897 28.4812 26.2896 28.4811 26.2895 28.481L25.183 27.79C25.183 27.79 25.183 27.79 25.1829 27.79C25.1829 27.7899 25.1829 27.7899 25.1829 27.7899C25.1462 27.7669 25.125 27.7297 25.125 27.684V22.625V17.567C25.125 17.5227 25.146 17.4844 25.1836 17.4606C25.1838 17.4605 25.1839 17.4604 25.184 17.4603L26.2895 16.77C26.2896 16.7699 26.2898 16.7698 26.2899 16.7697C26.3119 16.7562 26.3346 16.75 26.358 16.75H28Z" fill="black" stroke="#FBFBFE" stroke-linejoin="round"/>
<path d="M24.625 17.567C24.625 17.35 24.735 17.152 24.918 17.037L26.026 16.345C26.126 16.283 26.24 16.25 26.358 16.25H28V15H26.358C26.006 15 25.663 15.099 25.364 15.285L24.256 15.978C24.161 16.037 24.081 16.113 24 16.187C23.918 16.113 23.839 16.037 23.744 15.978L22.635 15.285C22.336 15.099 21.993 15 21.642 15H20V16.25H21.642C21.759 16.25 21.874 16.283 21.974 16.345L23.082 17.037C23.266 17.152 23.376 17.35 23.376 17.567V22.625V27.683C23.376 27.9 23.266 28.098 23.082 28.213L21.973 28.905C21.873 28.967 21.759 29 21.641 29H20V30.25H21.642C21.994 30.25 22.337 30.151 22.636 29.965L23.744 29.273C23.84 29.213 23.919 29.137 24 29.064C24.081 29.137 24.161 29.213 24.256 29.273L25.365 29.966C25.664 30.152 26.007 30.251 26.359 30.251H28V29.001H26.358C26.241 29.001 26.126 28.968 26.026 28.906L24.918 28.214C24.734 28.099 24.625 27.901 24.625 27.684V22.625V17.567Z" fill="black"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M12.2 2.59C12.28 2.51 12.43 2.5 12.43 2.5C12.48 2.5 12.58 2.52 12.66 2.6L14.45 4.39C14.58 4.52 14.58 4.72 14.45 4.85L11.7713 7.52872L9.51628 5.27372L12.2 2.59ZM13.2658 4.62L11.7713 6.1145L10.9305 5.27372L12.425 3.77921L13.2658 4.62Z" fill="#FBFBFE"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M5.98 8.82L8.23 11.07L10.7106 8.58938L8.45562 6.33438L5.98 8.81V8.82ZM8.23 9.65579L9.29641 8.58938L8.45562 7.74859L7.38921 8.815L8.23 9.65579Z" fill="#FBFBFE"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M10.1526 12.6816L16.2125 6.6217C16.7576 6.08919 17.05 5.3707 17.05 4.62C17.05 3.86931 16.7576 3.15084 16.2126 2.61834L14.4317 0.837474C13.8992 0.29242 13.1807 0 12.43 0C11.6643 0 10.9529 0.312929 10.4329 0.832893L3.68289 7.58289C3.04127 8.22452 3.00459 9.25075 3.57288 9.93634L1.29187 12.2239C1.09186 12.4245 0.990263 12.6957 1.0007 12.9685L1 14C0.447715 14 0 14.4477 0 15V17C0 17.5523 0.447715 18 1 18H16C16.5523 18 17 17.5523 17 17V15C17 14.4477 16.5523 14 16 14H10.2325C9.83594 14 9.39953 13.4347 10.1526 12.6816ZM4.39 9.35L4.9807 9.9407L2.39762 12.5312H6.63877L7.10501 12.065L7.57125 12.5312H8.88875L15.51 5.91C15.86 5.57 16.05 5.11 16.05 4.62C16.05 4.13 15.86 3.67 15.51 3.33L13.72 1.54C13.38 1.19 12.92 1 12.43 1C11.94 1 11.48 1.2 11.14 1.54L4.39 8.29C4.1 8.58 4.1 9.06 4.39 9.35ZM16 17V15H1V17H16Z" fill="#FBFBFE"/>
<path d="M15.1616 5.55136L15.1616 5.55132L15.1564 5.55645L8.40645 12.3064C8.35915 12.3537 8.29589 12.38 8.23 12.38C8.16411 12.38 8.10085 12.3537 8.05355 12.3064L7.45857 11.7115L7.10501 11.3579L6.75146 11.7115L6.03289 12.43H3.20465L5.33477 10.2937L5.6873 9.94019L5.33426 9.58715L4.74355 8.99645C4.64882 8.90171 4.64882 8.73829 4.74355 8.64355L11.4936 1.89355C11.7436 1.64354 12.0779 1.5 12.43 1.5C12.7883 1.5 13.1179 1.63776 13.3614 1.88839L13.3613 1.88843L13.3664 1.89355L15.1564 3.68355L15.1564 3.68359L15.1616 3.68864C15.4122 3.93211 15.55 4.26166 15.55 4.62C15.55 4.97834 15.4122 5.30789 15.1616 5.55136ZM5.48 8.82V9.02711L5.62645 9.17355L7.87645 11.4236L8.23 11.7771L8.58355 11.4236L11.0642 8.94293L11.4177 8.58938L11.0642 8.23582L8.80918 5.98082L8.45562 5.62727L8.10207 5.98082L5.62645 8.45645L5.48 8.60289V8.81V8.82ZM11.4177 7.88227L11.7713 8.23582L12.1248 7.88227L14.8036 5.20355C15.1288 4.87829 15.1288 4.36171 14.8036 4.03645L13.0136 2.24645C12.8186 2.05146 12.5792 2 12.43 2H12.4134L12.3967 2.00111L12.43 2.5C12.3967 2.00111 12.3966 2.00112 12.3965 2.00112L12.3963 2.00114L12.3957 2.00117L12.3947 2.00125L12.3924 2.00142L12.387 2.00184L12.3732 2.00311C12.3628 2.00416 12.3498 2.00567 12.3346 2.00784C12.3049 2.01208 12.2642 2.01925 12.2178 2.03146C12.1396 2.05202 11.9797 2.10317 11.8464 2.23645L9.16273 4.92016L8.80918 5.27372L9.16273 5.62727L11.4177 7.88227ZM1.5 16.5V15.5H15.5V16.5H1.5Z" stroke="#15141A"/>
</svg>
Before Width: | Height: | Size: 5.3 KiB |
@ -1,5 +0,0 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" clip-rule="evenodd"
d="M11 3H13.6C14 3 14.3 3.3 14.3 3.6C14.3 3.9 14 4.2 13.7 4.2H13.3V14C13.3 15.1 12.4 16 11.3 16H4.80005C3.70005 16 2.80005 15.1 2.80005 14V4.2H2.40005C2.00005 4.2 1.80005 4 1.80005 3.6C1.80005 3.2 2.00005 3 2.40005 3H5.00005V2C5.00005 0.9 5.90005 0 7.00005 0H9.00005C10.1 0 11 0.9 11 2V3ZM6.90005 1.2L6.30005 1.8V3H9.80005V1.8L9.20005 1.2H6.90005ZM11.4 14.7L12 14.1V4.2H4.00005V14.1L4.60005 14.7H11.4ZM7.00005 12.4C7.00005 12.7 6.70005 13 6.40005 13C6.10005 13 5.80005 12.7 5.80005 12.4V7.6C5.70005 7.3 6.00005 7 6.40005 7C6.80005 7 7.00005 7.3 7.00005 7.6V12.4ZM10.2001 12.4C10.2001 12.7 9.90006 13 9.60006 13C9.30006 13 9.00006 12.7 9.00006 12.4V7.6C9.00006 7.3 9.30006 7 9.60006 7C9.90006 7 10.2001 7.3 10.2001 7.6V12.4Z"
fill="black" />
</svg>
Before Width: | Height: | Size: 909 B |
6
cps/static/css/libs/images/findbarButton-next-dark.svg
Normal file
@ -0,0 +1,6 @@
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16"
fill="rgba(255,255,255,1)"><path d="M8 12a1 1 0 0 1-.707-.293l-5-5a1 1 0 0 1 1.414-1.414L8
9.586l4.293-4.293a1 1 0 0 1 1.414 1.414l-5 5A1 1 0 0 1 8 12z"></path></svg>
After Width: | Height: | Size: 461 B |
@ -1,3 +1,4 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M10.999 8.352L5.534 13.818C5.41551 13.9303 5.25786 13.9918 5.09466 13.9895C4.93146 13.9872 4.77561 13.9212 4.66033 13.8057C4.54505 13.6902 4.47945 13.5342 4.47752 13.3709C4.47559 13.2077 4.53748 13.0502 4.65 12.932L9.585 7.998L4.651 3.067C4.53862 2.94864 4.47691 2.79106 4.47903 2.62786C4.48114 2.46466 4.54692 2.30874 4.66233 2.19333C4.77774 2.07792 4.93366 2.01215 5.09686 2.01003C5.26006 2.00792 5.41763 2.06962 5.536 2.182L11 7.647L10.999 8.352Z" fill="black"/>
</svg>
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16"><path d="M8 12a1 1 0 0 1-.707-.293l-5-5a1 1 0 0 1 1.414-1.414L8 9.586l4.293-4.293a1 1 0 0 1 1.414 1.414l-5 5A1 1 0 0 1 8 12z"></path></svg>
Before Width: | Height: | Size: 578 B After Width: | Height: | Size: 434 B |
@ -0,0 +1,5 @@
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16"
fill="rgba(255,255,255,1)"><path d="M13 11a1 1 0 0 1-.707-.293L8 6.414l-4.293 4.293a1 1 0 0 1-1.414-1.414l5-5a1 1 0 0 1 1.414 0l5 5A1 1 0 0 1 13 11z"></path></svg>
After Width: | Height: | Size: 458 B |
@ -1,3 +1,4 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M5.001 8.352L10.466 13.818C10.5845 13.9303 10.7421 13.9918 10.9053 13.9895C11.0685 13.9872 11.2244 13.9212 11.3397 13.8057C11.4549 13.6902 11.5205 13.5342 11.5225 13.3709C11.5244 13.2077 11.4625 13.0502 11.35 12.932L6.416 7.999L11.349 3.067C11.4614 2.94864 11.5231 2.79106 11.521 2.62786C11.5189 2.46466 11.4531 2.30874 11.3377 2.19333C11.2223 2.07792 11.0663 2.01215 10.9031 2.01003C10.7399 2.00792 10.5824 2.06962 10.464 2.182L5 7.647L5.001 8.352Z" fill="black"/>
</svg>
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16"><path d="M13 11a1 1 0 0 1-.707-.293L8 6.414l-4.293 4.293a1 1 0 0 1-1.414-1.414l5-5a1 1 0 0 1 1.414 0l5 5A1 1 0 0 1 13 11z"></path></svg>
Before Width: | Height: | Size: 578 B After Width: | Height: | Size: 431 B |