mirror of https://github.com/janeczku/calibre-web synced 2025-09-08 22:06:00 +00:00

Merge remote-tracking branch 'upstream'

Oskar Manhart
2025-05-04 11:46:07 +02:00
169 changed files with 37371 additions and 20117 deletions

View File

@@ -11,7 +11,7 @@ assignees: ''
After 6 years of more or less intensive programming on Calibre-Web, I need a break.
The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
-I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
+I have turned off all notifications from GitHub/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.

View File

@@ -11,7 +11,7 @@ assignees: ''
After 6 years of more or less intensive programming on Calibre-Web, I need a break.
The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
-I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
+I have turned off all notifications from GitHub/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.
Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)

View File

@@ -20,7 +20,7 @@ Some of the user languages in Calibre-Web having missing translations. We are ha
### **Documentation**
-The Calibre-Web documentation is hosted in the Github [Wiki](https://github.com/janeczku/calibre-web/wiki). The Wiki is open to everybody, if you find a problem, feel free to correct it. If information is missing, you are welcome to add it. The content will be reviewed time by time. Please try to be consistent with the form with the other Wiki pages (e.g. the project name is Calibre-Web with 2 capital letters and a dash in between).
+The Calibre-Web documentation is hosted in the GitHub [Wiki](https://github.com/janeczku/calibre-web/wiki). The Wiki is open to everybody, if you find a problem, feel free to correct it. If information is missing, you are welcome to add it. The content will be reviewed time by time. Please try to be consistent with the form with the other Wiki pages (e.g. the project name is Calibre-Web with 2 capital letters and a dash in between).
### **Reporting a bug**
@@ -28,12 +28,12 @@ Do not open up a GitHub issue if the bug is a **security vulnerability** in Cali
Ensure the **bug was not already reported** by searching on GitHub under [Issues](https://github.com/janeczku/calibre-web/issues). Please also check if a solution for your problem can be found in the [wiki](https://github.com/janeczku/calibre-web/wiki).
-If you're unable to find an **open issue** addressing the problem, open a [new one](https://github.com/janeczku/calibre-web/issues/new/choose). Be sure to include a **title** and **clear description**, as much relevant information as possible, the **issue form** helps you providing the right information. Deleting the form and just pasting the stack trace doesn't speed up fixing the problem. If your issue could be resolved, consider closing the issue.
+If you're unable to find an **open issue** addressing the problem, open a [new one](https://github.com/janeczku/calibre-web/issues/new/choose). Be sure to include a **title** and **clear description**, as much relevant information as possible, the **issue form** helps you provide the right information. Deleting the form and just pasting the stack trace doesn't speed up fixing the problem. If your issue could be resolved, consider closing the issue.
### **Feature Request**
If there is a feature missing in Calibre-Web and you can't find a feature request in the [Issues](https://github.com/janeczku/calibre-web/issues) section, you could create a [feature request](https://github.com/janeczku/calibre-web/issues/new?assignees=&labels=&template=feature_request.md&title=).
-We will not extend Calibre-Web with any more login abilities or add further files storages, or file syncing ability. Furthermore Calibre-Web is made for home usage for company in-house usage, so requests regarding any sorts of social interaction capability, payment routines, search engine or web site analytics integration will not be implemented.
+We will not extend Calibre-Web with any more login abilities or add further files storages, or file syncing ability. Calibre-Web is made for home usage for company in-house usage, so requests regarding any sorts of social interaction capability, payment routines, search engine or website analytics integration will not be implemented.
### **Contributing code to Calibre-Web**
@@ -42,5 +42,5 @@ Open a new GitHub pull request with the patch. Ensure the PR description clearly
In case your code enhances features of Calibre-Web: Create your pull request for the development branch if your enhancement consists of more than some lines of code in a local section of Calibre-Webs code. This makes it easier to test it and check all implication before it's made public.
Please check if your code runs with python 3, python 2 is no longer supported. If possible and the feature is related to operating system functions, try to check it on Windows and Linux.
-Calibre-Web is automatically tested on Linux in combination with python 3.8. The code for testing is in a [separate repo](https://github.com/OzzieIsaacs/calibre-web-test) on Github. It uses unit tests and performs real system tests with selenium; it would be great if you could consider also writing some tests.
+Calibre-Web is automatically tested on Linux in combination with python 3.8. The code for testing is in a [separate repo](https://github.com/OzzieIsaacs/calibre-web-test) on GitHub. It uses unit tests and performs real system tests with selenium; it would be great if you could consider also writing some tests.
A static code analysis is done by Codacy, but it's partly broken and doesn't run automatically. You could check your code with ESLint before contributing, a configuration file can be found in the projects root folder.
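As an editorial aside on the ESLint note above, a minimal sketch of such a check run from the repository root; the target path is an assumption, so point it at the JavaScript files you actually changed:
```
# uses the ESLint configuration file found in the project's root folder
npx eslint cps/static/js/
```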

View File

@@ -1 +1,3 @@
graft src/calibreweb
+global-exclude __pycache__
+global-exclude *.pyc

README.md

@@ -1,10 +1,3 @@
-# Short Notice from the maintainer
-After 6 years of more or less intensive programming on Calibre-Web, I need a break.
-The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
-I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
-I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.
# Calibre-Web
Calibre-Web is a web app that offers a clean and intuitive interface for browsing, reading, and downloading eBooks using a valid [Calibre](https://calibre-ebook.com) database.
@@ -26,13 +19,13 @@ Calibre-Web is a web app that offers a clean and intuitive interface for browsin
- [Quick start](#quick-start)
- [Requirements](#requirements)
4. [Docker Images](#docker-images)
-5. [Contributor Recognition](#contributor-recognition)
-6. [Contact](#contact)
-7. [Contributing to Calibre-Web](#contributing-to-calibre-web)
+5. [Troubleshooting](#troubleshooting)
+6. [Contributor Recognition](#contributor-recognition)
+7. [Contact](#contact)
+8. [Contributing to Calibre-Web](#contributing-to-calibre-web)
</details>
*This software is a fork of [library](https://github.com/mutschler/calibreserver) and licensed under the GPL v3 License.*
![Main screen](https://github.com/janeczku/calibre-web/wiki/images/main_screen.png)
@@ -64,52 +57,102 @@ Calibre-Web is a web app that offers a clean and intuitive interface for browsin
## Installation
-#### Installation via pip (recommended)
-1. Create a virtual environment for Calibre-Web to avoid conflicts with existing Python dependencies
-2. Install Calibre-Web via pip: `pip install calibreweb` (or `pip3` depending on your OS/distro)
-3. Install optional features via pip as needed, see [this page](https://github.com/janeczku/calibre-web/wiki/Dependencies-in-Calibre-Web-Linux-and-Windows) for details
-4. Start Calibre-Web by typing `cps`
-*Note: Raspberry Pi OS users may encounter issues during installation. If so, please update pip (`./venv/bin/python3 -m pip install --upgrade pip`) and/or install cargo (`sudo apt install cargo`) before retrying the installation.*
-Refer to the Wiki for additional installation examples: [manual installation](https://github.com/janeczku/calibre-web/wiki/Manual-installation), [Linux Mint](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-in-Linux-Mint-19-or-20), [Cloud Provider](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-on-a-Cloud-Provider).
+### Installation via pip (recommended)
+1. **Create a virtual environment**: It's essential to isolate your Calibre-Web installation to avoid dependency conflicts. You can create a virtual environment by running:
+```
+python3 -m venv calibre-web-env
+```
+2. **Activate the virtual environment**:
+```
+source calibre-web-env/bin/activate
+```
+3. **Install Calibre-Web**: Use pip to install the application:
+```
+pip install calibreweb
+```
+4. **Install optional features**: For additional functionality, you may need to install optional features. Refer to [this page](https://github.com/janeczku/calibre-web/wiki/Dependencies-in-Calibre-Web-Linux-and-Windows) for details on what can be installed.
+5. **Start Calibre-Web**: After installation, you can start the application with:
+```
+cps
+```
+*Note: Users of Raspberry Pi OS may encounter installation issues. If you do, try upgrading pip and/or installing cargo as follows:*
+```
+./venv/bin/python3 -m pip install --upgrade pip
+sudo apt install cargo
+```
+### Important Links
+- For additional installation examples, check the following:
+  - [Manual installation](https://github.com/janeczku/calibre-web/wiki/Manual-installation)
+  - [Linux Mint installation](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-in-Linux-Mint-19-or-20)
+  - [Cloud Provider setup](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-on-a-Cloud-Provider)
## Quick Start
-1. Open your browser and navigate to `http://localhost:8083` or `http://localhost:8083/opds` for the OPDS catalog
-2. Log in with the default admin credentials
-3. If you don't have a Calibre database, you can use [this database](https://github.com/janeczku/calibre-web/raw/master/library/metadata.db) (move it out of the Calibre-Web folder to prevent overwriting during updates)
-4. Set `Location of Calibre database` to the path of the folder containing your Calibre library (metadata.db) and click "Save"
-5. Optionally, use Google Drive to host your Calibre library by following the [Google Drive integration guide](https://github.com/janeczku/calibre-web/wiki/G-Drive-Setup#using-google-drive-integration)
-6. Configure your Calibre-Web instance via the admin page, referring to the [Basic Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#basic-configuration) and [UI Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#ui-configuration) guides
-#### Default Admin Login:
-- **Username:** admin
-- **Password:** admin123
+1. **Access Calibre-Web**: Open your browser and navigate to:
+```
+http://localhost:8083
+```
+or for the OPDS catalog:
+```
+http://localhost:8083/opds
+```
+2. **Log in**: Use the default admin credentials:
+- **Username:** admin
+- **Password:** admin123
+3. **Database Setup**: If you do not have a Calibre database, download a sample from:
+```
+https://github.com/janeczku/calibre-web/raw/master/library/metadata.db
+```
+Move it out of the Calibre-Web folder to avoid overwriting during updates.
+4. **Configure Calibre Database**: In the admin interface, set the `Location of Calibre database` to the path of the folder containing your Calibre library (where `metadata.db` is located) and click "Save".
+5. **Google Drive Integration**: For hosting your Calibre library on Google Drive, refer to the [Google Drive integration guide](https://github.com/janeczku/calibre-web/wiki/G-Drive-Setup#using-google-drive-integration).
+6. **Admin Configuration**: Configure your instance via the admin page, referring to the [Basic Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#basic-configuration) and [UI Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#ui-configuration) guides.
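As an editorial side note on the Database Setup step, a minimal sketch of fetching the sample database into a separate library folder; the folder path is only an example:
```
mkdir -p ~/calibre-library
wget https://github.com/janeczku/calibre-web/raw/master/library/metadata.db -O ~/calibre-library/metadata.db
```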
## Requirements
-- Python 3.7+
-- [Imagemagick](https://imagemagick.org/script/download.php) for cover extraction from EPUBs (Windows users may need to install [Ghostscript](https://ghostscript.com/releases/gsdnld.html) for PDF cover extraction)
-- Windows users need to install [libmagic for 32bit python](https://gnuwin32.sourceforge.net/downlinks/file.php) or [libmagic for 64bit python](https://github.com/nscaife/file-windows/releases/tag/20170108), depending on the python version; The files need to be installed in path (e.g. script folder of your Calibre-Web venv, or in the root folder of Calibre-Web
-- Optional: [Calibre desktop program](https://calibre-ebook.com/download) for on-the-fly conversion and metadata editing (set "calibre's converter tool" path on the setup page)
-- Optional: [Kepubify tool](https://github.com/pgaskin/kepubify/releases/latest) for Kobo device support (place the binary in `/opt/kepubify` on Linux or `C:\Program Files\kepubify` on Windows)
+- **Python Version**: Ensure you have Python 3.7 or newer.
+- **Imagemagick**: Required for cover extraction from EPUBs. Windows users may also need to install [Ghostscript](https://ghostscript.com/releases/gsdnld.html) for PDF cover extraction.
+- **Optional Tools**:
+  - **Calibre desktop program**: Recommended for on-the-fly conversion and metadata editing. Set the path to Calibre's converter tool on the setup page.
+  - **Kepubify tool**: Needed for Kobo device support. Download the tool and place the binary in `/opt/kepubify` on Linux or `C:\Program Files\kepubify` on Windows.
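As an editorial side note on the Kepubify entry, a sketch of placing the binary on Linux; the release asset name is an assumption and may differ, so check the linked releases page first:
```
# download kepubify into /opt/kepubify and make it executable (asset name is assumed)
sudo mkdir -p /opt/kepubify
sudo wget https://github.com/pgaskin/kepubify/releases/latest/download/kepubify-linux-64bit -O /opt/kepubify/kepubify
sudo chmod +x /opt/kepubify/kepubify
```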
## Docker Images
-Pre-built Docker images are available in the following Docker Hub repositories (maintained by the LinuxServer team):
-#### **LinuxServer - x64, aarch64**
-- [Docker Hub](https://hub.docker.com/r/linuxserver/calibre-web)
-- [GitHub](https://github.com/linuxserver/docker-calibre-web)
-- [GitHub - Optional Calibre layer](https://github.com/linuxserver/docker-mods/tree/universal-calibre)
-Include the environment variable `DOCKER_MODS=linuxserver/mods:universal-calibre` in your Docker run/compose file to add the Calibre `ebook-convert` binary (x64 only). Omit this variable for a lightweight image.
-Both the Calibre-Web and Calibre-Mod images are automatically rebuilt on new releases and updates.
-- Set "Path to Calibre Binaries" to `/usr/bin`
-- Set "Path to Unrar" to `/usr/bin/unrar`
+Pre-built Docker images are available:
+### **LinuxServer - x64, aarch64**
+- **Docker Hub**: [linuxserver/calibre-web](https://hub.docker.com/r/linuxserver/calibre-web)
+- **GitHub**: [linuxserver/docker-calibre-web](https://github.com/linuxserver/docker-calibre-web)
+- **Optional Calibre layer**: [linuxserver/docker-mods](https://github.com/linuxserver/docker-mods/tree/universal-calibre)
+To include the Calibre `ebook-convert` binary (x64 only), add the environment variable:
+```
+DOCKER_MODS=linuxserver/mods:universal-calibre
+```
+in your Docker run/compose file. Omit this variable for a lightweight image.
+- **Paths Configuration**:
+  - Set **Path to Calibre Binaries** to `/usr/bin`.
+  - Set **Path to Unrar** to `/usr/bin/unrar`.
+## Troubleshooting
+- **Common Issues**:
+  - If you experience issues starting the application, check the log files located in the `logs` directory for error messages.
+  - If eBooks fail to load, verify that the `Location of Calibre database` is correctly set and that the database file is accessible.
+- **Configuration Errors**: Ensure that your Calibre database is compatible and properly formatted. Refer to the Calibre documentation for guidance on maintaining the database.
+- **Performance Problems**:
+  - If the application is slow, consider increasing the allocated resources (CPU/RAM) to your server or optimizing the Calibre database by removing duplicates and unnecessary entries.
+  - Regularly clear the cache in your web browser to improve loading times.
+- **User Management Issues**: If users are unable to log in or register, check the user permission settings in the admin interface. Ensure that registration is enabled and that users are being assigned appropriate roles.
+- **Support Resources**: For additional help, consider visiting the [FAQ section](https://github.com/janeczku/calibre-web/wiki/FAQ) of the wiki or posting your questions in the [Discord community](https://discord.gg/h2VsJ2NEfB).
## Contributor Recognition
@@ -123,4 +166,21 @@ For more information, How To's, and FAQs, please visit the [Wiki](https://github
## Contributing to Calibre-Web
-Check out our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)
+To contribute, please check our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md). We welcome issues, feature requests, and pull requests from the community.
+### Reporting Bugs
+If you encounter bugs or issues, please report them in the [issues section](https://github.com/janeczku/calibre-web/issues) of the repository. Be sure to include detailed information about your setup and the problem encountered.
+### Feature Requests
+We welcome suggestions for new features. Please create a new issue in the repository to discuss your ideas.
+## Additional Resources
+- **Documentation**: Comprehensive documentation is available on the [Calibre-Web wiki](https://github.com/janeczku/calibre-web/wiki).
+- **Community Contributions**: Explore the [community contributions](https://github.com/janeczku/calibre-web/pulls) to see ongoing work and how you can get involved.
+---
+Thank you for using Calibre-Web! We hope you enjoy managing your eBook library with our tool.

View File

@@ -10,41 +10,46 @@ To receive fixes for security vulnerabilities it is required to always upgrade t
## History
| Fixed in | Description | CVE number |
|----------|-------------|------------|
| 3rd July 2018 | Guest access acts as a backdoor | |
| V 0.6.7 | Hardcoded secret key for sessions | CVE-2020-12627 |
| V 0.6.13 | Calibre-Web Metadata cross site scripting | CVE-2021-25964 |
| V 0.6.13 | Name of Shelves are only visible to users who can access the corresponding shelf Thanks to @ibarrionuevo | |
| V 0.6.13 | JavaScript could get executed in the description field. Thanks to @ranjit-git and Hagai Wechsler (WhiteSource) | |
| V 0.6.13 | JavaScript could get executed in a custom column of type "comment" field | |
| V 0.6.13 | JavaScript could get executed after converting a book to another format with a title containing javascript code | |
| V 0.6.13 | JavaScript could get executed after converting a book to another format with a username containing javascript code | |
| V 0.6.13 | JavaScript could get executed in the description series, categories or publishers title | |
| V 0.6.13 | JavaScript could get executed in the shelf title | |
| V 0.6.13 | Login with the old session cookie after logout. Thanks to @ibarrionuevo | |
| V 0.6.14 | CSRF was possible. Thanks to @mik317 and Hagai Wechsler (WhiteSource) | CVE-2021-25965 |
| V 0.6.14 | Migrated some routes to POST-requests (CSRF protection). Thanks to @scara31 | CVE-2021-4164 |
| V 0.6.15 | Fix for "javascript:" script links in identifier. Thanks to @scara31 | CVE-2021-4170 |
| V 0.6.15 | Cross-Site Scripting vulnerability on uploaded cover file names. Thanks to @ibarrionuevo | |
| V 0.6.15 | Creating public shelfs is now denied if user is missing the edit public shelf right. Thanks to @ibarrionuevo | |
| V 0.6.15 | Changed error message in case of trying to delete a shelf unauthorized. Thanks to @ibarrionuevo | |
| V 0.6.16 | JavaScript could get executed on authors page. Thanks to @alicaz | CVE-2022-0352 |
| V 0.6.16 | Localhost can no longer be used to upload covers. Thanks to @scara31 | CVE-2022-0339 |
| V 0.6.16 | Another case where public shelfs could be created without permission is prevented. Thanks to @nhiephon | CVE-2022-0273 |
| V 0.6.16 | It's prevented to get the name of a private shelfs. Thanks to @nhiephon | CVE-2022-0405 |
| V 0.6.17 | The SSRF Protection can no longer be bypassed via an HTTP redirect. Thanks to @416e6e61 | CVE-2022-0767 |
| V 0.6.17 | The SSRF Protection can no longer be bypassed via 0.0.0.0 and it's ipv6 equivalent. Thanks to @r0hanSH | CVE-2022-0766 |
| V 0.6.18 | Possible SQL Injection is prevented in user table Thanks to Iman Sharafaldin (Forward Security) | CVE-2022-30765 |
| V 0.6.18 | The SSRF protection no longer can be bypassed by IPV6/IPV4 embedding. Thanks to @416e6e61 | CVE-2022-0939 |
| V 0.6.18 | The SSRF protection no longer can be bypassed to connect to other servers in the local network. Thanks to @michaellrowley | CVE-2022-0990 |
| V 0.6.20 | Credentials for emails are now stored encrypted | |
| V 0.6.20 | Login is rate limited | |
| V 0.6.20 | Passwordstrength can be forced | |
| V 0.6.21 | SMTP server credentials are no longer returned to client | |
| V 0.6.21 | Cross-site scripting (XSS) stored in href bypasses filter using data wrapper no longer possible | |
| V 0.6.21 | Cross-site scripting (XSS) is no longer possible via pathchooser | |
| V 0.6.21 | Error Handling at non existent rating, language, and user downloaded books was fixed | |
| V 0.6.22 | Upload mimetype is checked to prevent malicious file content in the books library | |
| V 0.6.22 | Cross-site scripting (XSS) stored in comments section is prevented better (switching from lxml to bleach for sanitizing strings) | |
| V 0.6.23 | Cookies are no longer stored for opds basic authentication and proxy authentication | |
## Statement regarding Log4j (CVE-2021-44228 and related)

View File

@@ -35,7 +35,6 @@ from .reverseproxy import ReverseProxied
from .server import WebServer
from .dep_check import dependency_check
from .updater import Updater
-from .babel import babel, get_locale
from . import config_sql
from . import cache_buster
from . import ub, db
@@ -56,35 +55,41 @@ mimetypes.init()
mimetypes.add_type('application/xhtml+xml', '.xhtml')
mimetypes.add_type('application/epub+zip', '.epub')
mimetypes.add_type('application/epub+zip', '.kepub')
-mimetypes.add_type('text/xml', '.fb2')
+mimetypes.add_type('application/fb2+zip', '.fb2')
-mimetypes.add_type('application/octet-stream', '.mobi')
+mimetypes.add_type('application/x-mobipocket-ebook', '.mobi')
mimetypes.add_type('application/octet-stream', '.prc')
-mimetypes.add_type('application/vnd.amazon.ebook', '.azw')
+mimetypes.add_type('application/x-mobipocket-ebook', '.azw')
-mimetypes.add_type('application/x-mobi8-ebook', '.azw3')
+mimetypes.add_type('application/x-mobipocket-ebook', '.azw3')
-mimetypes.add_type('application/x-rar', '.cbr')
+mimetypes.add_type('application/x-cbr', '.cbr')
-mimetypes.add_type('application/zip', '.cbz')
+mimetypes.add_type('application/x-cbz', '.cbz')
mimetypes.add_type('application/x-tar', '.cbt')
mimetypes.add_type('application/x-7z-compressed', '.cb7')
-mimetypes.add_type('image/vnd.djv', '.djv')
+mimetypes.add_type('image/vnd.djvu', '.djv')
-mimetypes.add_type('image/vnd.djv', '.djvu')
+mimetypes.add_type('image/vnd.djvu', '.djvu')
mimetypes.add_type('application/mpeg', '.mpeg')
mimetypes.add_type('audio/mpeg', '.mp3')
mimetypes.add_type('audio/x-m4a', '.m4a')
mimetypes.add_type('audio/x-m4a', '.m4b')
+mimetypes.add_type('audio/x-hx-aac-adts', '.aac')
+mimetypes.add_type('audio/vnd.dolby.dd-raw', '.ac3')
+mimetypes.add_type('video/x-ms-asf', '.asf')
mimetypes.add_type('audio/ogg', '.ogg')
mimetypes.add_type('application/ogg', '.oga')
mimetypes.add_type('text/css', '.css')
mimetypes.add_type('application/x-ms-reader', '.lit')
-mimetypes.add_type('text/javascript; charset=UTF-8', '.js')
+mimetypes.add_type('text/javascript', '.js')
+mimetypes.add_type('text/rtf', '.rtf')
log = logger.create()
app = Flask(__name__)
app.config.update(
    SESSION_COOKIE_HTTPONLY=True,
-   SESSION_COOKIE_SAMESITE='Strict',
-   REMEMBER_COOKIE_SAMESITE='Strict',  # will be available in flask-login 0.5.1 earliest
-   WTF_CSRF_SSL_STRICT=False
+   SESSION_COOKIE_SAMESITE='Lax',
+   REMEMBER_COOKIE_SAMESITE='Strict',
+   WTF_CSRF_SSL_STRICT=False,
+   SESSION_COOKIE_NAME=os.environ.get('COOKIE_PREFIX', "") + "session",
+   REMEMBER_COOKIE_NAME=os.environ.get('COOKIE_PREFIX', "") + "remember_token"
)
lm = MyLoginManager()
@@ -98,7 +103,7 @@ if wtf_present:
else:
    csrf = None
-calibre_db = db.CalibreDB()
+calibre_db = db.CalibreDB(app)
web_server = WebServer()
@@ -142,9 +147,7 @@ def create_app():
lm.anonymous_user = ub.Anonymous
lm.session_protection = 'strong' if config.config_session == 1 else "basic"
-db.CalibreDB.update_config(config)
-db.CalibreDB.setup_db(config.config_calibre_dir, cli_param.settings_path)
-calibre_db.init_db()
+db.CalibreDB.update_config(config, config.config_calibre_dir, cli_param.settings_path)
updater_thread.init_updater(config, web_server)
# Perform dry run of updater and exit afterward
@@ -177,6 +180,7 @@ def create_app():
app.secret_key = os.getenv('SECRET_KEY', config_sql.get_flask_session_key(ub.session))
web_server.init_app(app, config)
+from .cw_babel import babel, get_locale
if hasattr(babel, "localeselector"):
    babel.init_app(app)
    babel.localeselector(get_locale)

View File

@@ -23,10 +23,10 @@
import sys
import platform
import sqlite3
+from importlib.metadata import metadata
from collections import OrderedDict
import flask
-import jinja2
from flask_babel import gettext as _
from . import db, calibre_db, converter, uploader, constants, dep_check
@@ -41,17 +41,18 @@ req = dep_check.load_dependencies(False)
opt = dep_check.load_dependencies(True)
for i in (req + opt):
    modules[i[1]] = i[0]
-modules['Jinja2'] = jinja2.__version__
-modules['pySqlite'] = sqlite3.version
+modules['Jinja2'] = metadata("jinja2")["Version"]
+if sys.version_info < (3, 12):
+    modules['pySqlite'] = sqlite3.version
modules['SQLite'] = sqlite3.sqlite_version
sorted_modules = OrderedDict((sorted(modules.items(), key=lambda x: x[0].casefold())))
def collect_stats():
    if constants.NIGHTLY_VERSION[0] == "$Format:%H$":
-       calibre_web_version = constants.STABLE_VERSION['version'].replace("b", " Beta")
+       calibre_web_version = constants.STABLE_VERSION.replace("b", " Beta")
    else:
-       calibre_web_version = (constants.STABLE_VERSION['version'].replace("b", " Beta") + ' - '
+       calibre_web_version = (constants.STABLE_VERSION.replace("b", " Beta") + ' - '
                               + constants.NIGHTLY_VERSION[0].replace('%', '%%') + ' - '
                               + constants.NIGHTLY_VERSION[1].replace('%', '%%'))

View File

@@ -32,7 +32,8 @@ from datetime import time as datetime_time
from functools import wraps
from urllib.parse import urlparse
-from flask import Blueprint, flash, redirect, url_for, abort, request, make_response, send_from_directory, g, Response
+from flask import Blueprint, flash, redirect, url_for, abort, request, make_response, \
+    send_from_directory, g, jsonify
from markupsafe import Markup
from .cw_login import current_user
from flask_babel import gettext as _
@@ -52,8 +53,9 @@ from .gdriveutils import is_gdrive_ready, gdrive_support
from .render_template import render_title_template, get_sidebar_config
from .services.worker import WorkerThread
from .usermanagement import user_login_required
-from .babel import get_available_translations, get_available_locale, get_user_locale_language
+from .cw_babel import get_available_translations, get_available_locale, get_user_locale_language
from . import debug_info
+from .string_helper import strip_whitespaces
log = logger.create()
@@ -117,7 +119,7 @@ def before_request():
g.allow_upload = config.config_uploading
g.current_theme = config.config_theme
g.config_authors_max = config.config_authors_max
-if '/static/' not in request.path and not config.db_configured and \
+if ('/static/' not in request.path and not config.db_configured and
        request.endpoint not in ('admin.ajax_db_config',
                                 'admin.simulatedbchange',
                                 'admin.db_configuration',
@@ -125,7 +127,7 @@ def before_request():
                                 'web.login_post',
                                 'web.logout',
                                 'admin.load_dialogtexts',
-                                'admin.ajax_pathchooser'):
+                                'admin.ajax_pathchooser')):
    return redirect(url_for('admin.db_configuration'))
@@ -143,7 +145,6 @@ def shutdown():
show_text = {}
if task in (0, 1):  # valid commandos received
    # close all database connections
-   calibre_db.dispose()
    ub.dispose()
    if task == 0:
@@ -305,7 +306,15 @@ def edit_user_table():
    .group_by(text('books_tags_link.tag')) \
    .order_by(db.Tags.name).all()
if config.config_restricted_column:
-   custom_values = calibre_db.session.query(db.cc_classes[config.config_restricted_column]).all()
+   try:
+       custom_values = calibre_db.session.query(db.cc_classes[config.config_restricted_column]).all()
+   except (KeyError, AttributeError, IndexError):
+       custom_values = []
+       log.error("Custom Column No.{} does not exist in calibre database".format(
+           config.config_restricted_column))
+       flash(_("Custom Column No.%(column)d does not exist in calibre database",
+               column=config.config_restricted_column),
+             category="error")
else:
    custom_values = []
if not config.config_anonbrowse:
@@ -370,10 +379,7 @@ def list_users():
    user.default = get_user_locale_language(user.default_language)
table_entries = {'totalNotFiltered': total_count, 'total': filtered_count, "rows": users}
-js_list = json.dumps(table_entries, cls=db.AlchemyEncoder)
-response = make_response(js_list)
-response.headers["Content-Type"] = "application/json; charset=utf-8"
-return response
+return make_response(json.dumps(table_entries, cls=db.AlchemyEncoder))
@admi.route("/ajax/deleteuser", methods=['POST'])
@@ -392,7 +398,7 @@ def delete_user():
success = list()
if not users:
    log.error("User not found")
-   return Response(json.dumps({'type': "danger", 'message': _("User not found")}), mimetype='application/json')
+   return make_response(jsonify(type="danger", message=_("User not found")))
for user in users:
    try:
        message = _delete_user(user)
@@ -408,7 +414,7 @@ def delete_user():
log.info("Users {} deleted".format(user_ids)) log.info("Users {} deleted".format(user_ids))
success = [{'type': "success", 'message': _("{} users deleted successfully").format(count)}] success = [{'type': "success", 'message': _("{} users deleted successfully").format(count)}]
success.extend(errors) success.extend(errors)
return Response(json.dumps(success), mimetype='application/json') return make_response(jsonify(success))
@admi.route("/ajax/getlocale") @admi.route("/ajax/getlocale")
@@ -420,7 +426,7 @@ def table_get_locale():
current_locale = get_locale()
for loc in locale:
    ret.append({'value': str(loc), 'text': loc.get_language_name(current_locale)})
-return json.dumps(ret)
+return json.dumps(sorted(ret, key=lambda x: x['text']))
@admi.route("/ajax/getdefaultlanguage")
@@ -432,7 +438,7 @@ def table_get_default_lang():
ret.append({'value': 'all', 'text': _('Show All')})
for lang in languages:
    ret.append({'value': lang.lang_code, 'text': lang.name})
-return json.dumps(ret)
+return json.dumps(sorted(ret, key=lambda x: x['text']))
@admi.route("/ajax/editlistusers/<param>", methods=['POST'])
@@ -463,9 +469,9 @@ def edit_list_user(param):
if 'value[]' in vals:
    setattr(user, param, prepare_tags(user, vals['action'][0], param, vals['value[]']))
else:
-   setattr(user, param, vals['value'].strip())
+   setattr(user, param, strip_whitespaces(vals['value']))
else:
-   vals['value'] = vals['value'].strip()
+   vals['value'] = strip_whitespaces(vals['value'])
    if param == 'name':
        if user.name == "Guest":
            raise Exception(_("Guest Name can't be changed"))
@@ -490,10 +496,10 @@ def edit_list_user(param):
if not ub.session.query(ub.User). \
        filter(ub.User.role.op('&')(constants.ROLE_ADMIN) == constants.ROLE_ADMIN,
               ub.User.id != user.id).count():
-   return Response(
-       json.dumps([{'type': "danger",
-                    'message': _("No admin user remaining, can't remove admin role",
-                                 nick=user.name)}]), mimetype='application/json')
+   return make_response(
+       jsonify([{'type': "danger",
+                 'message': _("No admin user remaining, can't remove admin role",
+                              nick=user.name)}]))
user.role &= ~value
else:
    raise Exception(_("Value has to be true or false"))
@@ -566,7 +572,7 @@ def update_view_configuration():
_config_string(to_save, "config_calibre_web_title") _config_string(to_save, "config_calibre_web_title")
_config_string(to_save, "config_columns_to_ignore") _config_string(to_save, "config_columns_to_ignore")
if _config_string(to_save, "config_title_regex"): if _config_string(to_save, "config_title_regex"):
calibre_db.update_title_sort(config) calibre_db.create_functions(config)
if not check_valid_read_column(to_save.get("config_read_column", "0")): if not check_valid_read_column(to_save.get("config_read_column", "0")):
flash(_("Invalid Read Column"), category="error") flash(_("Invalid Read Column"), category="error")
@@ -690,7 +696,7 @@ def delete_domain():
def list_domain(allow):
    answer = ub.session.query(ub.Registration).filter(ub.Registration.allow == allow).all()
    json_dumps = json.dumps([{"domain": r.domain.replace('%', '*').replace('_', '?'), "id": r.id} for r in answer])
-   js = json.dumps(json_dumps.replace('"', "'")).lstrip('"').strip('"')
+   js = json.dumps(json_dumps.replace('"', "'")).strip('"')
    response = make_response(js.replace("'", '"'))
    response.headers["Content-Type"] = "application/json; charset=utf-8"
    return response
@@ -939,7 +945,7 @@ def do_full_kobo_sync(userid):
count = ub.session.query(ub.KoboSyncedBooks).filter(userid == ub.KoboSyncedBooks.user_id).delete()
message = _("{} sync entries deleted").format(count)
ub.session_commit(message)
-return Response(json.dumps([{"type": "success", "message": message}]), mimetype='application/json')
+return make_response(jsonify(type="success", message=message))
def check_valid_read_column(column):
@@ -981,8 +987,14 @@ def prepare_tags(user, action, tags_name, id_list):
raise Exception(_("Tag not found")) raise Exception(_("Tag not found"))
new_tags_list = [x.name for x in tags] new_tags_list = [x.name for x in tags]
else: else:
tags = calibre_db.session.query(db.cc_classes[config.config_restricted_column]) \ try:
.filter(db.cc_classes[config.config_restricted_column].id.in_(id_list)).all() tags = calibre_db.session.query(db.cc_classes[config.config_restricted_column]) \
.filter(db.cc_classes[config.config_restricted_column].id.in_(id_list)).all()
except (KeyError, AttributeError, IndexError):
log.error("Custom Column No.{} does not exist in calibre database".format(
config.config_restricted_column))
raise Exception(_("Custom Column No.%(column)d does not exist in calibre database",
column=config.config_restricted_column))
new_tags_list = [x.value for x in tags] new_tags_list = [x.value for x in tags]
saved_tags_list = user.__dict__[tags_name].split(",") if len(user.__dict__[tags_name]) else [] saved_tags_list = user.__dict__[tags_name].split(",") if len(user.__dict__[tags_name]) else []
if action == "remove": if action == "remove":
@@ -1100,7 +1112,7 @@ def _config_checkbox_int(to_save, x):
def _config_string(to_save, x):
-   return config.set_from_dictionary(to_save, x, lambda y: y.strip().strip(u'\u200B\u200C\u200D\ufeff') if y else y)
+   return config.set_from_dictionary(to_save, x, lambda y: strip_whitespaces(y) if y else y)
def _configuration_gdrive_helper(to_save):
@@ -1250,7 +1262,7 @@ def _configuration_ldap_helper(to_save):
@admin_required
def simulatedbchange():
    db_change, db_valid = _db_simulate_change()
-   return Response(json.dumps({"change": db_change, "valid": db_valid}), mimetype='application/json')
+   return make_response(jsonify(change=db_change, valid=db_valid))
@admi.route("/admin/user/new", methods=["GET", "POST"])
@@ -1311,9 +1323,9 @@ def update_mailsettings():
if to_save.get("mail_password_e", ""): if to_save.get("mail_password_e", ""):
_config_string(to_save, "mail_password_e") _config_string(to_save, "mail_password_e")
_config_int(to_save, "mail_size", lambda y: int(y) * 1024 * 1024) _config_int(to_save, "mail_size", lambda y: int(y) * 1024 * 1024)
config.mail_server = to_save.get('mail_server', "").strip() config.mail_server = strip_whitespaces(to_save.get('mail_server', ""))
config.mail_from = to_save.get('mail_from', "").strip() config.mail_from = strip_whitespaces(to_save.get('mail_from', ""))
config.mail_login = to_save.get('mail_login', "").strip() config.mail_login = strip_whitespaces(to_save.get('mail_login', ""))
try: try:
config.save() config.save()
except (OperationalError, InvalidRequestError) as e: except (OperationalError, InvalidRequestError) as e:
@@ -1678,10 +1690,10 @@ def cancel_task():
def _db_simulate_change():
    param = request.form.to_dict()
    to_save = dict()
-   to_save['config_calibre_dir'] = re.sub(r'[\\/]metadata\.db$',
-                                          '',
-                                          param['config_calibre_dir'],
-                                          flags=re.IGNORECASE).strip()
+   to_save['config_calibre_dir'] = strip_whitespaces(re.sub(r'[\\/]metadata\.db$',
+                                                            '',
+                                                            param['config_calibre_dir'],
+                                                            flags=re.IGNORECASE))
    db_valid, db_change = calibre_db.check_valid_db(to_save["config_calibre_dir"],
                                                    ub.app_DB_path,
                                                    config.config_calibre_uuid)
@@ -1715,14 +1727,19 @@ def _db_configuration_update_helper():
        db_change = True
    except Exception as ex:
        return _db_configuration_result('{}'.format(ex), gdrive_error)
-   if db_change or not db_valid or not config.db_configured \
-           or config.config_calibre_dir != to_save["config_calibre_dir"]:
+   config.config_calibre_split = to_save.get('config_calibre_split', 0) == "on"
+   if config.config_calibre_split:
+       split_dir = to_save.get("config_calibre_split_dir")
+       if not os.path.exists(split_dir):
+           return _db_configuration_result(_("Books path not valid"), gdrive_error)
+       else:
+           _config_string(to_save, "config_calibre_split_dir")
+   if (db_change or not db_valid or not config.db_configured
+           or config.config_calibre_dir != to_save["config_calibre_dir"]):
        if not os.path.exists(metadata_db) or not to_save['config_calibre_dir']:
            return _db_configuration_result(_('DB Location is not Valid, Please Enter Correct Path'), gdrive_error)
        else:
            calibre_db.setup_db(to_save['config_calibre_dir'], ub.app_DB_path)
+           config.store_calibre_uuid(calibre_db, db.Library_Id)
        # if db changed -> delete shelfs, delete download books, delete read books, kobo sync...
        if db_change:
            log.info("Calibre Database changed, all Calibre-Web info related to old Database gets deleted")
@@ -1736,13 +1753,20 @@ def _db_configuration_update_helper():
            ub.session.query(ub.KoboSyncedBooks).delete()
            helper.delete_thumbnail_cache()
            ub.session_commit()
+           # deleted visibilities based on custom column and tags
+           config.config_restricted_column = 0
+           config.config_denied_tags = ""
+           config.config_allowed_tags = ""
+           config.config_columns_to_ignore = ""
+           config.config_denied_column_value = ""
+           config.config_allowed_column_value = ""
+           config.config_read_column = 0
        _config_string(to_save, "config_calibre_dir")
-       calibre_db.update_config(config)
-       config.store_calibre_uuid(calibre_db, db.Library_Id)
+       calibre_db.update_config(config, config.config_calibre_dir, ub.app_DB_path)
        if not os.access(os.path.join(config.config_calibre_dir, "metadata.db"), os.W_OK):
            flash(_("DB is not Writeable"), category="warning")
-   _config_string(to_save, "config_calibre_split_dir")
-   config.config_calibre_split = to_save.get('config_calibre_split', 0) == "on"
-   calibre_db.update_config(config)
+   calibre_db.update_config(config, config.config_calibre_dir, ub.app_DB_path)
    config.save()
    return _db_configuration_result(None, gdrive_error)
@@ -1775,9 +1799,8 @@ def _configuration_update_helper():
if "config_upload_formats" in to_save: if "config_upload_formats" in to_save:
to_save["config_upload_formats"] = ','.join( to_save["config_upload_formats"] = ','.join(
helper.uniq([x.lstrip().rstrip().lower() for x in to_save["config_upload_formats"].split(',')])) helper.uniq([x.strip().lower() for x in to_save["config_upload_formats"].split(',')]))
_config_string(to_save, "config_upload_formats") _config_string(to_save, "config_upload_formats")
# constants.EXTENSIONS_UPLOAD = config.config_upload_formats.split(',')
_config_string(to_save, "config_calibre") _config_string(to_save, "config_calibre")
_config_string(to_save, "config_binariesdir") _config_string(to_save, "config_binariesdir")
@@ -1871,7 +1894,7 @@ def _configuration_result(error_flash=None, reboot=False):
    resp['result'] = [{'type': "success", 'message': _("Calibre-Web configuration updated")}]
    resp['reboot'] = reboot
    resp['config_upload'] = config.config_upload_formats
-   return Response(json.dumps(resp), mimetype='application/json')
+   return make_response(jsonify(resp))
def _db_configuration_result(error_flash=None, gdrive_error=None):
@@ -2076,7 +2099,7 @@ def _handle_edit_user(to_save, content, languages, translations, kobo_support):
def extract_user_data_from_field(user, field):
-   match = re.search(field + r"=([@\.\d\s\w-]+)", user, re.IGNORECASE | re.UNICODE)
+   match = re.search(field + r"=(.*?)($|(?<!\\),)", user, re.IGNORECASE | re.UNICODE)
    if match:
        return match.group(1)
    else:
@@ -2084,7 +2107,7 @@ def extract_user_data_from_field(user, field):
def extract_dynamic_field_from_filter(user, filtr):
-   match = re.search("([a-zA-Z0-9-]+)=%s", filtr, re.IGNORECASE | re.UNICODE)
+   match = re.search(r"([a-zA-Z0-9-]+)=%s", filtr, re.IGNORECASE | re.UNICODE)
    if match:
        return match.group(1)
    else:

cps/audio.py (new file, 147 lines)

@@ -0,0 +1,147 @@
# -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2024 Ozzieisaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import mutagen
import base64
from . import cover, logger
from cps.constants import BookMeta
log = logger.create()
def get_audio_file_info(tmp_file_path, original_file_extension, original_file_name, no_cover_processing):
tmp_cover_name = None
audio_file = mutagen.File(tmp_file_path)
comments = None
if original_file_extension in [".mp3", ".wav", ".aiff"]:
cover_data = list()
for key, val in audio_file.tags.items():
if key.startswith("APIC:"):
cover_data.append(val)
if key.startswith("COMM:"):
comments = val.text[0]
title = audio_file.tags.get('TIT2').text[0] if "TIT2" in audio_file.tags else None
author = audio_file.tags.get('TPE1').text[0] if "TPE1" in audio_file.tags else None
if author is None:
author = audio_file.tags.get('TPE2').text[0] if "TPE2" in audio_file.tags else None
tags = audio_file.tags.get('TCON').text[0] if "TCON" in audio_file.tags else None # Genre
series = audio_file.tags.get('TALB').text[0] if "TALB" in audio_file.tags else None# Album
series_id = audio_file.tags.get('TRCK').text[0] if "TRCK" in audio_file.tags else None # track no.
publisher = audio_file.tags.get('TPUB').text[0] if "TPUB" in audio_file.tags else None
pubdate = str(audio_file.tags.get('TDRL').text[0]) if "TDRL" in audio_file.tags else None
if not pubdate:
pubdate = str(audio_file.tags.get('TDRC').text[0]) if "TDRC" in audio_file.tags else None
if not pubdate:
pubdate = str(audio_file.tags.get('TDOR').text[0]) if "TDOR" in audio_file.tags else None
if cover_data and not no_cover_processing:
cover_info = cover_data[0]
for dat in cover_data:
if dat.type == mutagen.id3.PictureType.COVER_FRONT:
cover_info = dat
break
tmp_cover_name = cover.cover_processing(tmp_file_path, cover_info.data, "." + cover_info.mime[-3:])
elif original_file_extension in [".ogg", ".flac", ".opus", ".ogv"]:
title = audio_file.tags.get('TITLE')[0] if "TITLE" in audio_file else None
author = audio_file.tags.get('ARTIST')[0] if "ARTIST" in audio_file else None
comments = audio_file.tags.get('COMMENTS')[0] if "COMMENTS" in audio_file else None
tags = audio_file.tags.get('GENRE')[0] if "GENRE" in audio_file else None # Genre
series = audio_file.tags.get('ALBUM')[0] if "ALBUM" in audio_file else None
series_id = audio_file.tags.get('TRACKNUMBER')[0] if "TRACKNUMBER" in audio_file else None
publisher = audio_file.tags.get('LABEL')[0] if "LABEL" in audio_file else None
pubdate = audio_file.tags.get('DATE')[0] if "DATE" in audio_file else None
cover_data = audio_file.tags.get('METADATA_BLOCK_PICTURE')
if not no_cover_processing:
if cover_data:
cover_info = mutagen.flac.Picture(base64.b64decode(cover_data[0]))
tmp_cover_name = cover.cover_processing(tmp_file_path, cover_info.data, "." + cover_info.mime[-3:])
if hasattr(audio_file, "pictures"):
cover_info = audio_file.pictures[0]
for dat in audio_file.pictures:
if dat.type == mutagen.id3.PictureType.COVER_FRONT:
cover_info = dat
break
tmp_cover_name = cover.cover_processing(tmp_file_path, cover_info.data, "." + cover_info.mime[-3:])
elif original_file_extension in [".aac"]:
title = audio_file.tags.get('Title').value if "Title" in audio_file else None
author = audio_file.tags.get('Artist').value if "Artist" in audio_file else None
comments = audio_file.tags.get('Comment').value if "Comment" in audio_file else None
tags = audio_file.tags.get('Genre').value if "Genre" in audio_file else None
series = audio_file.tags.get('Album').value if "Album" in audio_file else None
series_id = audio_file.tags.get('Track').value if "Track" in audio_file else None
publisher = audio_file.tags.get('Label').value if "Label" in audio_file else None
pubdate = audio_file.tags.get('Year').value if "Year" in audio_file else None
cover_data = audio_file.tags.get('Cover Art (Front)', None)
if cover_data and not no_cover_processing:
tmp_cover_name = tmp_file_path + '.jpg'
with open(tmp_cover_name, "wb") as cover_file:
cover_file.write(cover_data.value.split(b"\x00", 1)[1])
elif original_file_extension in [".asf"]:
title = audio_file.tags.get('Title')[0].value if "Title" in audio_file else None
author = audio_file.tags.get('Artist')[0].value if "Artist" in audio_file else None
comments = audio_file.tags.get('Comments')[0].value if "Comments" in audio_file else None
tags = audio_file.tags.get('Genre')[0].value if "Genre" in audio_file else None
series = audio_file.tags.get('Album')[0].value if "Album" in audio_file else None
series_id = audio_file.tags.get('Track')[0].value if "Track" in audio_file else None
publisher = audio_file.tags.get('Label')[0].value if "Label" in audio_file else None
pubdate = audio_file.tags.get('Year')[0].value if "Year" in audio_file else None
cover_data = audio_file.tags.get('WM/Picture', None)
if cover_data and not no_cover_processing:
tmp_cover_name = tmp_file_path + '.jpg'
with open(tmp_cover_name, "wb") as cover_file:
cover_file.write(cover_data[0].value)
elif original_file_extension in [".mp4", ".m4a", ".m4b"]:
title = audio_file.tags.get('©nam')[0] if "©nam" in audio_file.tags else None
author = audio_file.tags.get('©ART')[0] if "©ART" in audio_file.tags else None
comments = audio_file.tags.get('©cmt')[0] if "©cmt" in audio_file.tags else None
tags = audio_file.tags.get('©gen')[0] if "©gen" in audio_file.tags else None
series = audio_file.tags.get('©alb')[0] if "©alb" in audio_file.tags else None
series_id = str(audio_file.tags.get('trkn')[0][0]) if "trkn" in audio_file.tags else None
publisher = ""
pubdate = audio_file.tags.get('©day')[0] if "©day" in audio_file.tags else None
cover_data = audio_file.tags.get('covr', None)
if cover_data and not no_cover_processing:
cover_type = None
for c in cover_data:
if c.imageformat == mutagen.mp4.AtomDataType.JPEG:
cover_type =".jpg"
cover_bin = c
break
elif c.imageformat == mutagen.mp4.AtomDataType.PNG:
cover_type = ".png"
cover_bin = c
break
if cover_type:
tmp_cover_name = cover.cover_processing(tmp_file_path, cover_bin, cover_type)
else:
logger.error("Unknown covertype in file {} ".format(original_file_name))
return BookMeta(
file_path=tmp_file_path,
extension=original_file_extension,
title=title or original_file_name,
author="Unknown" if author is None else author,
cover=tmp_cover_name,
description="" if comments is None else comments,
tags="" if tags is None else tags,
series="" if series is None else series,
series_id="1" if series_id is None else series_id.split("/")[0],
languages="",
publisher="" if publisher is None else publisher,
pubdate="" if pubdate is None else pubdate,
identifiers=[],
)
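A rough sketch of how this helper might be called from the upload path; the temporary file name is hypothetical and the actual caller is not part of this hunk:

from cps.audio import get_audio_file_info

# Returns a BookMeta namedtuple; cover extraction can be skipped entirely.
meta = get_audio_file_info("/tmp/calibre_web_upload", ".mp3", "audiobook_part_01",
                           no_cover_processing=False)
print(meta.title, meta.author, meta.series, meta.series_id)
if meta.cover:
    print("extracted cover written to", meta.cover)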

90
cps/basic.py Normal file
View File

@@ -0,0 +1,90 @@
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 OzzieIsaacs, cervinko, jkrehm, bodybybuddha, ok11,
# andy29485, idalin, Kyosfonica, wuqi, Kennyl, lemmsh,
# falgh1, grunjol, csitko, ytils, xybydy, trasba, vrabe,
# ruben-herold, marblepebble, JackED42, SiphonSquirrel,
# apetresc, nanu-c, mutschler, carderne
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from cps.pagination import Pagination
from flask import Blueprint
from flask_babel import gettext as _
from flask_babel import get_locale
from flask import request, redirect, url_for
from . import logger, isoLanguages
from . import db, config
from . import calibre_db
from .usermanagement import login_required_if_no_ano
from .render_template import render_title_template
from .web import get_sort_function
try:
from natsort import natsorted as sort
except ImportError:
sort = sorted # Just use regular sort then, may cause issues with badly named pages in cbz/cbr files
basic = Blueprint('basic', __name__)
log = logger.create()
@basic.route("/basic", methods=["GET"])
@login_required_if_no_ano
def index():
term = request.args.get("query", "") # default to showing all books
limit = 15
page = int(request.args.get("page") or 1)
off = (page - 1) * limit
order = get_sort_function("stored", "search")
join = db.books_series_link, db.Books.id == db.books_series_link.c.book, db.Series
entries, result_count, pagination = calibre_db.get_search_results(term,
config,
off,
order,
limit,
*join)
return render_title_template('basic_index.html',
searchterm=term,
pagination=pagination,
query=term,
adv_searchterm=term,
entries=entries,
result_count=result_count,
title=_("Search"),
page="search",
order=order[1])
@basic.route("/basic_book/<int:book_id>")
@login_required_if_no_ano
def show_book(book_id):
entries = calibre_db.get_book_read_archived(book_id, config.config_read_column, allow_show_archived=True)
if entries:
entry = entries[0]
for lang_index in range(0, len(entry.languages)):
entry.languages[lang_index].language_name = isoLanguages.get_language_name(get_locale(), entry.languages[
lang_index].lang_code)
entry.ordered_authors = calibre_db.order_authors([entry])
return render_title_template('basic_detail.html',
entry=entry,
is_xhr=request.headers.get('X-Requested-With') == 'XMLHttpRequest',
title=entry.title,
page="book")
else:
log.debug("Selected book is unavailable. File does not exist or is not accessible")
return redirect(url_for("basic.index"))
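The blueprint only becomes reachable once it is registered on the Flask application; a minimal sketch of that step (the actual registration site is elsewhere in the code base and not shown in this commit):

from flask import Flask
from cps.basic import basic

app = Flask(__name__)
app.register_blueprint(basic)  # exposes /basic and /basic_book/<book_id>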

View File

@@ -29,8 +29,8 @@ from .constants import DEFAULT_SETTINGS_FILE, DEFAULT_GDRIVE_FILE
def version_info(): def version_info():
if _NIGHTLY_VERSION[1].startswith('$Format'): if _NIGHTLY_VERSION[1].startswith('$Format'):
return "Calibre-Web version: %s - unknown git-clone" % _STABLE_VERSION['version'].replace("b", " Beta") return "Calibre-Web version: %s - unknown git-clone" % _STABLE_VERSION.replace("b", " Beta")
return "Calibre-Web version: %s -%s" % (_STABLE_VERSION['version'].replace("b", " Beta"), _NIGHTLY_VERSION[1]) return "Calibre-Web version: %s -%s" % (_STABLE_VERSION.replace("b", " Beta"), _NIGHTLY_VERSION[1])
class CliParameter(object): class CliParameter(object):

View File

@@ -90,7 +90,7 @@ def _extract_cover_from_archive(original_file_extension, tmp_file_name, rar_exec
if len(ext) > 1: if len(ext) > 1:
extension = ext[1].lower() extension = ext[1].lower()
if extension in cover.COVER_EXTENSIONS: if extension in cover.COVER_EXTENSIONS:
cover_data = cf.read([name]) cover_data = cf.read(name)
break break
except Exception as ex: except Exception as ex:
log.error('Rarfile failed with error: {}'.format(ex)) log.error('Rarfile failed with error: {}'.format(ex))
@@ -109,13 +109,13 @@ def _extract_cover_from_archive(original_file_extension, tmp_file_name, rar_exec
return cover_data, extension return cover_data, extension
def _extract_cover(tmp_file_name, original_file_extension, rar_executable): def _extract_cover(tmp_file_path, original_file_extension, rar_executable):
cover_data = extension = None cover_data = extension = None
if use_comic_meta: if use_comic_meta:
try: try:
archive = ComicArchive(tmp_file_name, rar_exe_path=rar_executable) archive = ComicArchive(tmp_file_path, rar_exe_path=rar_executable)
except TypeError: except TypeError:
archive = ComicArchive(tmp_file_name) archive = ComicArchive(tmp_file_path)
name_list = archive.getPageNameList if hasattr(archive, "getPageNameList") else archive.get_page_name_list name_list = archive.getPageNameList if hasattr(archive, "getPageNameList") else archive.get_page_name_list
for index, name in enumerate(name_list()): for index, name in enumerate(name_list()):
ext = os.path.splitext(name) ext = os.path.splitext(name)
@@ -126,11 +126,11 @@ def _extract_cover(tmp_file_name, original_file_extension, rar_executable):
cover_data = get_page(index) cover_data = get_page(index)
break break
else: else:
cover_data, extension = _extract_cover_from_archive(original_file_extension, tmp_file_name, rar_executable) cover_data, extension = _extract_cover_from_archive(original_file_extension, tmp_file_path, rar_executable)
return cover.cover_processing(tmp_file_name, cover_data, extension) return cover.cover_processing(tmp_file_path, cover_data, extension)
def get_comic_info(tmp_file_path, original_file_name, original_file_extension, rar_executable): def get_comic_info(tmp_file_path, original_file_name, original_file_extension, rar_executable, no_cover_processing):
if use_comic_meta: if use_comic_meta:
try: try:
archive = ComicArchive(tmp_file_path, rar_exe_path=rar_executable) archive = ComicArchive(tmp_file_path, rar_exe_path=rar_executable)
@@ -155,14 +155,17 @@ def get_comic_info(tmp_file_path, original_file_name, original_file_extension, r
lang = loaded_metadata.language or "" lang = loaded_metadata.language or ""
loaded_metadata.language = isoLanguages.get_lang3(lang) loaded_metadata.language = isoLanguages.get_lang3(lang)
if not no_cover_processing:
cover_file = _extract_cover(tmp_file_path, original_file_extension, rar_executable)
else:
cover_file = None
return BookMeta( return BookMeta(
file_path=tmp_file_path, file_path=tmp_file_path,
extension=original_file_extension, extension=original_file_extension,
title=loaded_metadata.title or original_file_name, title=loaded_metadata.title or original_file_name,
author=" & ".join([credit["person"] author=" & ".join([credit["person"]
for credit in loaded_metadata.credits if credit["role"] == "Writer"]) or 'Unknown', for credit in loaded_metadata.credits if credit["role"] == "Writer"]) or 'Unknown',
cover=_extract_cover(tmp_file_path, original_file_extension, rar_executable), cover=cover_file,
description=loaded_metadata.comments or "", description=loaded_metadata.comments or "",
tags="", tags="",
series=loaded_metadata.series or "", series=loaded_metadata.series or "",
@@ -171,13 +174,17 @@ def get_comic_info(tmp_file_path, original_file_name, original_file_extension, r
publisher="", publisher="",
pubdate="", pubdate="",
identifiers=[]) identifiers=[])
if not no_cover_processing:
cover_file = _extract_cover(tmp_file_path, original_file_extension, rar_executable)
else:
cover_file = None
return BookMeta( return BookMeta(
file_path=tmp_file_path, file_path=tmp_file_path,
extension=original_file_extension, extension=original_file_extension,
title=original_file_name, title=original_file_name,
author='Unknown', author='Unknown',
cover=_extract_cover(tmp_file_path, original_file_extension, rar_executable), cover=cover_file,
description="", description="",
tags="", tags="",
series="", series="",

View File

@@ -35,7 +35,7 @@ except ImportError:
from . import constants, logger from . import constants, logger
from .subproc_wrapper import process_wait from .subproc_wrapper import process_wait
from .string_helper import strip_whitespaces
log = logger.create() log = logger.create()
_Base = declarative_base() _Base = declarative_base()
@@ -182,26 +182,6 @@ class _Settings(_Base):
class ConfigSQL(object): class ConfigSQL(object):
# pylint: disable=no-member # pylint: disable=no-member
def __init__(self): def __init__(self):
'''self.config_calibre_uuid = None
self.config_calibre_split_dir = None
self.dirty = None
self.config_logfile = None
self.config_upload_formats = None
self.mail_gmail_token = None
self.mail_server_type = None
self.mail_server = None
self.config_log_level = None
self.config_allowed_column_value = None
self.config_denied_column_value = None
self.config_allowed_tags = None
self.config_denied_tags = None
self.config_default_show = None
self.config_default_role = None
self.config_keyfile = None
self.config_certfile = None
self.config_rarfile_location = None
self.config_kepubifypath = None
self.config_binariesdir = None'''
self.__dict__["dirty"] = list() self.__dict__["dirty"] = list()
def init_config(self, session, secret_key, cli): def init_config(self, session, secret_key, cli):
@@ -288,19 +268,19 @@ class ConfigSQL(object):
def list_denied_tags(self): def list_denied_tags(self):
mct = self.config_denied_tags or "" mct = self.config_denied_tags or ""
return [t.strip() for t in mct.split(",")] return [strip_whitespaces(t) for t in mct.split(",")]
def list_allowed_tags(self): def list_allowed_tags(self):
mct = self.config_allowed_tags or "" mct = self.config_allowed_tags or ""
return [t.strip() for t in mct.split(",")] return [strip_whitespaces(t) for t in mct.split(",")]
def list_denied_column_values(self): def list_denied_column_values(self):
mct = self.config_denied_column_value or "" mct = self.config_denied_column_value or ""
return [t.strip() for t in mct.split(",")] return [strip_whitespaces(t) for t in mct.split(",")]
def list_allowed_column_values(self): def list_allowed_column_values(self):
mct = self.config_allowed_column_value or "" mct = self.config_allowed_column_value or ""
return [t.strip() for t in mct.split(",")] return [strip_whitespaces(t) for t in mct.split(",")]
def get_log_level(self): def get_log_level(self):
return logger.get_level_name(self.config_log_level) return logger.get_level_name(self.config_log_level)
@@ -372,7 +352,7 @@ class ConfigSQL(object):
db_file = os.path.join(self.config_calibre_dir, 'metadata.db') db_file = os.path.join(self.config_calibre_dir, 'metadata.db')
have_metadata_db = os.path.isfile(db_file) have_metadata_db = os.path.isfile(db_file)
self.db_configured = have_metadata_db self.db_configured = have_metadata_db
# constants.EXTENSIONS_UPLOAD = [x.lstrip().rstrip().lower() for x in self.config_upload_formats.split(',')]
from . import cli_param from . import cli_param
if os.environ.get('FLASK_DEBUG'): if os.environ.get('FLASK_DEBUG'):
logfile = logger.setup(logger.LOG_TO_STDOUT, logger.logging.DEBUG) logfile = logger.setup(logger.LOG_TO_STDOUT, logger.logging.DEBUG)
@@ -425,11 +405,13 @@ class ConfigSQL(object):
return self.config_calibre_split_dir if self.config_calibre_split_dir else self.config_calibre_dir return self.config_calibre_split_dir if self.config_calibre_split_dir else self.config_calibre_dir
def store_calibre_uuid(self, calibre_db, Library_table): def store_calibre_uuid(self, calibre_db, Library_table):
from . import app
try: try:
calibre_uuid = calibre_db.session.query(Library_table).one_or_none() with app.app_context():
if self.config_calibre_uuid != calibre_uuid.uuid: calibre_uuid = calibre_db.session.query(Library_table).one_or_none()
self.config_calibre_uuid = calibre_uuid.uuid if self.config_calibre_uuid != calibre_uuid.uuid:
self.save() self.config_calibre_uuid = calibre_uuid.uuid
self.save()
except AttributeError: except AttributeError:
pass pass
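The UUID lookup now runs inside an application context because the library session is created per Flask context in the reworked CalibreDB (see cps/db.py below): flask.g only exists while a context is pushed, and popping it triggers CalibreDB.teardown(), which closes the session. A condensed sketch of the pattern, with names taken from the surrounding hunks and "app" assumed to be the Flask instance created in the cps package:

with app.app_context():
    # calibre_db.session lazily opens a library connection bound to flask.g
    row = calibre_db.session.query(Library_table).one_or_none()
    calibre_uuid = row.uuid if row else None
# leaving the block pops the context and lets teardown() close g.lib_sql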
@@ -503,6 +485,8 @@ def autodetect_calibre_binaries():
"C:\\program files(x86)\\calibre\\", "C:\\program files(x86)\\calibre\\",
"C:\\program files(x86)\\calibre2\\", "C:\\program files(x86)\\calibre2\\",
"C:\\program files\\calibre2\\"] "C:\\program files\\calibre2\\"]
elif sys.platform.startswith("freebsd"):
calibre_path = ["/usr/local/bin/"]
else: else:
calibre_path = ["/opt/calibre/"] calibre_path = ["/opt/calibre/"]
for element in calibre_path: for element in calibre_path:
@@ -533,6 +517,8 @@ def autodetect_unrar_binary():
if sys.platform == "win32": if sys.platform == "win32":
calibre_path = ["C:\\program files\\WinRar\\unRAR.exe", calibre_path = ["C:\\program files\\WinRar\\unRAR.exe",
"C:\\program files(x86)\\WinRar\\unRAR.exe"] "C:\\program files(x86)\\WinRar\\unRAR.exe"]
elif sys.platform.startswith("freebsd"):
calibre_path = ["/usr/local/bin/unrar"]
else: else:
calibre_path = ["/usr/bin/unrar"] calibre_path = ["/usr/bin/unrar"]
for element in calibre_path: for element in calibre_path:
@@ -545,6 +531,8 @@ def autodetect_kepubify_binary():
if sys.platform == "win32": if sys.platform == "win32":
calibre_path = ["C:\\program files\\kepubify\\kepubify-windows-64Bit.exe", calibre_path = ["C:\\program files\\kepubify\\kepubify-windows-64Bit.exe",
"C:\\program files(x86)\\kepubify\\kepubify-windows-64Bit.exe"] "C:\\program files(x86)\\kepubify\\kepubify-windows-64Bit.exe"]
elif sys.platform.startswith("freebsd"):
calibre_path = ["/usr/local/bin/kepubify"]
else: else:
calibre_path = ["/opt/kepubify/kepubify-linux-64bit", "/opt/kepubify/kepubify-linux-32bit"] calibre_path = ["/opt/kepubify/kepubify-linux-64bit", "/opt/kepubify/kepubify-linux-32bit"]
for element in calibre_path: for element in calibre_path:

View File

@@ -19,9 +19,6 @@
import sys import sys
import os import os
from collections import namedtuple from collections import namedtuple
from sqlalchemy import __version__ as sql_version
sqlalchemy_version2 = ([int(x) for x in sql_version.split('.')] >= [2, 0, 0])
# APP_MODE - production, development, or test # APP_MODE - production, development, or test
APP_MODE = os.environ.get('APP_MODE', 'production') APP_MODE = os.environ.get('APP_MODE', 'production')
@@ -175,7 +172,7 @@ BookMeta = namedtuple('BookMeta', 'file_path, extension, title, author, cover, d
'series_id, languages, publisher, pubdate, identifiers') 'series_id, languages, publisher, pubdate, identifiers')
# python build process likes to have x.y.zbw -> b for beta and w a counting number # python build process likes to have x.y.zbw -> b for beta and w a counting number
STABLE_VERSION = {'version': '0.6.23b'} STABLE_VERSION = '0.6.25b'
NIGHTLY_VERSION = dict() NIGHTLY_VERSION = dict()
NIGHTLY_VERSION[0] = '$Format:%H$' NIGHTLY_VERSION[0] = '$Format:%H$'
@@ -193,7 +190,7 @@ THUMBNAIL_TYPE_AUTHOR = 3
COVER_THUMBNAIL_ORIGINAL = 0 COVER_THUMBNAIL_ORIGINAL = 0
COVER_THUMBNAIL_SMALL = 1 COVER_THUMBNAIL_SMALL = 1
COVER_THUMBNAIL_MEDIUM = 2 COVER_THUMBNAIL_MEDIUM = 2
COVER_THUMBNAIL_LARGE = 3 COVER_THUMBNAIL_LARGE = 4
# clean-up the module namespace # clean-up the module namespace
del sys, os, namedtuple del sys, os, namedtuple

View File

@@ -29,13 +29,14 @@ NO_JPEG_EXTENSIONS = ['.png', '.webp', '.bmp']
COVER_EXTENSIONS = ['.png', '.webp', '.bmp', '.jpg', '.jpeg'] COVER_EXTENSIONS = ['.png', '.webp', '.bmp', '.jpg', '.jpeg']
def cover_processing(tmp_file_name, img, extension): def cover_processing(tmp_file_path, img, extension):
tmp_cover_name = os.path.join(os.path.dirname(tmp_file_name), 'cover.jpg') # tmp_cover_name = os.path.join(os.path.dirname(tmp_file_name), 'cover.jpg')
tmp_cover_name = tmp_file_path + '.jpg'
if extension in NO_JPEG_EXTENSIONS: if extension in NO_JPEG_EXTENSIONS:
if use_IM: if use_IM:
with Image(blob=img) as imgc: with Image(blob=img) as imgc:
imgc.format = 'jpeg' imgc.format = 'jpeg'
imgc.transform_colorspace('rgb') imgc.transform_colorspace('srgb')
imgc.save(filename=tmp_cover_name) imgc.save(filename=tmp_cover_name)
return tmp_cover_name return tmp_cover_name
else: else:

View File

@@ -0,0 +1,22 @@
#
# Copyright 2015 Jordan Milne
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Source: https://github.com/JordanMilne/Advocate
from .adapters import ValidatingHTTPAdapter
from .api import *
from .addrvalidator import AddrValidator
from .exceptions import UnacceptableAddressException

View File

@@ -0,0 +1,48 @@
#
# Copyright 2015 Jordan Milne
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Source: https://github.com/JordanMilne/Advocate
from requests.adapters import HTTPAdapter, DEFAULT_POOLBLOCK
from .addrvalidator import AddrValidator
from .exceptions import ProxyDisabledException
from .poolmanager import ValidatingPoolManager
class ValidatingHTTPAdapter(HTTPAdapter):
__attrs__ = HTTPAdapter.__attrs__ + ['_validator']
def __init__(self, *args, **kwargs):
self._validator = kwargs.pop('validator', None)
if not self._validator:
self._validator = AddrValidator()
super().__init__(*args, **kwargs)
def init_poolmanager(self, connections, maxsize, block=DEFAULT_POOLBLOCK,
**pool_kwargs):
self._pool_connections = connections
self._pool_maxsize = maxsize
self._pool_block = block
self.poolmanager = ValidatingPoolManager(
num_pools=connections,
maxsize=maxsize,
block=block,
validator=self._validator,
**pool_kwargs
)
def proxy_manager_for(self, proxy, **proxy_kwargs):
raise ProxyDisabledException("Proxies cannot be used with Advocate")

View File

@@ -0,0 +1,281 @@
#
# Copyright 2015 Jordan Milne
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Source: https://github.com/JordanMilne/Advocate
import functools
import fnmatch
import ipaddress
import re
try:
import netifaces
HAVE_NETIFACES = True
except ImportError:
netifaces = None
HAVE_NETIFACES = False
from .exceptions import NameserverException, ConfigException
def canonicalize_hostname(hostname):
"""Lowercase and punycodify a hostname"""
# We do the lowercasing after IDNA encoding because we only want to
# lowercase the *ASCII* chars.
# TODO: The differences between IDNA2003 and IDNA2008 might be relevant
# to us, but both specs are damn confusing.
return str(hostname.encode("idna").lower(), 'utf-8')
def determine_local_addresses():
"""Get all IPs that refer to this machine according to netifaces"""
if not HAVE_NETIFACES:
raise ConfigException("Tried to determine local addresses, "
"but netifaces module was not importable")
ips = []
for interface in netifaces.interfaces():
if_families = netifaces.ifaddresses(interface)
for family_kind in {netifaces.AF_INET, netifaces.AF_INET6}:
addrs = if_families.get(family_kind, [])
for addr in (x.get("addr", "") for x in addrs):
if family_kind == netifaces.AF_INET6:
# We can't do anything sensible with the scope here
addr = addr.split("%")[0]
ips.append(ipaddress.ip_network(addr))
return ips
def add_local_address_arg(func):
"""Add the "_local_addresses" kwarg if it's missing
IMO this information shouldn't be cached between calls (what if one of the
adapters got a new IP at runtime?) and we don't want each function to
recalculate it. Just recalculate it if the caller didn't provide it for us.
"""
@functools.wraps(func)
def wrapper(self, *args, **kwargs):
if "_local_addresses" not in kwargs:
if self.autodetect_local_addresses:
kwargs["_local_addresses"] = determine_local_addresses()
else:
kwargs["_local_addresses"] = []
return func(self, *args, **kwargs)
return wrapper
class AddrValidator:
_6TO4_RELAY_NET = ipaddress.ip_network("192.88.99.0/24")
# Just the well known prefix, DNS64 servers can set their own
# prefix, but in practice most probably don't.
_DNS64_WK_PREFIX = ipaddress.ip_network("64:ff9b::/96")
DEFAULT_PORT_WHITELIST = {80, 8080, 443, 8443, 8000}
def __init__(
self,
ip_blacklist=None,
ip_whitelist=None,
port_whitelist=None,
port_blacklist=None,
hostname_blacklist=None,
allow_ipv6=False,
allow_teredo=False,
allow_6to4=False,
allow_dns64=False,
# Must be explicitly set to "False" if you don't want to try
# detecting local interface addresses with netifaces.
autodetect_local_addresses=True,
):
if not port_blacklist and not port_whitelist:
# An assortment of common HTTPS? ports.
port_whitelist = self.DEFAULT_PORT_WHITELIST.copy()
self.ip_blacklist = ip_blacklist or set()
self.ip_whitelist = ip_whitelist or set()
self.port_blacklist = port_blacklist or set()
self.port_whitelist = port_whitelist or set()
# TODO: ATM this can contain either regexes or globs that are converted
# to regexes upon every check. Create a collection that automagically
# converts them to regexes on insert?
self.hostname_blacklist = hostname_blacklist or set()
self.allow_ipv6 = allow_ipv6
self.allow_teredo = allow_teredo
self.allow_6to4 = allow_6to4
self.allow_dns64 = allow_dns64
self.autodetect_local_addresses = autodetect_local_addresses
@add_local_address_arg
def is_ip_allowed(self, addr_ip, _local_addresses=None):
if not isinstance(addr_ip,
(ipaddress.IPv4Address, ipaddress.IPv6Address)):
addr_ip = ipaddress.ip_address(addr_ip)
# The whitelist should take precedence over the blacklist so we can
# punch holes in blacklisted ranges
if any(addr_ip in net for net in self.ip_whitelist):
return True
if any(addr_ip in net for net in self.ip_blacklist):
return False
if any(addr_ip in net for net in _local_addresses):
return False
if addr_ip.version == 4:
if not addr_ip.is_private:
# IPs for carrier-grade NAT. Seems weird that it doesn't set
# `is_private`, but we need to check `not is_global`
if not ipaddress.ip_network(addr_ip).is_global:
return False
elif addr_ip.version == 6:
# You'd better have a good reason for enabling IPv6
# because Advocate's techniques don't work well without NAT.
if not self.allow_ipv6:
return False
# v6 addresses can also map to IPv4 addresses! Tricky!
v4_nested = []
if addr_ip.ipv4_mapped:
v4_nested.append(addr_ip.ipv4_mapped)
# WTF IPv6? Why you gotta have a billion tunneling mechanisms?
# XXX: Do we even really care about these? If we're tunneling
# through public servers we shouldn't be able to access
# addresses on our private network, right?
if addr_ip.sixtofour:
if not self.allow_6to4:
return False
v4_nested.append(addr_ip.sixtofour)
if addr_ip.teredo:
if not self.allow_teredo:
return False
# Check both the client *and* server IPs
v4_nested.extend(addr_ip.teredo)
if addr_ip in self._DNS64_WK_PREFIX:
if not self.allow_dns64:
return False
# When using the well-known prefix the last 4 bytes
# are the IPv4 addr
v4_nested.append(ipaddress.ip_address(addr_ip.packed[-4:]))
if not all(self.is_ip_allowed(addr_v4) for addr_v4 in v4_nested):
return False
# fec0::*, apparently deprecated?
if addr_ip.is_site_local:
return False
else:
raise ValueError("Unsupported IP version(?): %r" % addr_ip)
# 169.254.XXX.XXX, AWS uses these for autoconfiguration
if addr_ip.is_link_local:
return False
# 127.0.0.1, ::1, etc.
if addr_ip.is_loopback:
return False
if addr_ip.is_multicast:
return False
# 192.168.XXX.XXX, 10.XXX.XXX.XXX
if addr_ip.is_private:
return False
# 255.255.255.255, ::ffff:XXXX:XXXX (v6->v4) mapping
if addr_ip.is_reserved:
return False
# There's no reason to connect directly to a 6to4 relay
if addr_ip in self._6TO4_RELAY_NET:
return False
# 0.0.0.0
if addr_ip.is_unspecified:
return False
# It doesn't look bad, so... it must be ok!
return True
def _hostname_matches_pattern(self, hostname, pattern):
# If they specified a string, just assume they only want basic globbing.
# This stops people from not realizing they're dealing in REs and
# not escaping their periods unless they specifically pass in an RE.
# This has the added benefit of letting us sanely handle globbed
# IDNs by default.
if isinstance(pattern, str):
# convert the glob to a punycode glob, then a regex
pattern = fnmatch.translate(canonicalize_hostname(pattern))
hostname = canonicalize_hostname(hostname)
# Down the line the hostname may get treated as a null-terminated string
# (as with `socket.getaddrinfo`.) Try to account for that.
#
# >>> socket.getaddrinfo("example.com\x00aaaa", 80)
# [(2, 1, 6, '', ('93.184.216.34', 80)), [...]
no_null_hostname = hostname.split("\x00")[0]
return any(re.match(pattern, x.strip(".")) for x
in (no_null_hostname, hostname))
def is_hostname_allowed(self, hostname):
# Sometimes (like with "external" services that your IP has privileged
# access to) you might not always know the IP range to blacklist access
# to, or the `A` record might change without you noticing.
# For e.x.: `foocorp.external.org`.
#
# Another option is doing something like:
#
# for addrinfo in socket.getaddrinfo("foocorp.external.org", 80):
# global_validator.ip_blacklist.add(ip_address(addrinfo[4][0]))
#
# but that's not always a good idea if they're behind a third-party lb.
for pattern in self.hostname_blacklist:
if self._hostname_matches_pattern(hostname, pattern):
return False
return True
@add_local_address_arg
def is_addrinfo_allowed(self, addrinfo, _local_addresses=None):
assert(len(addrinfo) == 5)
# XXX: Do we care about any of the other elements? Guessing not.
family, socktype, proto, canonname, sockaddr = addrinfo
# The 4th elem in addrinfo may either be a tuple of two or four items,
# depending on whether we're dealing with IPv4 or v6
if len(sockaddr) == 2:
# v4
ip, port = sockaddr
elif len(sockaddr) == 4:
# v6
# XXX: what *are* `flow_info` and `scope_id`? Anything useful?
# Seems like we can figure out all we need about the scope from
# the `is_<x>` properties.
ip, port, flow_info, scope_id = sockaddr
else:
raise ValueError("Unexpected addrinfo format %r" % sockaddr)
# Probably won't help protect against SSRF, but might prevent our being
# used to attack others' non-HTTP services. See
# http://www.remote.org/jochen/sec/hfpa/
if self.port_whitelist and port not in self.port_whitelist:
return False
if port in self.port_blacklist:
return False
if self.hostname_blacklist:
if not canonname:
raise NameserverException(
"addrinfo must contain the canon name to do blacklisting "
"based on hostname. Make sure you use the "
"`socket.AI_CANONNAME` flag, and that each record contains "
"the canon name. Your DNS server might also be garbage."
)
if not self.is_hostname_allowed(canonname):
return False
return self.is_ip_allowed(ip, _local_addresses=_local_addresses)
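A short, self-contained sketch of using the validator directly; the blacklist entries are hypothetical, and autodetect_local_addresses is disabled so the example does not depend on the optional netifaces module:

import ipaddress
from cps.cw_advocate import AddrValidator

validator = AddrValidator(
    ip_blacklist={ipaddress.ip_network("203.0.113.0/24")},   # hypothetical range
    hostname_blacklist={"*.internal.example.org"},
    allow_ipv6=False,
    autodetect_local_addresses=False,
)

print(validator.is_ip_allowed("10.0.0.5"))        # False: private address
print(validator.is_ip_allowed("93.184.216.34"))   # True: public and not blacklisted
print(validator.is_hostname_allowed("db.internal.example.org"))  # False: matches the glob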

202
cps/cw_advocate/api.py Normal file
View File

@@ -0,0 +1,202 @@
#
# Copyright 2015 Jordan Milne
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Source: https://github.com/JordanMilne/Advocate
"""
advocate.api
~~~~~~~~~~~~
This module implements the Requests API, largely a copy/paste from `requests`
itself.
:copyright: (c) 2015 by Jordan Milne.
:license: Apache2, see LICENSE for more details.
"""
from collections import OrderedDict
import hashlib
import pickle
from requests import Session as RequestsSession
# import cw_advocate
from .adapters import ValidatingHTTPAdapter
from .exceptions import MountDisabledException
class Session(RequestsSession):
"""Convenience wrapper around `requests.Session` set up for `advocate`ing"""
__attrs__ = RequestsSession.__attrs__ + ["validator"]
DEFAULT_VALIDATOR = None
"""
User-replaceable default validator to use for all Advocate sessions,
includes sessions created by advocate.get()
"""
def __init__(self, *args, **kwargs):
self.validator = kwargs.pop("validator", None) or self.DEFAULT_VALIDATOR
adapter_kwargs = kwargs.pop("_adapter_kwargs", {})
# `Session.__init__()` calls `mount()` internally, so we need to allow
# it temporarily
self.__mount_allowed = True
RequestsSession.__init__(self, *args, **kwargs)
# Drop any existing adapters
self.adapters = OrderedDict()
self.mount("http://", ValidatingHTTPAdapter(validator=self.validator, **adapter_kwargs))
self.mount("https://", ValidatingHTTPAdapter(validator=self.validator, **adapter_kwargs))
self.__mount_allowed = False
def mount(self, *args, **kwargs):
"""Wrapper around `mount()` to prevent a protection bypass"""
if self.__mount_allowed:
super().mount(*args, **kwargs)
else:
raise MountDisabledException(
"mount() is disabled to prevent protection bypasses"
)
def session(*args, **kwargs):
return Session(*args, **kwargs)
def request(method, url, **kwargs):
"""Constructs and sends a :class:`Request <Request>`.
:param method: method for the new :class:`Request` object.
:param url: URL for the new :class:`Request` object.
:param params: (optional) Dictionary or bytes to be sent in the query string for the :class:`Request`.
:param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
:param json: (optional) json data to send in the body of the :class:`Request`.
:param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`.
:param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`.
:param files: (optional) Dictionary of ``'name': file-like-objects`` (or ``{'name': ('filename', fileobj)}``) for multipart encoding upload.
:param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
:param timeout: (optional) How long to wait for the server to send data
before giving up, as a float, or a (`connect timeout, read timeout
<user/advanced.html#timeouts>`_) tuple.
:type timeout: float or tuple
:param allow_redirects: (optional) Boolean. Set to True if POST/PUT/DELETE redirect following is allowed.
:type allow_redirects: bool
:param proxies: (optional) Dictionary mapping protocol to the URL of the proxy.
:param verify: (optional) if ``True``, the SSL cert will be verified. A CA_BUNDLE path can also be provided.
:param stream: (optional) if ``False``, the response content will be immediately downloaded.
:param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
validator = kwargs.pop("validator", None)
with Session(validator=validator) as sess:
response = sess.request(method=method, url=url, **kwargs)
return response
def get(url, **kwargs):
"""Sends a GET request.
:param url: URL for the new :class:`Request` object.
:param **kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
"""
kwargs.setdefault('allow_redirects', True)
return request('get', url, **kwargs)
class RequestsAPIWrapper:
"""Provides a `requests.api`-like interface with a specific validator"""
# Due to how the classes are dynamically constructed pickling may not work
# correctly unless loaded within the same interpreter instance.
# Enable at your peril.
SUPPORT_WRAPPER_PICKLING = False
def __init__(self, validator):
# Do this here to avoid circular import issues
try:
from .futures import FuturesSession
have_requests_futures = True
except ImportError as e:
have_requests_futures = False
self.validator = validator
outer_self = self
class _WrappedSession(Session):
"""An `advocate.Session` that uses the wrapper's blacklist
the wrapper is meant to be a transparent replacement for `requests`,
so people should be able to subclass `wrapper.Session` and still
get the desired validation behaviour
"""
DEFAULT_VALIDATOR = outer_self.validator
self._make_wrapper_cls_global(_WrappedSession)
if have_requests_futures:
class _WrappedFuturesSession(FuturesSession):
"""Like _WrappedSession, but for `FuturesSession`s"""
DEFAULT_VALIDATOR = outer_self.validator
self._make_wrapper_cls_global(_WrappedFuturesSession)
self.FuturesSession = _WrappedFuturesSession
self.request = self._default_arg_wrapper(request)
self.get = self._default_arg_wrapper(get)
self.Session = _WrappedSession
def __getattr__(self, item):
# This class is meant to mimic the requests base module, so if we don't
# have this attribute, it might be on the base module (like the Request
# class, etc.)
try:
return object.__getattribute__(self, item)
except AttributeError:
from . import cw_advocate
return getattr(cw_advocate, item)
def _default_arg_wrapper(self, fun):
def wrapped_func(*args, **kwargs):
kwargs.setdefault("validator", self.validator)
return fun(*args, **kwargs)
return wrapped_func
def _make_wrapper_cls_global(self, cls):
if not self.SUPPORT_WRAPPER_PICKLING:
return
# Gnarly, but necessary to give pickle a consistent module-level
# reference for each wrapper.
wrapper_hash = hashlib.sha256(pickle.dumps(self)).hexdigest()
cls.__name__ = "_".join((cls.__name__, wrapper_hash))
cls.__qualname__ = ".".join((__name__, cls.__name__))
if not globals().get(cls.__name__):
globals()[cls.__name__] = cls
__all__ = (
"get",
"request",
"session",
"Session",
"RequestsAPIWrapper",
)
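Taken together with the package __init__, this is meant to be used as a drop-in for requests that refuses to connect to private, loopback or otherwise disallowed addresses. A hedged usage sketch; the URL is a placeholder, and the default validator's local-address detection assumes the optional netifaces dependency is available:

from cps.cw_advocate import get, UnacceptableAddressException

try:
    resp = get("https://example.org/cover.jpg", timeout=10)   # placeholder URL
    print(resp.status_code)
except UnacceptableAddressException as exc:
    print("refused to fetch:", exc)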

View File

@@ -0,0 +1,201 @@
#
# Copyright 2015 Jordan Milne
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Source: https://github.com/JordanMilne/Advocate
import ipaddress
import socket
from socket import timeout as SocketTimeout
from urllib3.connection import HTTPSConnection, HTTPConnection
from urllib3.exceptions import ConnectTimeoutError
from urllib3.util.connection import _set_socket_options
from urllib3.util.connection import create_connection as old_create_connection
from . import addrvalidator
from .exceptions import UnacceptableAddressException
def advocate_getaddrinfo(host, port, get_canonname=False):
addrinfo = socket.getaddrinfo(
host,
port,
0,
socket.SOCK_STREAM,
0,
# We need the hostname as the DNS client sees it; this correctly handles
# IDNs and tricky things like `private.foocorp.org\x00.google.com`.
# All IDNs will be converted to punycode.
socket.AI_CANONNAME if get_canonname else 0,
)
return fix_addrinfo(addrinfo)
def fix_addrinfo(records):
"""
Propagate the canonname across records and parse IPs
I'm not sure if this is just the behaviour of `getaddrinfo` on Linux, but
it seems like only the first record in the set has the canonname field
populated.
"""
def fix_record(record, canonname):
sa = record[4]
sa = (ipaddress.ip_address(sa[0]),) + sa[1:]
return record[0], record[1], record[2], canonname, sa
canonname = None
if records:
# Apparently the canonical name is only included in the first record?
# Add it to all of them.
assert(len(records[0]) == 5)
canonname = records[0][3]
return tuple(fix_record(x, canonname) for x in records)
# Lifted from requests' urllib3, which in turn lifted it from `socket.py`. Oy!
def validating_create_connection(address,
timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None,
validator=None):
"""Connect to *address* and return the socket object.
Convenience function. Connect to *address* (a 2-tuple ``(host,
port)``) and return the socket object. Passing the optional
*timeout* parameter will set the timeout on the socket instance
before attempting to connect. If no *timeout* is supplied, the
global default timeout setting returned by :func:`getdefaulttimeout`
is used. If *source_address* is set it must be a tuple of (host, port)
for the socket to bind as a source address before making the connection.
A host of '' or port 0 tells the OS to use the default.
"""
host, port = address
# We can skip asking for the canon name if we're not doing hostname-based
# blacklisting.
need_canonname = False
if validator.hostname_blacklist:
need_canonname = True
# We check both the non-canonical and canonical hostnames so we can
# catch both of these:
# CNAME from nonblacklisted.com -> blacklisted.com
# CNAME from blacklisted.com -> nonblacklisted.com
if not validator.is_hostname_allowed(host):
raise UnacceptableAddressException(host)
err = None
addrinfo = advocate_getaddrinfo(host, port, get_canonname=need_canonname)
if addrinfo:
if validator.autodetect_local_addresses:
local_addresses = addrvalidator.determine_local_addresses()
else:
local_addresses = []
for res in addrinfo:
# Are we allowed to connect with this result?
if not validator.is_addrinfo_allowed(
res,
_local_addresses=local_addresses,
):
continue
af, socktype, proto, canonname, sa = res
# Unparse the validated IP
sa = (sa[0].exploded,) + sa[1:]
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
# This is the only addition urllib3 makes to this function.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
sock.settimeout(timeout)
if source_address:
sock.bind(source_address)
sock.connect(sa)
return sock
except socket.error as _:
err = _
if sock is not None:
sock.close()
sock = None
if err is None:
# If we got here, none of the results were acceptable
err = UnacceptableAddressException(address)
if err is not None:
raise err
else:
raise socket.error("getaddrinfo returns an empty list")
# TODO: Is there a better way to add this to multiple classes with different
# base classes? I tried a mixin, but it used the base method instead.
def _validating_new_conn(self):
""" Establish a socket connection and set nodelay settings on it.
:return: New socket connection.
"""
extra_kw = {}
if self.source_address:
extra_kw['source_address'] = self.source_address
if self.socket_options:
extra_kw['socket_options'] = self.socket_options
try:
# Hack around HTTPretty's patched sockets
# TODO: some better method of hacking around it that checks if we
# _would have_ connected to a private addr?
conn_func = validating_create_connection
if socket.getaddrinfo.__module__.startswith("httpretty"):
conn_func = old_create_connection
else:
extra_kw["validator"] = self._validator
conn = conn_func(
(self.host, self.port),
self.timeout,
**extra_kw
)
except SocketTimeout:
raise ConnectTimeoutError(
self, "Connection to %s timed out. (connect timeout=%s)" %
(self.host, self.timeout))
return conn
# Don't silently break if the private API changes across urllib3 versions
assert(hasattr(HTTPConnection, '_new_conn'))
assert(hasattr(HTTPSConnection, '_new_conn'))
class ValidatingHTTPConnection(HTTPConnection):
_new_conn = _validating_new_conn
def __init__(self, *args, **kwargs):
self._validator = kwargs.pop("validator")
HTTPConnection.__init__(self, *args, **kwargs)
class ValidatingHTTPSConnection(HTTPSConnection):
_new_conn = _validating_new_conn
def __init__(self, *args, **kwargs):
self._validator = kwargs.pop("validator")
HTTPSConnection.__init__(self, *args, **kwargs)

View File

@@ -0,0 +1,39 @@
#
# Copyright 2015 Jordan Milne
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Source: https://github.com/JordanMilne/Advocate
from urllib3 import HTTPConnectionPool, HTTPSConnectionPool
from .connection import (
ValidatingHTTPConnection,
ValidatingHTTPSConnection,
)
# Don't silently break if the private API changes across urllib3 versions
assert(hasattr(HTTPConnectionPool, 'ConnectionCls'))
assert(hasattr(HTTPSConnectionPool, 'ConnectionCls'))
assert(hasattr(HTTPConnectionPool, 'scheme'))
assert(hasattr(HTTPSConnectionPool, 'scheme'))
class ValidatingHTTPConnectionPool(HTTPConnectionPool):
scheme = 'http'
ConnectionCls = ValidatingHTTPConnection
class ValidatingHTTPSConnectionPool(HTTPSConnectionPool):
scheme = 'https'
ConnectionCls = ValidatingHTTPSConnection

View File

@@ -0,0 +1,39 @@
#
# Copyright 2015 Jordan Milne
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Source: https://github.com/JordanMilne/Advocate
class AdvocateException(Exception):
pass
class UnacceptableAddressException(AdvocateException):
pass
class NameserverException(AdvocateException):
pass
class MountDisabledException(AdvocateException):
pass
class ProxyDisabledException(NotImplementedError, AdvocateException):
pass
class ConfigException(AdvocateException):
pass

View File

@@ -0,0 +1,61 @@
#
# Copyright 2015 Jordan Milne
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Source: https://github.com/JordanMilne/Advocate
import collections
import functools
from urllib3 import PoolManager
from urllib3.poolmanager import _default_key_normalizer, PoolKey
from .connectionpool import (
ValidatingHTTPSConnectionPool,
ValidatingHTTPConnectionPool,
)
pool_classes_by_scheme = {
"http": ValidatingHTTPConnectionPool,
"https": ValidatingHTTPSConnectionPool,
}
AdvocatePoolKey = collections.namedtuple('AdvocatePoolKey',
PoolKey._fields + ('key_validator',))
def key_normalizer(key_class, request_context):
request_context = request_context.copy()
# TODO: add ability to serialize validator rules to dict,
# allowing pool to be shared between sessions with the same
# rules.
request_context["validator"] = id(request_context["validator"])
return _default_key_normalizer(key_class, request_context)
key_fn_by_scheme = {
'http': functools.partial(key_normalizer, AdvocatePoolKey),
'https': functools.partial(key_normalizer, AdvocatePoolKey),
}
class ValidatingPoolManager(PoolManager):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# Make sure the API hasn't changed
assert (hasattr(self, 'pool_classes_by_scheme'))
self.pool_classes_by_scheme = pool_classes_by_scheme
self.key_fn_by_scheme = key_fn_by_scheme.copy()

View File

@@ -34,7 +34,7 @@ def get_user_locale_language(user_language):
def get_available_locale(): def get_available_locale():
return [Locale('en')] + babel.list_translations() return sorted(babel.list_translations(), key=lambda x: x.display_name.lower())
def get_available_translations(): def get_available_translations():

View File

@@ -1,4 +1,5 @@
from datetime import datetime from datetime import datetime
from datetime import timezone
from datetime import timedelta from datetime import timedelta
import hashlib import hashlib
@@ -496,7 +497,7 @@ class LoginManager:
duration = timedelta(seconds=duration) duration = timedelta(seconds=duration)
try: try:
expires = datetime.utcnow() + duration expires = datetime.now(timezone.utc) + duration
except TypeError as e: except TypeError as e:
raise Exception( raise Exception(
"REMEMBER_COOKIE_DURATION must be a datetime.timedelta," "REMEMBER_COOKIE_DURATION must be a datetime.timedelta,"

184
cps/db.py
View File

@@ -20,10 +20,11 @@
import os import os
import re import re
import json import json
from datetime import datetime from datetime import datetime, timezone
from urllib.parse import quote from urllib.parse import quote
import unidecode import unidecode
from weakref import WeakSet # from weakref import WeakSet
from uuid import uuid4
from sqlite3 import OperationalError as sqliteOperationalError from sqlite3 import OperationalError as sqliteOperationalError
from sqlalchemy import create_engine from sqlalchemy import create_engine
@@ -44,11 +45,11 @@ from sqlalchemy.ext.associationproxy import association_proxy
from .cw_login import current_user from .cw_login import current_user
from flask_babel import gettext as _ from flask_babel import gettext as _
from flask_babel import get_locale from flask_babel import get_locale
from flask import flash from flask import flash, g, Flask
from . import logger, ub, isoLanguages from . import logger, ub, isoLanguages
from .pagination import Pagination from .pagination import Pagination
from .string_helper import strip_whitespaces
log = logger.create() log = logger.create()
@@ -101,6 +102,16 @@ class Identifiers(Base):
type = Column(String(collation='NOCASE'), nullable=False, default="isbn") type = Column(String(collation='NOCASE'), nullable=False, default="isbn")
val = Column(String(collation='NOCASE'), nullable=False) val = Column(String(collation='NOCASE'), nullable=False)
book = Column(Integer, ForeignKey('books.id'), nullable=False) book = Column(Integer, ForeignKey('books.id'), nullable=False)
amazon = {
"jp": "co.jp",
"uk": "co.uk",
"us": "com",
"au": "com.au",
"be": "com.be",
"br": "com.br",
"tr": "com.tr",
"mx": "com.mx",
}
def __init__(self, val, id_type, book): def __init__(self, val, id_type, book):
super().__init__() super().__init__()
@@ -113,7 +124,11 @@ class Identifiers(Base):
if format_type == 'amazon': if format_type == 'amazon':
return "Amazon" return "Amazon"
elif format_type.startswith("amazon_"): elif format_type.startswith("amazon_"):
return "Amazon.{0}".format(format_type[7:]) label_amazon = "Amazon.{0}"
country_code = format_type[7:].lower()
if country_code not in self.amazon:
return label_amazon.format(country_code)
return label_amazon.format(self.amazon[country_code])
elif format_type == "isbn": elif format_type == "isbn":
return "ISBN" return "ISBN"
elif format_type == "doi": elif format_type == "doi":
@@ -136,6 +151,8 @@ class Identifiers(Base):
return "ISSN" return "ISSN"
elif format_type == "isfdb": elif format_type == "isfdb":
return "ISFDB" return "ISFDB"
elif format_type == "storygraph":
return "StoryGraph"
if format_type == "lubimyczytac": if format_type == "lubimyczytac":
return "Lubimyczytac" return "Lubimyczytac"
if format_type == "databazeknih": if format_type == "databazeknih":
@@ -148,7 +165,11 @@ class Identifiers(Base):
if format_type == "amazon" or format_type == "asin": if format_type == "amazon" or format_type == "asin":
return "https://amazon.com/dp/{0}".format(self.val) return "https://amazon.com/dp/{0}".format(self.val)
elif format_type.startswith('amazon_'): elif format_type.startswith('amazon_'):
return "https://amazon.{0}/dp/{1}".format(format_type[7:], self.val) link_amazon = "https://amazon.{0}/dp/{1}"
country_code = format_type[7:].lower()
if country_code not in self.amazon:
return link_amazon.format(country_code, self.val)
return link_amazon.format(self.amazon[country_code], self.val)
elif format_type == "isbn": elif format_type == "isbn":
return "https://www.worldcat.org/isbn/{0}".format(self.val) return "https://www.worldcat.org/isbn/{0}".format(self.val)
elif format_type == "doi": elif format_type == "doi":
@@ -172,9 +193,11 @@ class Identifiers(Base):
elif format_type == "issn": elif format_type == "issn":
return "https://portal.issn.org/resource/ISSN/{0}".format(self.val) return "https://portal.issn.org/resource/ISSN/{0}".format(self.val)
elif format_type == "isfdb": elif format_type == "isfdb":
return "http://www.isfdb.org/cgi-bin/pl.cgi?{0}".format(self.val) return "https://www.isfdb.org/cgi-bin/pl.cgi?{0}".format(self.val)
elif format_type == "databazeknih": elif format_type == "databazeknih":
return "https://www.databazeknih.cz/knihy/{0}".format(self.val) return "https://www.databazeknih.cz/knihy/{0}".format(self.val)
elif format_type == "storygraph":
return "https://app.thestorygraph.com/books/{0}".format(self.val)
elif self.val.lower().startswith("javascript:"): elif self.val.lower().startswith("javascript:"):
return quote(self.val) return quote(self.val)
elif self.val.lower().startswith("data:"): elif self.val.lower().startswith("data:"):
@@ -378,10 +401,10 @@ class Books(Base):
title = Column(String(collation='NOCASE'), nullable=False, default='Unknown') title = Column(String(collation='NOCASE'), nullable=False, default='Unknown')
sort = Column(String(collation='NOCASE')) sort = Column(String(collation='NOCASE'))
author_sort = Column(String(collation='NOCASE')) author_sort = Column(String(collation='NOCASE'))
timestamp = Column(TIMESTAMP, default=datetime.utcnow) timestamp = Column(TIMESTAMP, default=lambda: datetime.now(timezone.utc))
pubdate = Column(TIMESTAMP, default=DEFAULT_PUBDATE) pubdate = Column(TIMESTAMP, default=DEFAULT_PUBDATE)
series_index = Column(String, nullable=False, default="1.0") series_index = Column(String, nullable=False, default="1.0")
last_modified = Column(TIMESTAMP, default=datetime.utcnow) last_modified = Column(TIMESTAMP, default=lambda: datetime.now(timezone.utc))
path = Column(String, default="", nullable=False) path = Column(String, default="", nullable=False)
has_cover = Column(Integer, default=0) has_cover = Column(Integer, default=0)
uuid = Column(String) uuid = Column(String)
@@ -509,34 +532,25 @@ class AlchemyEncoder(json.JSONEncoder):
class CalibreDB: class CalibreDB:
_init = False
engine = None
config = None config = None
session_factory = None config_calibre_dir = None
# This is a WeakSet so that references here don't keep other CalibreDB app_db_path = None
# instances alive once they reach the end of their respective scopes
instances = WeakSet()
def __init__(self, expire_on_commit=True, init=False): def __init__(self, _app: Flask=None): # , expire_on_commit=True, init=False):
""" Initialize a new CalibreDB session """ Initialize a new CalibreDB session
""" """
self.session = None self.Session = None
if init: #if init:
self.init_db(expire_on_commit) # self.init_db(expire_on_commit)
if _app is not None and not _app._got_first_request:
self.init_app(_app)
def init_db(self, expire_on_commit=True): def init_app(self, _app):
if self._init: _app.teardown_appcontext(self.teardown)
self.init_session(expire_on_commit)
self.instances.add(self)
def init_session(self, expire_on_commit=True):
self.session = self.session_factory()
self.session.expire_on_commit = expire_on_commit
self.update_title_sort(self.config)
@classmethod @classmethod
def setup_db_cc_classes(cls, cc): def setup_db_cc_classes(cls, cc):
global cc_classes
cc_ids = [] cc_ids = []
books_custom_column_links = {} books_custom_column_links = {}
for row in cc: for row in cc:
@@ -604,8 +618,6 @@ class CalibreDB:
secondary=books_custom_column_links[cc_id[0]], secondary=books_custom_column_links[cc_id[0]],
backref='books')) backref='books'))
return cc_classes
@classmethod @classmethod
def check_valid_db(cls, config_calibre_dir, app_db_path, config_calibre_uuid): def check_valid_db(cls, config_calibre_dir, app_db_path, config_calibre_uuid):
if not config_calibre_dir: if not config_calibre_dir:
@@ -625,7 +637,6 @@ class CalibreDB:
local_session = scoped_session(sessionmaker()) local_session = scoped_session(sessionmaker())
local_session.configure(bind=connection) local_session.configure(bind=connection)
database_uuid = local_session().query(Library_Id).one_or_none() database_uuid = local_session().query(Library_Id).one_or_none()
# local_session.dispose()
check_engine.connect() check_engine.connect()
db_change = config_calibre_uuid != database_uuid.uuid db_change = config_calibre_uuid != database_uuid.uuid
@@ -633,13 +644,30 @@ class CalibreDB:
return False, False return False, False
return True, db_change return True, db_change
def teardown(self, exception):
ctx = g.get("lib_sql")
if ctx:
ctx.close()
@property
def session(self):
# connect or get active connection
if not g.get("lib_sql"):
g.lib_sql = self.connect()
return g.lib_sql
@classmethod @classmethod
def update_config(cls, config): def update_config(cls, config, config_calibre_dir, app_db_path):
cls.config = config cls.config = config
cls.config_calibre_dir = config_calibre_dir
cls.app_db_path = app_db_path
def connect(self):
return self.setup_db(self.config_calibre_dir, self.app_db_path)
@classmethod @classmethod
def setup_db(cls, config_calibre_dir, app_db_path): def setup_db(cls, config_calibre_dir, app_db_path):
cls.dispose()
if not config_calibre_dir: if not config_calibre_dir:
cls.config.invalidate() cls.config.invalidate()
@@ -651,16 +679,17 @@ class CalibreDB:
return None return None
try: try:
cls.engine = create_engine('sqlite://', engine = create_engine('sqlite://',
echo=False, echo=False,
isolation_level="SERIALIZABLE", isolation_level="SERIALIZABLE",
connect_args={'check_same_thread': False}, connect_args={'check_same_thread': False},
poolclass=StaticPool) poolclass=StaticPool)
with cls.engine.begin() as connection: with engine.begin() as connection:
connection.execute(text('PRAGMA cache_size = 10000;'))
connection.execute(text("attach database '{}' as calibre;".format(dbpath))) connection.execute(text("attach database '{}' as calibre;".format(dbpath)))
connection.execute(text("attach database '{}' as app_settings;".format(app_db_path))) connection.execute(text("attach database '{}' as app_settings;".format(app_db_path)))
conn = cls.engine.connect() conn = engine.connect()
# conn.text_factory = lambda b: b.decode(errors = 'ignore') possible fix for #1302 # conn.text_factory = lambda b: b.decode(errors = 'ignore') possible fix for #1302
except Exception as ex: except Exception as ex:
cls.config.invalidate(ex) cls.config.invalidate(ex)
@@ -676,13 +705,10 @@ class CalibreDB:
log.error_or_exception(e) log.error_or_exception(e)
return None return None
cls.session_factory = scoped_session(sessionmaker(autocommit=False, return scoped_session(sessionmaker(autocommit=False,
autoflush=True, autoflush=False,
bind=cls.engine, future=True)) bind=engine, future=True))
for inst in cls.instances:
inst.init_session()
cls._init = True
def get_book(self, book_id): def get_book(self, book_id):
return self.session.query(Books).filter(Books.id == book_id).first() return self.session.query(Books).filter(Books.id == book_id).first()
@@ -875,10 +901,12 @@ class CalibreDB:
authors_ordered = list() authors_ordered = list()
# error = False # error = False
for auth in sort_authors: for auth in sort_authors:
results = self.session.query(Authors).filter(Authors.sort == auth.lstrip().strip()).all() auth = strip_whitespaces(auth)
results = self.session.query(Authors).filter(Authors.sort == auth).all()
# ToDo: How to handle not found author name # ToDo: How to handle not found author name
if not len(results): if not len(results):
log.error("Author {} not found to display name in right order".format(auth.strip())) book_id = entry.id if isinstance(entry, Books) else entry[0].id
log.error("Author '{}' of book {} not found to display name in right order".format(auth, book_id))
# error = True # error = True
break break
for r in results: for r in results:
@@ -900,7 +928,8 @@ class CalibreDB:
def get_typeahead(self, database, query, replace=('', ''), tag_filter=true()): def get_typeahead(self, database, query, replace=('', ''), tag_filter=true()):
query = query or '' query = query or ''
self.session.connection().connection.connection.create_function("lower", 1, lcase) self.create_functions()
# self.session.connection().connection.connection.create_function("lower", 1, lcase)
entries = self.session.query(database).filter(tag_filter). \ entries = self.session.query(database).filter(tag_filter). \
filter(func.lower(database.name).ilike("%" + query + "%")).all() filter(func.lower(database.name).ilike("%" + query + "%")).all()
# json_dumps = json.dumps([dict(name=escape(r.name.replace(*replace))) for r in entries]) # json_dumps = json.dumps([dict(name=escape(r.name.replace(*replace))) for r in entries])
@@ -908,7 +937,8 @@ class CalibreDB:
return json_dumps return json_dumps
def check_exists_book(self, authr, title): def check_exists_book(self, authr, title):
self.session.connection().connection.connection.create_function("lower", 1, lcase) self.create_functions()
# self.session.connection().connection.connection.create_function("lower", 1, lcase)
q = list() q = list()
author_terms = re.split(r'\s*&\s*', authr) author_terms = re.split(r'\s*&\s*', authr)
for author_term in author_terms: for author_term in author_terms:
@@ -918,8 +948,9 @@ class CalibreDB:
.filter(and_(Books.authors.any(and_(*q)), func.lower(Books.title).ilike("%" + title + "%"))).first() .filter(and_(Books.authors.any(and_(*q)), func.lower(Books.title).ilike("%" + title + "%"))).first()
def search_query(self, term, config, *join): def search_query(self, term, config, *join):
term.strip().lower() strip_whitespaces(term).lower()
self.session.connection().connection.connection.create_function("lower", 1, lcase) self.create_functions()
# self.session.connection().connection.connection.create_function("lower", 1, lcase)
q = list() q = list()
author_terms = re.split("[, ]+", term) author_terms = re.split("[, ]+", term)
for author_term in author_terms: for author_term in author_terms:
@@ -1017,7 +1048,7 @@ class CalibreDB:
lang.name = isoLanguages.get_language_name(get_locale(), lang.lang_code) lang.name = isoLanguages.get_language_name(get_locale(), lang.lang_code)
return sorted(languages, key=lambda x: x.name, reverse=reverse_order) return sorted(languages, key=lambda x: x.name, reverse=reverse_order)
def update_title_sort(self, config, conn=None): def create_functions(self, config=None):
# user defined sort function for calibre databases (Series, etc.) # user defined sort function for calibre databases (Series, etc.)
def _title_sort(title): def _title_sort(title):
# calibre sort stuff # calibre sort stuff
@@ -1026,56 +1057,27 @@ class CalibreDB:
if match: if match:
prep = match.group(1) prep = match.group(1)
title = title[len(prep):] + ', ' + prep title = title[len(prep):] + ', ' + prep
return title.strip() return strip_whitespaces(title)
try: try:
# sqlalchemy <1.4.24 # sqlalchemy <1.4.24 and sqlalchemy 2.0
conn = conn or self.session.connection().connection.driver_connection conn = self.session.connection().connection.driver_connection
except AttributeError: except AttributeError:
# sqlalchemy >1.4.24 and sqlalchemy 2.0 # sqlalchemy >1.4.24
conn = conn or self.session.connection().connection.connection conn = self.session.connection().connection.connection
try: try:
conn.create_function("title_sort", 1, _title_sort) if config:
conn.create_function("title_sort", 1, _title_sort)
conn.create_function('uuid4', 0, lambda: str(uuid4()))
conn.create_function("lower", 1, lcase)
except sqliteOperationalError: except sqliteOperationalError:
pass pass
@classmethod
def dispose(cls):
# global session
for inst in cls.instances:
old_session = inst.session
inst.session = None
if old_session:
try:
old_session.close()
except Exception:
pass
if old_session.bind:
try:
old_session.bind.dispose()
except Exception:
pass
for attr in list(Books.__dict__.keys()):
if attr.startswith("custom_column_"):
setattr(Books, attr, None)
for db_class in cc_classes.values():
Base.metadata.remove(db_class.__table__)
cc_classes.clear()
for table in reversed(Base.metadata.sorted_tables):
name = table.key
if name.startswith("custom_column_") or name.startswith("books_custom_column_"):
if table is not None:
Base.metadata.remove(table)
def reconnect_db(self, config, app_db_path): def reconnect_db(self, config, app_db_path):
self.dispose() # self.dispose()
self.engine.dispose() # self.engine.dispose()
self.setup_db(config.config_calibre_dir, app_db_path) self.setup_db(config.config_calibre_dir, app_db_path)
self.update_config(config) self.update_config(config, config.config_calibre_dir, app_db_path)
def lcase(s): def lcase(s):
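The db.py changes above drop the shared session_factory in favour of one session per request: the session property lazily stores a connection on flask.g and teardown closes it when the app context ends. A minimal standalone sketch of that pattern with a dummy resource (open_resource and the route are invented for illustration, not Calibre-Web code):

from flask import Flask, g

app = Flask(__name__)

def open_resource():
    # stand-in for CalibreDB.connect(); could be a database session, file handle, etc.
    return {"open": True}

def get_resource():
    # lazily create the per-request resource and cache it on flask.g
    if "resource" not in g:
        g.resource = open_resource()
    return g.resource

@app.teardown_appcontext
def close_resource(exception):
    # runs after every request, even if the view raised an exception
    resource = g.pop("resource", None)
    if resource is not None:
        resource["open"] = False  # stand-in for session.close()

@app.route("/")
def index():
    return "resource is open: {}".format(get_resource()["open"])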

View File

@@ -23,10 +23,10 @@ import zipfile
import json import json
from io import BytesIO from io import BytesIO
from flask_babel.speaklater import LazyString from flask_babel.speaklater import LazyString
from importlib.metadata import metadata
import os import os
from flask import send_file, __version__ from flask import send_file
from . import logger, config from . import logger, config
from .about import collect_stats from .about import collect_stats
@@ -49,7 +49,8 @@ def assemble_logfiles(file_name):
with open(f, 'rb') as fd: with open(f, 'rb') as fd:
shutil.copyfileobj(fd, wfd) shutil.copyfileobj(fd, wfd)
wfd.seek(0) wfd.seek(0)
if int(__version__.split('.')[0]) < 2: version = metadata("flask")["Version"]
if int(version.split('.')[0]) < 2:
return send_file(wfd, return send_file(wfd,
as_attachment=True, as_attachment=True,
attachment_filename=os.path.basename(file_name)) attachment_filename=os.path.basename(file_name))
@@ -72,7 +73,8 @@ def send_debug():
for fp in file_list: for fp in file_list:
zf.write(fp, os.path.basename(fp)) zf.write(fp, os.path.basename(fp))
memory_zip.seek(0) memory_zip.seek(0)
if int(__version__.split('.')[0]) < 2: version = metadata("flask")["Version"]
if int(version.split('.')[0]) < 2:
return send_file(memory_zip, return send_file(memory_zip,
as_attachment=True, as_attachment=True,
attachment_filename="Calibre-Web-debug-pack.zip") attachment_filename="Calibre-Web-debug-pack.zip")
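Both hunks above replace the removed flask.__version__ import with a lookup through importlib.metadata; a two-line sketch of the same lookup (the package name is just an example):

from importlib.metadata import metadata, version

print(metadata("flask")["Version"])  # metadata mapping, as used in the hunks above
print(version("flask"))              # shortcut returning only the version string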

View File

@@ -39,8 +39,24 @@ def load_dependencies(optional=False):
with open(req_path, 'r') as f: with open(req_path, 'r') as f:
for line in f: for line in f:
if not line.startswith('#') and not line == '\n' and not line.startswith('git'): if not line.startswith('#') and not line == '\n' and not line.startswith('git'):
res = re.match(r'(.*?)([<=>\s]+)([\d\.]+),?\s?([<=>\s]+)?([\d\.]+)?', line.strip()) res = re.match(r'(.*?)([<=>\s]+)([\d\.]+),?\s?([<=>\s]+)?([\d\.]+)?(?:\s?;\s?'
r'(?:(python_version)\s?([<=>]+)\s?\'([\d\.]+)\'|'
r'(sys_platform)\s?([\!=]+)\s?\'([\w]+)\'))?', line.strip())
try: try:
if res.group(7) and res.group(8):
val = res.group(8).split(".")
if not eval(str(sys.version_info[0]) + "." + "{:02d}".format(sys.version_info[1]) +
res.group(7) + val[0] + "." + "{:02d}".format(int(val[1]))):
continue
elif res.group(10) and res.group(11):
# only installed if platform is equal, don't check if platform is not equal
if res.group(10) == "==":
if sys.platform != res.group(11):
continue
# installed if platform is not equal, don't check if platform is equal
elif res.group(10) == "!=":
if sys.platform == res.group(11):
continue
if getattr(sys, 'frozen', False): if getattr(sys, 'frozen', False):
dep_version = exe_deps[res.group(1).lower().replace('_', '-')] dep_version = exe_deps[res.group(1).lower().replace('_', '-')]
else: else:
@@ -63,7 +79,7 @@ def dependency_check(optional=False):
deps = load_dependencies(optional) deps = load_dependencies(optional)
for dep in deps: for dep in deps:
try: try:
dep_version_int = [int(x) if x.isnumeric() else 0 for x in dep[0].split('.')] dep_version_int = [int(x) if x.isnumeric() else 0 for x in dep[0].split('.')[:3]]
low_check = [int(x) for x in dep[3].split('.')] low_check = [int(x) for x in dep[3].split('.')]
high_check = [int(x) for x in dep[5].split('.')] high_check = [int(x) for x in dep[5].split('.')]
except AttributeError: except AttributeError:
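The extended regex above lets load_dependencies honour simple environment markers (python_version and sys_platform) in requirements files. A standalone sketch of the same idea that compares version tuples instead of the patch's eval-based check; the regex, group names and example lines below are illustrative, not the ones from the patch:

import re
import sys

MARKER_RE = re.compile(
    r"(?P<name>[\w.\-]+)\s*(?P<spec>[^;]*?)"
    r"(?:\s*;\s*(?:python_version\s*(?P<py_op>[<>=!]+)\s*'(?P<py_ver>[\d.]+)'"
    r"|sys_platform\s*(?P<plat_op>[!=]+)\s*'(?P<plat>\w+)'))?\s*$"
)

def applies_to_this_interpreter(line):
    match = MARKER_RE.match(line.strip())
    if not match:
        return False
    if match.group("py_ver"):
        current = sys.version_info[:2]
        wanted = tuple(int(x) for x in match.group("py_ver").split(".")[:2])
        op = match.group("py_op")
        return {"<": current < wanted, "<=": current <= wanted,
                ">": current > wanted, ">=": current >= wanted,
                "==": current == wanted, "!=": current != wanted}.get(op, False)
    if match.group("plat"):
        # only install on a matching platform ("==") or on any other platform ("!=")
        if match.group("plat_op") == "==":
            return sys.platform == match.group("plat")
        return sys.platform != match.group("plat")
    return True  # no marker, the requirement always applies

print(applies_to_this_interpreter("Flask-Limiter>=2.3,<3.13; python_version>='3.7'"))
print(applies_to_this_interpreter("pywin32>=220; sys_platform=='win32'"))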

File diff suppressed because it is too large

View File

@@ -35,9 +35,7 @@ def do_calibre_export(book_id, book_format):
my_env = os.environ.copy() my_env = os.environ.copy()
if config.config_calibre_split: if config.config_calibre_split:
my_env['CALIBRE_OVERRIDE_DATABASE_PATH'] = os.path.join(config.config_calibre_dir, "metadata.db") my_env['CALIBRE_OVERRIDE_DATABASE_PATH'] = os.path.join(config.config_calibre_dir, "metadata.db")
library_path = config.config_calibre_split_dir library_path = config.get_book_path()
else:
library_path = config.config_calibre_dir
opf_command = [calibredb_binarypath, 'export', '--dont-write-opf', '--with-library', library_path, opf_command = [calibredb_binarypath, 'export', '--dont-write-opf', '--with-library', library_path,
'--to-dir', tmp_dir, '--formats', book_format, "--template", "{}".format(temp_file_name), '--to-dir', tmp_dir, '--formats', book_format, "--template", "{}".format(temp_file_name),
str(book_id)] str(book_id)]

View File

@@ -25,6 +25,7 @@ from . import config, logger
from .helper import split_authors from .helper import split_authors
from .epub_helper import get_content_opf, default_ns from .epub_helper import get_content_opf, default_ns
from .constants import BookMeta from .constants import BookMeta
from .string_helper import strip_whitespaces
log = logger.create() log = logger.create()
@@ -55,7 +56,7 @@ def get_epub_layout(book, book_data):
p = tree.xpath('/pkg:package/pkg:metadata', namespaces=default_ns)[0] p = tree.xpath('/pkg:package/pkg:metadata', namespaces=default_ns)[0]
layout = p.xpath('pkg:meta[@property="rendition:layout"]/text()', namespaces=default_ns) layout = p.xpath('pkg:meta[@property="rendition:layout"]/text()', namespaces=default_ns)
except (etree.XMLSyntaxError, KeyError, IndexError, OSError) as e: except (etree.XMLSyntaxError, KeyError, IndexError, OSError, UnicodeDecodeError) as e:
log.error("Could not parse epub metadata of book {} during kobo sync: {}".format(book.id, e)) log.error("Could not parse epub metadata of book {} during kobo sync: {}".format(book.id, e))
layout = [] layout = []
@@ -65,7 +66,7 @@ def get_epub_layout(book, book_data):
return layout[0] return layout[0]
def get_epub_info(tmp_file_path, original_file_name, original_file_extension): def get_epub_info(tmp_file_path, original_file_name, original_file_extension, no_cover_processing):
ns = { ns = {
'n': 'urn:oasis:names:tc:opendocument:xmlns:container', 'n': 'urn:oasis:names:tc:opendocument:xmlns:container',
'pkg': 'http://www.idpf.org/2007/opf', 'pkg': 'http://www.idpf.org/2007/opf',
@@ -90,7 +91,7 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
elif s == 'date': elif s == 'date':
epub_metadata[s] = tmp[0][:10] epub_metadata[s] = tmp[0][:10]
else: else:
epub_metadata[s] = tmp[0].strip() epub_metadata[s] = strip_whitespaces(tmp[0])
else: else:
epub_metadata[s] = 'Unknown' epub_metadata[s] = 'Unknown'
@@ -116,7 +117,10 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
epub_metadata = parse_epub_series(ns, tree, epub_metadata) epub_metadata = parse_epub_series(ns, tree, epub_metadata)
epub_zip = zipfile.ZipFile(tmp_file_path) epub_zip = zipfile.ZipFile(tmp_file_path)
cover_file = parse_epub_cover(ns, tree, epub_zip, cover_path, tmp_file_path) if not no_cover_processing:
cover_file = parse_epub_cover(ns, tree, epub_zip, cover_path, tmp_file_path)
else:
cover_file = None
identifiers = [] identifiers = []
for node in p.xpath('dc:identifier', namespaces=ns): for node in p.xpath('dc:identifier', namespaces=ns):

View File

@@ -37,17 +37,31 @@ def error_http(error):
error_code="Error {0}".format(error.code), error_code="Error {0}".format(error.code),
error_name=error.name, error_name=error.name,
issue=False, issue=False,
goto_admin=False,
unconfigured=not config.db_configured, unconfigured=not config.db_configured,
instance=config.config_calibre_web_title instance=config.config_calibre_web_title
), error.code ), error.code
def internal_error(error): def internal_error(error):
if (isinstance(error.original_exception, AttributeError) and
error.original_exception.args[0] == "'NoneType' object has no attribute 'query'"
and error.original_exception.name == "query"):
return render_template('http_error.html',
error_code="Database Error",
error_name='The library used is invalid or has permission errors',
issue=False,
goto_admin=True,
unconfigured=False,
error_stack="",
instance=config.config_calibre_web_title
), 500
return render_template('http_error.html', return render_template('http_error.html',
error_code="500 Internal Server Error", error_code="500 Internal Server Error",
error_name='The server encountered an internal error and was unable to complete your ' error_name='The server encountered an internal error and was unable to complete your '
'request. There is an error in the application.', 'request. There is an error in the application.',
issue=True, issue=True,
goto_admin=False,
unconfigured=False, unconfigured=False,
error_stack=traceback.format_exc().split("\n"), error_stack=traceback.format_exc().split("\n"),
instance=config.config_calibre_web_title instance=config.config_calibre_web_title

View File

@@ -34,6 +34,15 @@ except ImportError as e:
error = "Cannot import python-magic, checking uploaded file metadata will not work: {}".format(e) error = "Cannot import python-magic, checking uploaded file metadata will not work: {}".format(e)
def get_mimetype(ext):
# overwrite some mimetypes for proper file detection
mimes = {".fb2": "text/xml",
".cbz": "application/zip",
".cbr": "application/x-rar"
}
return mimes.get(ext, mimetypes.types_map[ext])
def get_temp_dir(): def get_temp_dir():
tmp_dir = os.path.join(gettempdir(), 'calibre_web') tmp_dir = os.path.join(gettempdir(), 'calibre_web')
if not os.path.isdir(tmp_dir): if not os.path.isdir(tmp_dir):
@@ -54,7 +63,7 @@ def validate_mime_type(file_buffer, allowed_extensions):
allowed_mimetypes = list() allowed_mimetypes = list()
for x in allowed_extensions: for x in allowed_extensions:
try: try:
allowed_mimetypes.append(mimetypes.types_map["." + x]) allowed_mimetypes.append(get_mimetype("." + x))
except KeyError: except KeyError:
log.error("Unkown mimetype for Extension: {}".format(x)) log.error("Unkown mimetype for Extension: {}".format(x))
tmp_mime_type = mime.from_buffer(file_buffer.read()) tmp_mime_type = mime.from_buffer(file_buffer.read())

View File

@@ -25,12 +25,12 @@ import re
import regex import regex
import shutil import shutil
import socket import socket
from datetime import datetime, timedelta from datetime import datetime, timedelta, timezone
import requests import requests
import unidecode import unidecode
from uuid import uuid4 from uuid import uuid4
from flask import send_from_directory, make_response, abort, url_for, Response from flask import send_from_directory, make_response, abort, url_for, Response, request
from flask_babel import gettext as _ from flask_babel import gettext as _
from flask_babel import lazy_gettext as N_ from flask_babel import lazy_gettext as N_
from flask_babel import get_locale from flask_babel import get_locale
@@ -43,15 +43,16 @@ from markupsafe import escape
from urllib.parse import quote from urllib.parse import quote
try: try:
import advocate from . import cw_advocate
from advocate.exceptions import UnacceptableAddressException from .cw_advocate.exceptions import UnacceptableAddressException
use_advocate = True use_advocate = True
except ImportError: except ImportError as e:
use_advocate = False use_advocate = False
advocate = requests advocate = requests
UnacceptableAddressException = MissingSchema = BaseException UnacceptableAddressException = MissingSchema = BaseException
from . import calibre_db, cli_param from . import calibre_db, cli_param
from .string_helper import strip_whitespaces
from .tasks.convert import TaskConvert from .tasks.convert import TaskConvert
from . import logger, config, db, ub, fs from . import logger, config, db, ub, fs
from . import gdriveutils as gd from . import gdriveutils as gd
@@ -118,7 +119,7 @@ def convert_book_format(book_id, calibre_path, old_book_format, new_book_format,
# Texts are not lazy translated as they are supposed to get send out as is # Texts are not lazy translated as they are supposed to get send out as is
def send_test_mail(ereader_mail, user_name): def send_test_mail(ereader_mail, user_name):
for email in ereader_mail.split(','): for email in ereader_mail.split(','):
email = email.strip() email = strip_whitespaces(email)
WorkerThread.add(user_name, TaskEmail(_('Calibre-Web Test Email'), None, None, WorkerThread.add(user_name, TaskEmail(_('Calibre-Web Test Email'), None, None,
config.get_mail_settings(), email, N_("Test Email"), config.get_mail_settings(), email, N_("Test Email"),
_('This Email has been sent via Calibre-Web.'))) _('This Email has been sent via Calibre-Web.')))
@@ -198,7 +199,7 @@ def check_send_to_ereader(entry):
# Check if a reader is existing for any of the book formats, if not, return empty list, otherwise return # Check if a reader is existing for any of the book formats, if not, return empty list, otherwise return
# list with supported formats # list with supported formats
def check_read_formats(entry): def check_read_formats(entry):
extensions_reader = {'TXT', 'PDF', 'EPUB', 'CBZ', 'CBT', 'CBR', 'DJVU', 'DJV'} extensions_reader = {'TXT', 'PDF', 'EPUB', 'KEPUB', 'CBZ', 'CBT', 'CBR', 'DJVU', 'DJV'}
book_formats = list() book_formats = list()
if len(entry.data): if len(entry.data):
for ele in iter(entry.data): for ele in iter(entry.data):
@@ -228,7 +229,7 @@ def send_mail(book_id, book_format, convert, ereader_mail, calibrepath, user_id)
link = '<a href="{}">{}</a>'.format(url_for('web.show_book', book_id=book_id), escape(book.title)) link = '<a href="{}">{}</a>'.format(url_for('web.show_book', book_id=book_id), escape(book.title))
email_text = N_("%(book)s send to eReader", book=link) email_text = N_("%(book)s send to eReader", book=link)
for email in ereader_mail.split(','): for email in ereader_mail.split(','):
email = email.strip() email = strip_whitespaces(email)
WorkerThread.add(user_id, TaskEmail(_("Send to eReader"), book.path, converted_file_name, WorkerThread.add(user_id, TaskEmail(_("Send to eReader"), book.path, converted_file_name,
config.get_mail_settings(), email, config.get_mail_settings(), email,
email_text, _('This Email has been sent via Calibre-Web.'), book.id)) email_text, _('This Email has been sent via Calibre-Web.'), book.id))
@@ -236,7 +237,7 @@ def send_mail(book_id, book_format, convert, ereader_mail, calibrepath, user_id)
return _("The requested file could not be read. Maybe wrong permissions?") return _("The requested file could not be read. Maybe wrong permissions?")
def get_valid_filename(value, replace_whitespace=True, chars=128): def get_valid_filename(value, replace_whitespace=True, chars=128, force_unidecode=False):
""" """
Returns the given string converted to a string that can be used for a clean Returns the given string converted to a string that can be used for a clean
filename. Limits num characters to 128 max. filename. Limits num characters to 128 max.
@@ -244,7 +245,7 @@ def get_valid_filename(value, replace_whitespace=True, chars=128):
if value[-1:] == '.': if value[-1:] == '.':
value = value[:-1]+'_' value = value[:-1]+'_'
value = value.replace("/", "_").replace(":", "_").strip('\0') value = value.replace("/", "_").replace(":", "_").strip('\0')
if config.config_unicode_filename: if config.config_unicode_filename or force_unidecode:
value = (unidecode.unidecode(value)) value = (unidecode.unidecode(value))
if replace_whitespace: if replace_whitespace:
# *+:\"/<>? are replaced by _ # *+:\"/<>? are replaced by _
@@ -252,7 +253,7 @@ def get_valid_filename(value, replace_whitespace=True, chars=128):
# pipe has to be replaced with comma # pipe has to be replaced with comma
value = re.sub(r'[|]+', ',', value, flags=re.U) value = re.sub(r'[|]+', ',', value, flags=re.U)
value = value.encode('utf-8')[:chars].decode('utf-8', errors='ignore').strip() value = strip_whitespaces(value.encode('utf-8')[:chars].decode('utf-8', errors='ignore'))
if not value: if not value:
raise ValueError("Filename cannot be empty") raise ValueError("Filename cannot be empty")
@@ -267,11 +268,11 @@ def split_authors(values):
commas = author.count(',') commas = author.count(',')
if commas == 1: if commas == 1:
author_split = author.split(',') author_split = author.split(',')
authors_list.append(author_split[1].strip() + ' ' + author_split[0].strip()) authors_list.append(strip_whitespaces(author_split[1]) + ' ' + strip_whitespaces(author_split[0]))
elif commas > 1: elif commas > 1:
authors_list.extend([x.strip() for x in author.split(',')]) authors_list.extend([strip_whitespaces(x) for x in author.split(',')])
else: else:
authors_list.append(author.strip()) authors_list.append(strip_whitespaces(author))
return authors_list return authors_list
@@ -327,7 +328,7 @@ def edit_book_read_status(book_id, read_status=None):
ub.session_commit("Book {} readbit toggled".format(book_id)) ub.session_commit("Book {} readbit toggled".format(book_id))
else: else:
try: try:
calibre_db.update_title_sort(config) calibre_db.create_functions(config)
book = calibre_db.get_filtered_book(book_id) book = calibre_db.get_filtered_book(book_id)
book_read_status = getattr(book, 'custom_column_' + str(config.config_read_column)) book_read_status = getattr(book, 'custom_column_' + str(config.config_read_column))
if len(book_read_status): if len(book_read_status):
@@ -417,8 +418,6 @@ def rename_author_path(first_author, old_author_dir, renamed_author, calibre_pat
# Create new_author_dir from parameter or from database # Create new_author_dir from parameter or from database
# Create new title_dir from database and add id # Create new title_dir from database and add id
new_authordir = get_valid_filename(first_author, chars=96) new_authordir = get_valid_filename(first_author, chars=96)
# new_author = calibre_db.session.query(db.Authors).filter(db.Authors.name == renamed_author).first()
# old_author_dir = get_valid_filename(old_author_name, chars=96)
new_author_rename_dir = get_valid_filename(renamed_author, chars=96) new_author_rename_dir = get_valid_filename(renamed_author, chars=96)
if gdrive: if gdrive:
g_file = gd.getFileFromEbooksFolder(None, old_author_dir) g_file = gd.getFileFromEbooksFolder(None, old_author_dir)
@@ -467,7 +466,6 @@ def update_dir_structure_file(book_id, calibre_path, original_filepath, new_auth
db_filename, db_filename,
original_filepath, original_filepath,
path) path)
# old_path = os.path.join(calibre_path, author_dir, new_title_dir).replace('\\', '/')
new_path = os.path.join(calibre_path, new_author_dir, new_title_dir).replace('\\', '/') new_path = os.path.join(calibre_path, new_author_dir, new_title_dir).replace('\\', '/')
all_new_name = get_valid_filename(local_book.title, chars=42) + ' - ' \ all_new_name = get_valid_filename(local_book.title, chars=42) + ' - ' \
+ get_valid_filename(new_author, chars=42) + get_valid_filename(new_author, chars=42)
@@ -476,8 +474,6 @@ def update_dir_structure_file(book_id, calibre_path, original_filepath, new_auth
if error: if error:
return error return error
# Rename all files from old names to new names
return False return False
@@ -489,7 +485,7 @@ def upload_new_file_gdrive(book_id, first_author, title, title_dir, original_fil
title_dir + " (" + str(book_id) + ")") title_dir + " (" + str(book_id) + ")")
book.path = gdrive_path.replace("\\", "/") book.path = gdrive_path.replace("\\", "/")
gd.uploadFileToEbooksFolder(os.path.join(gdrive_path, file_name).replace("\\", "/"), original_filepath) gd.uploadFileToEbooksFolder(os.path.join(gdrive_path, file_name).replace("\\", "/"), original_filepath)
return False # rename_files_on_change(first_author, renamed_author, local_book=book, gdrive=True) return False
def update_dir_structure_gdrive(book_id, first_author): def update_dir_structure_gdrive(book_id, first_author):
@@ -517,24 +513,26 @@ def update_dir_structure_gdrive(book_id, first_author):
book.path = new_authordir + '/' + book.path.split('/')[1] book.path = new_authordir + '/' + book.path.split('/')[1]
gd.updateDatabaseOnEdit(g_file['id'], book.path) gd.updateDatabaseOnEdit(g_file['id'], book.path)
else: else:
return _('File %(file)s not found on Google Drive', file=authordir) # file not found''' return _('File %(file)s not found on Google Drive', file=authordir) # file not found
if titledir != new_titledir or authordir != new_authordir : if titledir != new_titledir or authordir != new_authordir :
all_new_name = get_valid_filename(book.title, chars=42) + ' - ' \ all_new_name = get_valid_filename(book.title, chars=42) + ' - ' \
+ get_valid_filename(new_authordir, chars=42) + get_valid_filename(new_authordir, chars=42)
rename_all_files_on_change(book, book.path, book.path, all_new_name, gdrive=True) # todo: Move filenames on gdrive rename_all_files_on_change(book, book.path, book.path, all_new_name, gdrive=True) # todo: Move filenames on gdrive
# change location in database to new author/title path
# book.path = os.path.join(authordir, new_titledir).replace('\\', '/')
return False return False
def move_files_on_change(calibre_path, new_author_dir, new_titledir, localbook, db_filename, original_filepath, path): def move_files_on_change(calibre_path, new_author_dir, new_titledir, localbook, db_filename, original_filepath, path):
new_path = os.path.join(calibre_path, new_author_dir, new_titledir) new_path = os.path.join(calibre_path, new_author_dir, new_titledir)
# new_name = get_valid_filename(localbook.title, chars=96) + ' - ' + new_author_dir
try: try:
if original_filepath: if original_filepath:
if not os.path.isdir(new_path): if not os.path.isdir(new_path):
os.makedirs(new_path) os.makedirs(new_path)
shutil.move(original_filepath, os.path.join(new_path, db_filename)) try:
shutil.move(original_filepath, os.path.join(new_path, db_filename))
except OSError:
log.error("Rename title from {} to {} failed with error, trying to "
"move without metadata".format(path, new_path))
shutil.move(original_filepath, os.path.join(new_path, db_filename), copy_function=shutil.copy)
log.debug("Moving title: %s to %s", original_filepath, new_path) log.debug("Moving title: %s to %s", original_filepath, new_path)
else: else:
# Check new path is not valid path # Check new path is not valid path
@@ -661,7 +659,7 @@ def check_email(email):
def check_username(username): def check_username(username):
username = username.strip() username = strip_whitespaces(username)
if ub.session.query(ub.User).filter(func.lower(ub.User.name) == username.lower()).scalar(): if ub.session.query(ub.User).filter(func.lower(ub.User.name) == username.lower()).scalar():
log.error("This username is already taken") log.error("This username is already taken")
raise Exception(_("This username is already taken")) raise Exception(_("This username is already taken"))
@@ -669,16 +667,18 @@ def check_username(username):
def valid_email(emails): def valid_email(emails):
valid_emails = []
for email in emails.split(','): for email in emails.split(','):
email = email.strip() email = strip_whitespaces(email)
# if email is not deleted # if email is not deleted
if email: if email:
# Regex according to https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input/email#validation # Regex according to https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input/email#validation
if not re.search(r"^[\w.!#$%&'*+\\/=?^_`{|}~-]+@[\w](?:[\w-]{0,61}[\w])?(?:\.[\w](?:[\w-]{0,61}[\w])?)*$", if not re.search(r"^[\w.!#$%&'*+\\/=?^_`{|}~-]+@[\w](?:[\w-]{0,61}[\w])?(?:\.[\w](?:[\w-]{0,61}[\w])?)*$",
email): email):
log.error("Invalid Email address format") log.error("Invalid Email address format for {}".format(email))
raise Exception(_("Invalid Email address format")) raise Exception(_("Invalid Email address format"))
return email valid_emails.append(email)
return ",".join(valid_emails)
def valid_password(check_password): def valid_password(check_password):
@@ -788,24 +788,23 @@ def get_book_cover_internal(book, resolution=None):
def get_book_cover_thumbnail(book, resolution): def get_book_cover_thumbnail(book, resolution):
if book and book.has_cover: if book and book.has_cover:
return ub.session \ return (ub.session
.query(ub.Thumbnail) \ .query(ub.Thumbnail)
.filter(ub.Thumbnail.type == THUMBNAIL_TYPE_COVER) \ .filter(ub.Thumbnail.type == THUMBNAIL_TYPE_COVER)
.filter(ub.Thumbnail.entity_id == book.id) \ .filter(ub.Thumbnail.entity_id == book.id)
.filter(ub.Thumbnail.resolution == resolution) \ .filter(ub.Thumbnail.resolution == resolution)
.filter(or_(ub.Thumbnail.expiration.is_(None), ub.Thumbnail.expiration > datetime.utcnow())) \ .filter(or_(ub.Thumbnail.expiration.is_(None), ub.Thumbnail.expiration > datetime.now(timezone.utc)))
.first() .first())
def get_series_thumbnail_on_failure(series_id, resolution): def get_series_thumbnail_on_failure(series_id, resolution):
book = calibre_db.session \ book = (calibre_db.session
.query(db.Books) \ .query(db.Books)
.join(db.books_series_link) \ .join(db.books_series_link)
.join(db.Series) \ .join(db.Series)
.filter(db.Series.id == series_id) \ .filter(db.Series.id == series_id)
.filter(db.Books.has_cover == 1) \ .filter(db.Books.has_cover == 1)
.first() .first())
return get_book_cover_internal(book, resolution=resolution) return get_book_cover_internal(book, resolution=resolution)
@@ -827,13 +826,13 @@ def get_series_cover_internal(series_id, resolution=None):
def get_series_thumbnail(series_id, resolution): def get_series_thumbnail(series_id, resolution):
return ub.session \ return (ub.session
.query(ub.Thumbnail) \ .query(ub.Thumbnail)
.filter(ub.Thumbnail.type == THUMBNAIL_TYPE_SERIES) \ .filter(ub.Thumbnail.type == THUMBNAIL_TYPE_SERIES)
.filter(ub.Thumbnail.entity_id == series_id) \ .filter(ub.Thumbnail.entity_id == series_id)
.filter(ub.Thumbnail.resolution == resolution) \ .filter(ub.Thumbnail.resolution == resolution)
.filter(or_(ub.Thumbnail.expiration.is_(None), ub.Thumbnail.expiration > datetime.utcnow())) \ .filter(or_(ub.Thumbnail.expiration.is_(None), ub.Thumbnail.expiration > datetime.now(timezone.utc)))
.first() .first())
# saves book cover from url # saves book cover from url
@@ -842,7 +841,7 @@ def save_cover_from_url(url, book_path):
if cli_param.allow_localhost: if cli_param.allow_localhost:
img = requests.get(url, timeout=(10, 200), allow_redirects=False) # ToDo: Error Handling img = requests.get(url, timeout=(10, 200), allow_redirects=False) # ToDo: Error Handling
elif use_advocate: elif use_advocate:
img = advocate.get(url, timeout=(10, 200), allow_redirects=False) # ToDo: Error Handling img = cw_advocate.get(url, timeout=(10, 200), allow_redirects=False) # ToDo: Error Handling
else: else:
log.error("python module advocate is not installed but is needed") log.error("python module advocate is not installed but is needed")
return False, _("Python module 'advocate' is not installed but is needed for cover uploads") return False, _("Python module 'advocate' is not installed but is needed for cover uploads")
@@ -906,7 +905,7 @@ def save_cover(img, book_path):
else: else:
imgc = Image(blob=io.BytesIO(img.content)) imgc = Image(blob=io.BytesIO(img.content))
imgc.format = 'jpeg' imgc.format = 'jpeg'
imgc.transform_colorspace("rgb") imgc.transform_colorspace("srgb")
img = imgc img = imgc
except (BlobError, MissingDelegateError): except (BlobError, MissingDelegateError):
log.error("Invalid cover file content") log.error("Invalid cover file content")
@@ -975,7 +974,8 @@ def do_download_file(book, book_format, client, data, headers):
# ToDo Check headers parameter # ToDo Check headers parameter
for element in headers: for element in headers:
response.headers[element[0]] = element[1] response.headers[element[0]] = element[1]
log.info('Downloading file: {}'.format(os.path.join(filename, book_name + "." + book_format))) log.info('Downloading file: \'%s\' by %s - %s', format(os.path.join(filename, book_name + "." + book_format)),
current_user.name, request.headers.get('X-Forwarded-For', request.remote_addr))
return response return response
@@ -1105,11 +1105,14 @@ def get_download_link(book_id, book_format, client):
file_name = book.title file_name = book.title
if len(book.authors) > 0: if len(book.authors) > 0:
file_name = file_name + ' - ' + book.authors[0].name file_name = file_name + ' - ' + book.authors[0].name
file_name = get_valid_filename(file_name, replace_whitespace=False) if client == "kindle":
file_name = get_valid_filename(file_name, replace_whitespace=False, force_unidecode=True)
else:
file_name = quote(get_valid_filename(file_name, replace_whitespace=False))
headers = Headers() headers = Headers()
headers["Content-Type"] = mimetypes.types_map.get('.' + book_format, "application/octet-stream") headers["Content-Type"] = mimetypes.types_map.get('.' + book_format, "application/octet-stream")
headers["Content-Disposition"] = "attachment; filename=%s.%s; filename*=UTF-8''%s.%s" % ( headers["Content-Disposition"] = ('attachment; filename="{}.{}"; filename*=UTF-8\'\'{}.{}').format(
quote(file_name), book_format, quote(file_name), book_format) file_name, book_format, file_name, book_format)
return do_download_file(book, book_format, client, data1, headers) return do_download_file(book, book_format, client, data1, headers)
else: else:
log.error("Book id {} not found for downloading".format(book_id)) log.error("Book id {} not found for downloading".format(book_id))
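The download-link change above percent-encodes the filename and sends it both as a plain filename and as an RFC 5987 filename* parameter, so titles with non-ASCII characters survive on picky clients (Kindle gets a transliterated ASCII name instead). A small standalone sketch of the same header format; the title used here is made up:

from urllib.parse import quote

def content_disposition(file_name, extension):
    # plain filename= for old clients, filename*= (UTF-8, RFC 5987) for the rest
    quoted = quote(file_name)
    return "attachment; filename=\"{0}.{1}\"; filename*=UTF-8''{0}.{1}".format(quoted, extension)

print(content_disposition("Čarovnik iz Oza", "epub"))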

View File

@@ -15,24 +15,17 @@
# #
# You should have received a copy of the GNU General Public License # You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
import sys
from .iso_language_names import LANGUAGE_NAMES as _LANGUAGE_NAMES from .iso_language_names import LANGUAGE_NAMES as _LANGUAGE_NAMES
from . import logger from . import logger
from .string_helper import strip_whitespaces
log = logger.create() log = logger.create()
try: try:
from iso639 import languages, __version__
get = languages.get
except ImportError:
from pycountry import languages as pyc_languages from pycountry import languages as pyc_languages
try:
import pkg_resources
__version__ = pkg_resources.get_distribution('pycountry').version + ' (PyCountry)'
del pkg_resources
except (ImportError, Exception):
__version__ = "? (PyCountry)"
def _copy_fields(l): def _copy_fields(l):
l.part1 = getattr(l, 'alpha_2', None) l.part1 = getattr(l, 'alpha_2', None)
@@ -46,6 +39,11 @@ except ImportError:
return _copy_fields(pyc_languages.get(alpha_2=part1)) return _copy_fields(pyc_languages.get(alpha_2=part1))
if name is not None: if name is not None:
return _copy_fields(pyc_languages.get(name=name)) return _copy_fields(pyc_languages.get(name=name))
except ImportError as ex:
if sys.version_info >= (3, 12):
print("Python 3.12 isn't compatible with iso-639. Please install pycountry.")
from iso639 import languages
get = languages.get
def get_language_names(locale): def get_language_names(locale):
@@ -69,20 +67,20 @@ def get_language_name(locale, lang_code):
return name return name
def get_language_codes(locale, language_names, remainder=None): def get_language_code_from_name(locale, language_names, remainder=None):
language_names = set(x.strip().lower() for x in language_names if x) language_names = set(strip_whitespaces(x).lower() for x in language_names if x)
lang = list() lang = list()
for k, v in get_language_names(locale).items(): for key, val in get_language_names(locale).items():
v = v.lower() val = val.lower()
if v in language_names: if val in language_names:
lang.append(k) lang.append(key)
language_names.remove(v) language_names.remove(val)
if remainder is not None and language_names: if remainder is not None and language_names:
remainder.extend(language_names) remainder.extend(language_names)
return lang return lang
def get_valid_language_codes(locale, language_names, remainder=None): def get_valid_language_codes_from_code(locale, language_names, remainder=None):
lang = list() lang = list()
if "" in language_names: if "" in language_names:
language_names.remove("") language_names.remove("")
@@ -103,6 +101,6 @@ def get_lang3(lang):
ret_value = lang ret_value = lang
else: else:
ret_value = "" ret_value = ""
except KeyError: except (KeyError, AttributeError):
ret_value = lang ret_value = lang
return ret_value return ret_value

View File

@@ -8138,6 +8138,384 @@ LANGUAGE_NAMES = {
"zul": "Zulu", "zul": "Zulu",
"zun": "Zuni" "zun": "Zuni"
}, },
"sl": {
"abk": "abhazijski",
"ace": "achinese",
"ach": "Acoli",
"ada": "Adangme",
"ady": "Adyghe",
"aar": "afarski",
"afh": "Afrihili",
"afr": "afrikanski",
"ain": "Ainu (Japan)",
"aka": "Akan",
"akk": "akadski",
"sqi": "albanščina",
"ale": "aleutski",
"amh": "amharski",
"anp": "Angika",
"ara": "arabski",
"arg": "aragonski",
"arp": "Arapaho",
"arw": "araukanski",
"hye": "armenščina",
"asm": "asamski",
"ast": "Asturian",
"ava": "avarski",
"ave": "avestijski jeziki",
"awa": "Awadhi",
"aym": "Aymara",
"aze": "azerbajdžanski",
"ban": "balijščina",
"bal": "belučijski",
"bam": "bambarski",
"bas": "Basa (Cameroon)",
"bak": "baškirski",
"eus": "baskovščina",
"bej": "Beja",
"bel": "beloruščina",
"bem": "Bemba (Zambia)",
"ben": "bengalščina",
"bit": "Berinomo",
"bho": "Bhojpuri",
"bik": "bikolščina",
"byn": "Bilin",
"bin": "Bini",
"bis": "bislama",
"zbl": "Blissymbols",
"bos": "bošnjaščina",
"bra": "Braj",
"bre": "bretonščina",
"bug": "buginščina",
"bul": "bolgarščina",
"bua": "burjatščina",
"mya": "burmanščina",
"cad": "kadajščina?",
"cat": "katalonščina",
"ceb": "cebuanščina",
"chg": "Chagatai",
"cha": "čamorščina",
"che": "čečenščina",
"chr": "čerokeščina",
"chy": "čejenščina",
"chb": "čibčevščina",
"zho": "kitajščina",
"chn": "Chinook jargon",
"chp": "čipevščina",
"cho": "Choctaw",
"cht": "Cholón",
"chk": "Chuukese",
"chv": "čuvaščina",
"cop": "koptščina",
"cor": "kornijščina",
"cos": "korzijščina",
"cre": "krijščina",
"mus": "Creek",
"hrv": "hrvaščina",
"ces": "češčina",
"dak": "Dakota",
"dan": "danski",
"dar": "Dargwa",
"del": "Delaware",
"div": "Dhivehi",
"din": "Dinka",
"doi": "Dogri (macrolanguage)",
"dgr": "Dogrib",
"dua": "Duala",
"nld": "nizozemščina",
"dse": "Dutch Sign Language",
"dyu": "Dyula",
"dzo": "dzongkha",
"efi": "Efik",
"egy": "egipčanski",
"eka": "Ekajuk",
"elx": "elamščina",
"eng": "angleščina",
"enu": "Enu",
"myv": "Erzya",
"epo": "esperanto",
"est": "estonščina",
"ewe": "evenščina",
"ewo": "Ewondo",
"fan": "Fang (Equatorial Guinea)",
"fat": "Fanti",
"fao": "ferščina",
"fij": "fidžijščina",
"fil": "Filipino",
"fin": "finščina",
"fon": "Fon",
"fra": "francoščina",
"fur": "furlanščina",
"ful": "fulščina",
"gaa": "Ga",
"glg": "Galician",
"lug": "Ganda",
"gay": "gajščina?",
"gba": "Gbaya (Central African Republic)",
"hmj": "Ge",
"gez": "etiopščina?",
"kat": "gruzinščina",
"deu": "nemški",
"gil": "gilbertščina",
"gon": "Gondi",
"gor": "Gorontalo",
"got": "gotščina",
"grb": "Grebo",
"grn": "gvaranijščina",
"guj": "gudžaratščina",
"gwi": "Gwichʼin",
"hai": "haidščina",
"hau": "havščina",
"haw": "havajščina",
"heb": "hebrejščina",
"her": "Herero",
"hil": "hilingajnonščina",
"hin": "hindijščina",
"hmo": "hiri motu",
"hit": "hetitščina",
"hmn": "hmonščina; miaojščina",
"hun": "madžarščina",
"hup": "hupščina",
"iba": "ibanščina",
"isl": "islandščina",
"ido": "Ido",
"ibo": "Igbo",
"ilo": "Iloko",
"ind": "indonezijščina",
"inh": "inguščina",
"ina": "interlingva",
"ile": "Interlingue",
"iku": "inuktituščina",
"ipk": "Inupiaq",
"gle": "irščina",
"ita": "italijanščina",
"jpn": "japonščina",
"jav": "javanščina",
"jrb": "Judeo-Arabic",
"jpr": "Judeo-Persian",
"kbd": "kabardinščina",
"kab": "Kabyle",
"kac": "Kachin",
"kal": "Kalaallisut",
"xal": "Kalmyk",
"kam": "Kamba (Kenya)",
"kan": "kanareščina",
"kau": "Kanuri",
"kaa": "Kara-Kalpak",
"krc": "Karachay-Balkar",
"krl": "Karelian",
"kas": "kašmirščina",
"csb": "Kashubian",
"kaw": "kavi",
"kaz": "kazaščina",
"kha": "Khasi",
"kho": "Khotanese",
"kik": "kikujščina",
"kmb": "Kimbundu",
"kin": "Kinyarwanda",
"kir": "kirgiščina",
"tlh": "Klingon",
"kom": "komijščina",
"kon": "Kongo",
"kok": "Konkani (macrolanguage)",
"kor": "korejščina",
"kos": "Kosraean",
"kpe": "Kpelle",
"kua": "Kuanyama",
"kum": "kumiščina",
"kur": "kurdščina",
"kru": "Kurukh",
"kut": "kutenajščina",
"lad": "ladinščina",
"lah": "Lahnda",
"lam": "Lamba",
"lao": "laoščina",
"lat": "latinščina",
"lav": "latvijščina",
"lez": "lezginščina",
"lim": "Limburgan",
"lin": "lingala",
"lit": "litvanščina",
"jbo": "Lojban",
"loz": "Lozi",
"lub": "Luba-Katanga",
"lua": "lubalulujščina",
"lui": "Luiseno",
"smj": "Lule Sami",
"lun": "Lunda",
"luo": "Luo (Kenya and Tanzania)",
"lus": "Lushai",
"ltz": "Luxembourgish",
"mkd": "makedonščina",
"mad": "madurščina",
"mag": "Magahi",
"mai": "Maithili",
"mak": "makasarščina",
"mlg": "malgaščina",
"msa": "Malay (macrolanguage)",
"mal": "malajalščina",
"mlt": "malteščina",
"mnc": "Manchu",
"mdr": "Mandar",
"man": "Mandingo",
"mni": "manipurščina",
"glv": "manska gelščina",
"mri": "maorščina",
"arn": "Mapudungun",
"mar": "maratščina",
"chm": "Mari (Russia)",
"mah": "Marshallese",
"mwr": "Marwari",
"mas": "masajščina",
"men": "Mende (Sierra Leone)",
"mic": "Mi'kmaq",
"min": "Minangkabau",
"mwl": "Mirandese",
"moh": "mohoščina",
"mdf": "Moksha",
"lol": "Mongo",
"mon": "mongolščina",
"mos": "mosanščina",
"mul": "Več jezikov",
"nqo": "N'Ko",
"nau": "Nauru",
"nav": "navaščina",
"ndo": "Ndonga",
"nap": "napolitanščina",
"nia": "niaščina",
"niu": "niuejščina",
"zxx": "No linguistic content",
"nog": "Nogai",
"nor": "norveščina",
"nob": "Norwegian Bokmål",
"nno": "norveščina; nynorsk",
"nym": "Nyamwezi",
"nya": "Nyanja",
"nyn": "Nyankole",
"nyo": "Nyoro",
"nzi": "Nzima",
"oci": "Occitan (post 1500)",
"oji": "Ojibwa",
"orm": "Oromo",
"osa": "Osage",
"oss": "Ossetian",
"pal": "Pahlavi",
"pau": "palavanščina",
"pli": "Pali",
"pam": "Pampanga",
"pag": "pangasinanščina",
"pan": "Panjabi",
"pap": "papiamentu",
"fas": "perzijščina",
"phn": "feničanščina",
"pon": "Pohnpeian",
"pol": "poljščina",
"por": "portugalđščina",
"pus": "paštu",
"que": "Quechua",
"raj": "radžastanščina",
"rap": "rapanujščina",
"ron": "romunščina",
"roh": "Romansh",
"rom": "romščina",
"run": "rundščina",
"rus": "ruščina",
"smo": "samoanščina",
"sad": "Sandawe",
"sag": "Sango",
"san": "sanskrt",
"sat": "santalščina",
"srd": "sardinščina",
"sas": "Sasak",
"sco": "škotščina",
"sel": "selkupščina",
"srp": "srbščina",
"srr": "Serer",
"shn": "šanščina",
"sna": "šonščina",
"scn": "sicilijanščina",
"sid": "Sidamo",
"bla": "Siksika",
"snd": "sindščina",
"sin": "Sinhala",
"den": "Slave (Athapascan)",
"slk": "slovaščina",
"slv": "slovenščina",
"sog": "Sogdian",
"som": "Somali",
"snk": "Soninke",
"spa": "španščina",
"srn": "Sranan Tongo",
"suk": "Sukuma",
"sux": "sumerščina",
"sun": "sundščina",
"sus": "susuamijščina?",
"swa": "Swahili (macrolanguage)",
"ssw": "svazijščina?",
"swe": "švedščina",
"syr": "sirščina",
"tgl": "tagaloščina",
"tah": "tahitijščina",
"tgk": "tadžiščina",
"tmh": "Tamashek",
"tam": "tamilščina",
"tat": "tatarščina",
"tel": "Telugu",
"ter": "Tereno",
"tet": "Tetum",
"tha": "tajščina",
"bod": "tibetanščina",
"tig": "Tigre",
"tir": "Tigrinya",
"tem": "Timne",
"tiv": "Tiv",
"tli": "Tlingit",
"tpi": "tok pisin",
"tkl": "Tokelau",
"tog": "Tonga (Nyasa)",
"ton": "tonganščina",
"tsi": "tsimšijščina",
"tso": "Tsonga",
"tsn": "Tswana",
"tum": "Tumbuka",
"tur": "turščina",
"tuk": "turkmenščina",
"tvl": "tuvalujščina",
"tyv": "Tuvinian",
"twi": "Twi",
"udm": "Udmurt",
"uga": "ugaritščina",
"uig": "ujgurščina",
"ukr": "ukrajinščina",
"umb": "Umbundu",
"mis": "Uncoded languages",
"und": "nedoločen",
"urd": "urdujščina",
"uzb": "uzbeščina",
"vai": "vajščina",
"ven": "Venda",
"vie": "vietnamščina",
"vol": "Volapük",
"vot": "votjaščina",
"wln": "valonščina",
"war": "Waray (Philippines)",
"was": "Washo",
"cym": "valižanščina",
"wal": "Wolaytta",
"wol": "Wolof",
"xho": "koščina",
"sah": "jakutščina",
"yao": "jaojščina",
"yap": "Yapese",
"yid": "jidiš",
"yor": "jorubščina",
"zap": "Zapotec",
"zza": "Zaza",
"zen": "Zenaga",
"zha": "Zhuang",
"zul": "zulujščina",
"zun": "Zuni"
},
"sv": { "sv": {
"aar": "Afar", "aar": "Afar",
"abk": "Abchaziska", "abk": "Abchaziska",

View File

@@ -27,7 +27,7 @@ import datetime
import mimetypes import mimetypes
from uuid import uuid4 from uuid import uuid4
from flask import Blueprint, request, url_for from flask import Blueprint, request, url_for, g
from flask_babel import format_date from flask_babel import format_date
from .cw_login import current_user from .cw_login import current_user
@@ -43,6 +43,8 @@ def url_for_other_page(page):
args = request.view_args.copy() args = request.view_args.copy()
args['page'] = page args['page'] = page
for get, val in request.args.items(): for get, val in request.args.items():
if get == "page":
continue
args[get] = val args[get] = val
return url_for(request.endpoint, **args) return url_for(request.endpoint, **args)
@@ -111,21 +113,12 @@ def yesno(value, yes, no):
@jinjia.app_template_filter('formatfloat') @jinjia.app_template_filter('formatfloat')
def formatfloat(value, decimals=1): def formatfloat(value, decimals=1):
value = 0 if not value else value if not value or (isinstance(value, str) and not value.isnumeric()):
return ('{0:.' + str(decimals) + 'f}').format(value).rstrip('0').rstrip('.') return value
formated_value = ('{0:.' + str(decimals) + 'f}').format(value)
if formated_value.endswith('.' + "0" * decimals):
@jinjia.app_template_filter('formatseriesindex') formated_value = formated_value.rstrip('0').rstrip('.')
def formatseriesindex_filter(series_index): return formated_value
if series_index:
try:
if int(series_index) - series_index == 0:
return int(series_index)
else:
return series_index
except (ValueError, TypeError):
return series_index
return 0
@jinjia.app_template_filter('escapedlink') @jinjia.app_template_filter('escapedlink')
@@ -179,3 +172,12 @@ def get_cover_srcset(series):
url = url_for('web.get_series_cover', series_id=series.id, resolution=shortname, c=cache_timestamp()) url = url_for('web.get_series_cover', series_id=series.id, resolution=shortname, c=cache_timestamp())
srcset.append(f'{url} {resolution}x') srcset.append(f'{url} {resolution}x')
return ', '.join(srcset) return ', '.join(srcset)
@jinjia.app_template_filter('music')
def contains_music(book_formats):
result = False
for format in book_formats:
if format.format.lower() in g.constants.EXTENSIONS_AUDIO:
result = True
return result

View File

@@ -18,7 +18,7 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
import base64 import base64
import datetime from datetime import datetime, timezone
import os import os
import uuid import uuid
import zipfile import zipfile
@@ -47,7 +47,7 @@ import requests
from . import config, logger, kobo_auth, db, calibre_db, helper, shelf as shelf_lib, ub, csrf, kobo_sync_status from . import config, logger, kobo_auth, db, calibre_db, helper, shelf as shelf_lib, ub, csrf, kobo_sync_status
from . import isoLanguages from . import isoLanguages
from .epub import get_epub_layout from .epub import get_epub_layout
from .constants import COVER_THUMBNAIL_SMALL from .constants import COVER_THUMBNAIL_SMALL, COVER_THUMBNAIL_MEDIUM, COVER_THUMBNAIL_LARGE
from .helper import get_download_link from .helper import get_download_link
from .services import SyncToken as SyncToken from .services import SyncToken as SyncToken
from .web import download_required from .web import download_required
@@ -106,24 +106,29 @@ def make_request_to_kobo_store(sync_token=None):
return store_response return store_response
def redirect_or_proxy_request(): def redirect_or_proxy_request(auth=False):
if config.config_kobo_proxy: if config.config_kobo_proxy:
if request.method == "GET": try:
return redirect(get_store_url_for_current_request(), 307) if request.method == "GET":
else: alfa = redirect(get_store_url_for_current_request(), 307)
# The Kobo device turns other request types into GET requests on redirects, return alfa
# so we instead proxy to the Kobo store ourselves. else:
store_response = make_request_to_kobo_store() # The Kobo device turns other request types into GET requests on redirects,
# so we instead proxy to the Kobo store ourselves.
store_response = make_request_to_kobo_store()
response_headers = store_response.headers response_headers = store_response.headers
for header_key in CONNECTION_SPECIFIC_HEADERS: for header_key in CONNECTION_SPECIFIC_HEADERS:
response_headers.pop(header_key, default=None) response_headers.pop(header_key, default=None)
return make_response( return make_response(
store_response.content, store_response.status_code, response_headers.items() store_response.content, store_response.status_code, response_headers.items()
) )
else: except Exception as e:
return make_response(jsonify({})) log.error("Failed to receive or parse response from Kobo's endpoint: {}".format(e))
if auth:
return make_calibre_web_auth_response()
return make_response(jsonify({}))
def convert_to_kobo_timestamp_string(timestamp): def convert_to_kobo_timestamp_string(timestamp):
@@ -131,7 +136,7 @@ def convert_to_kobo_timestamp_string(timestamp):
return timestamp.strftime("%Y-%m-%dT%H:%M:%SZ") return timestamp.strftime("%Y-%m-%dT%H:%M:%SZ")
except AttributeError as exc: except AttributeError as exc:
log.debug("Timestamp not valid: {}".format(exc)) log.debug("Timestamp not valid: {}".format(exc))
return datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ") return datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
@kobo.route("/v1/library/sync") @kobo.route("/v1/library/sync")
@@ -150,15 +155,15 @@ def HandleSyncRequest():
# if no books synced don't respect sync_token # if no books synced don't respect sync_token
if not ub.session.query(ub.KoboSyncedBooks).filter(ub.KoboSyncedBooks.user_id == current_user.id).count(): if not ub.session.query(ub.KoboSyncedBooks).filter(ub.KoboSyncedBooks.user_id == current_user.id).count():
sync_token.books_last_modified = datetime.datetime.min sync_token.books_last_modified = datetime.min
sync_token.books_last_created = datetime.datetime.min sync_token.books_last_created = datetime.min
sync_token.reading_state_last_modified = datetime.datetime.min sync_token.reading_state_last_modified = datetime.min
new_books_last_modified = sync_token.books_last_modified # needed for sync selected shelfs only new_books_last_modified = sync_token.books_last_modified # needed for sync selected shelfs only
new_books_last_created = sync_token.books_last_created # needed to distinguish between new and changed entitlement new_books_last_created = sync_token.books_last_created # needed to distinguish between new and changed entitlement
new_reading_state_last_modified = sync_token.reading_state_last_modified new_reading_state_last_modified = sync_token.reading_state_last_modified
new_archived_last_modified = datetime.datetime.min new_archived_last_modified = datetime.min
sync_results = [] sync_results = []
# We reload the book database so that the user gets a fresh view of the library # We reload the book database so that the user gets a fresh view of the library
@@ -323,7 +328,7 @@ def generate_sync_response(sync_token, sync_results, set_cont=False):
         sync_token.to_headers(extra_headers)

     # log.debug("Kobo Sync Content: {}".format(sync_results))
-    # jsonify decodes the unicode string different to what kobo expects
+    # jsonify decodes the Unicode string different to what kobo expects
     response = make_response(json.dumps(sync_results), extra_headers)
     response.headers["Content-Type"] = "application/json; charset=utf-8"
     return response
@@ -375,7 +380,7 @@ def create_book_entitlement(book, archived):
     book_uuid = str(book.uuid)
     return {
         "Accessibility": "Full",
-        "ActivePeriod": {"From": convert_to_kobo_timestamp_string(datetime.datetime.utcnow())},
+        "ActivePeriod": {"From": convert_to_kobo_timestamp_string(datetime.now(timezone.utc))},
         "Created": convert_to_kobo_timestamp_string(book.timestamp),
         "CrossRevisionId": book_uuid,
         "Id": book_uuid,
@@ -423,7 +428,7 @@ def get_series(book):

 def get_seriesindex(book):
-    return book.series_index or 1
+    return book.series_index if isinstance(book.series_index, float) else 1


 def get_language(book):
@@ -486,14 +491,16 @@ def get_metadata(book):
     if get_series(book):
         name = get_series(book)
-        metadata["Series"] = {
-            "Name": get_series(book),
-            "Number": get_seriesindex(book),  # ToDo Check int() ?
-            "NumberFloat": float(get_seriesindex(book)),
-            # Get a deterministic id based on the series name.
-            "Id": str(uuid.uuid3(uuid.NAMESPACE_DNS, name)),
-        }
+        try:
+            metadata["Series"] = {
+                "Name": get_series(book),
+                "Number": get_seriesindex(book),  # ToDo Check int() ?
+                "NumberFloat": float(get_seriesindex(book)),
+                # Get a deterministic id based on the series name.
+                "Id": str(uuid.uuid3(uuid.NAMESPACE_DNS, name)),
+            }
+        except Exception as e:
+            print(e)

     return metadata
@@ -725,7 +732,7 @@ def sync_shelves(sync_token, sync_results, only_kobo_shelves=False):
     ub.session_commit()


-# Creates a Kobo "Tag" object from a ub.Shelf object
+# Creates a Kobo "Tag" object from an ub.Shelf object
 def create_kobo_tag(shelf):
     tag = {
         "Created": convert_to_kobo_timestamp_string(shelf.created),
@@ -795,7 +802,7 @@ def HandleStateRequest(book_uuid):
                 if new_book_read_status == ub.ReadBook.STATUS_IN_PROGRESS \
                         and new_book_read_status != book_read.read_status:
                     book_read.times_started_reading += 1
-                    book_read.last_time_started_reading = datetime.datetime.utcnow()
+                    book_read.last_time_started_reading = datetime.now(timezone.utc)
                 book_read.read_status = new_book_read_status
                 update_results_response["StatusInfoResult"] = {"Result": "Success"}
         except (KeyError, TypeError, ValueError, StatementError):
@@ -903,7 +910,12 @@ def get_current_bookmark_response(current_bookmark):
 @requires_kobo_auth
 def HandleCoverImageRequest(book_uuid, width, height, Quality, isGreyscale):
     try:
-        resolution = None if int(height) > 1000 else COVER_THUMBNAIL_SMALL
+        if int(height) > 1000:
+            resolution = COVER_THUMBNAIL_LARGE
+        elif int(height) > 500:
+            resolution = COVER_THUMBNAIL_MEDIUM
+        else:
+            resolution = COVER_THUMBNAIL_SMALL
     except ValueError:
         log.error("Requested height %s of book %s is invalid" % (book_uuid, height))
         resolution = COVER_THUMBNAIL_SMALL
@@ -948,7 +960,7 @@ def HandleBookDeletionRequest(book_uuid):
 # TODO: Implement the following routes
 @csrf.exempt
-@kobo.route("/v1/library/<dummy>", methods=["DELETE", "GET"])
+@kobo.route("/v1/library/<dummy>", methods=["DELETE", "GET", "POST"])
 def HandleUnimplementedRequest(dummy=None):
     log.debug("Unimplemented Library Request received: %s (request is forwarded to kobo if configured)",
               request.base_url)
@@ -1035,7 +1047,7 @@ def HandleAuthRequest():
     log.debug('Kobo Auth request')
     if config.config_kobo_proxy:
         try:
-            return redirect_or_proxy_request()
+            return redirect_or_proxy_request(auth=True)
         except Exception:
             log.error("Failed to receive or parse response from Kobo's auth endpoint. Falling back to un-proxied mode.")
     return make_calibre_web_auth_response()
@@ -1119,25 +1131,37 @@ def download_book(book_id, book_format):
def NATIVE_KOBO_RESOURCES(): def NATIVE_KOBO_RESOURCES():
return { return {
"account_page": "https://secure.kobobooks.com/profile", "account_page": "https://www.kobo.com/account/settings",
"account_page_rakuten": "https://my.rakuten.co.jp/", "account_page_rakuten": "https://my.rakuten.co.jp/",
"add_device": "https://storeapi.kobo.com/v1/user/add-device",
"add_entitlement": "https://storeapi.kobo.com/v1/library/{RevisionIds}", "add_entitlement": "https://storeapi.kobo.com/v1/library/{RevisionIds}",
"affiliaterequest": "https://storeapi.kobo.com/v1/affiliate", "affiliaterequest": "https://storeapi.kobo.com/v1/affiliate",
"assets": "https://storeapi.kobo.com/v1/assets",
"audiobook": "https://storeapi.kobo.com/v1/products/audiobooks/{ProductId}",
"audiobook_detail_page": "https://www.kobo.com/{region}/{language}/audiobook/{slug}",
"audiobook_landing_page": "https://www.kobo.com/{region}/{language}/audiobooks",
"audiobook_preview": "https://storeapi.kobo.com/v1/products/audiobooks/{Id}/preview",
"audiobook_purchase_withcredit": "https://storeapi.kobo.com/v1/store/audiobook/{Id}",
"audiobook_subscription_orange_deal_inclusion_url": "https://authorize.kobo.com/inclusion", "audiobook_subscription_orange_deal_inclusion_url": "https://authorize.kobo.com/inclusion",
"authorproduct_recommendations": "https://storeapi.kobo.com/v1/products/books/authors/recommendations", "authorproduct_recommendations": "https://storeapi.kobo.com/v1/products/books/authors/recommendations",
"autocomplete": "https://storeapi.kobo.com/v1/products/autocomplete", "autocomplete": "https://storeapi.kobo.com/v1/products/autocomplete",
"blackstone_header": {"key": "x-amz-request-payer", "value": "requester"}, "blackstone_header": {
"key": "x-amz-request-payer",
"value": "requester"
},
"book": "https://storeapi.kobo.com/v1/products/books/{ProductId}", "book": "https://storeapi.kobo.com/v1/products/books/{ProductId}",
"book_detail_page": "https://store.kobobooks.com/{culture}/ebook/{slug}", "book_detail_page": "https://www.kobo.com/{region}/{language}/ebook/{slug}",
"book_detail_page_rakuten": "https://books.rakuten.co.jp/rk/{crossrevisionid}", "book_detail_page_rakuten": "http://books.rakuten.co.jp/rk/{crossrevisionid}",
"book_landing_page": "https://store.kobobooks.com/ebooks", "book_landing_page": "https://www.kobo.com/ebooks",
"book_subscription": "https://storeapi.kobo.com/v1/products/books/subscriptions", "book_subscription": "https://storeapi.kobo.com/v1/products/books/subscriptions",
"browse_history": "https://storeapi.kobo.com/v1/user/browsehistory",
"categories": "https://storeapi.kobo.com/v1/categories", "categories": "https://storeapi.kobo.com/v1/categories",
"categories_page": "https://store.kobobooks.com/ebooks/categories", "categories_page": "https://www.kobo.com/ebooks/categories",
"category": "https://storeapi.kobo.com/v1/categories/{CategoryId}", "category": "https://storeapi.kobo.com/v1/categories/{CategoryId}",
"category_featured_lists": "https://storeapi.kobo.com/v1/categories/{CategoryId}/featured", "category_featured_lists": "https://storeapi.kobo.com/v1/categories/{CategoryId}/featured",
"category_products": "https://storeapi.kobo.com/v1/categories/{CategoryId}/products", "category_products": "https://storeapi.kobo.com/v1/categories/{CategoryId}/products",
"checkout_borrowed_book": "https://storeapi.kobo.com/v1/library/borrow", "checkout_borrowed_book": "https://storeapi.kobo.com/v1/library/borrow",
"client_authd_referral": "https://authorize.kobo.com/api/AuthenticatedReferral/client/v1/getLink",
"configuration_data": "https://storeapi.kobo.com/v1/configuration", "configuration_data": "https://storeapi.kobo.com/v1/configuration",
"content_access_book": "https://storeapi.kobo.com/v1/products/books/{ProductId}/access", "content_access_book": "https://storeapi.kobo.com/v1/products/books/{ProductId}/access",
"customer_care_live_chat": "https://v2.zopim.com/widget/livechat.html?key=Y6gwUmnu4OATxN3Tli4Av9bYN319BTdO", "customer_care_live_chat": "https://v2.zopim.com/widget/livechat.html?key=Y6gwUmnu4OATxN3Tli4Av9bYN319BTdO",
@@ -1148,92 +1172,109 @@ def NATIVE_KOBO_RESOURCES():
"delete_tag_items": "https://storeapi.kobo.com/v1/library/tags/{TagId}/items/delete", "delete_tag_items": "https://storeapi.kobo.com/v1/library/tags/{TagId}/items/delete",
"device_auth": "https://storeapi.kobo.com/v1/auth/device", "device_auth": "https://storeapi.kobo.com/v1/auth/device",
"device_refresh": "https://storeapi.kobo.com/v1/auth/refresh", "device_refresh": "https://storeapi.kobo.com/v1/auth/refresh",
"dictionary_host": "https://kbdownload1-a.akamaihd.net", "dictionary_host": "https://ereaderfiles.kobo.com",
"discovery_host": "https://discovery.kobobooks.com", "discovery_host": "https://discovery.kobobooks.com",
"ereaderdevices": "https://storeapi.kobo.com/v2/products/EReaderDeviceFeeds",
"eula_page": "https://www.kobo.com/termsofuse?style=onestore", "eula_page": "https://www.kobo.com/termsofuse?style=onestore",
"exchange_auth": "https://storeapi.kobo.com/v1/auth/exchange", "exchange_auth": "https://storeapi.kobo.com/v1/auth/exchange",
"external_book": "https://storeapi.kobo.com/v1/products/books/external/{Ids}", "external_book": "https://storeapi.kobo.com/v1/products/books/external/{Ids}",
"facebook_sso_page": "facebook_sso_page": "https://authorize.kobo.com/signin/provider/Facebook/login?returnUrl=http://kobo.com/",
"https://authorize.kobo.com/signin/provider/Facebook/login?returnUrl=http://store.kobobooks.com/",
"featured_list": "https://storeapi.kobo.com/v1/products/featured/{FeaturedListId}", "featured_list": "https://storeapi.kobo.com/v1/products/featured/{FeaturedListId}",
"featured_lists": "https://storeapi.kobo.com/v1/products/featured", "featured_lists": "https://storeapi.kobo.com/v1/products/featured",
"free_books_page": { "free_books_page": {
"EN": "https://www.kobo.com/{region}/{language}/p/free-ebooks", "EN": "https://www.kobo.com/{region}/{language}/p/free-ebooks",
"FR": "https://www.kobo.com/{region}/{language}/p/livres-gratuits", "FR": "https://www.kobo.com/{region}/{language}/p/livres-gratuits",
"IT": "https://www.kobo.com/{region}/{language}/p/libri-gratuiti", "IT": "https://www.kobo.com/{region}/{language}/p/libri-gratuiti",
"NL": "https://www.kobo.com/{region}/{language}/" "NL": "https://www.kobo.com/{region}/{language}/List/bekijk-het-overzicht-van-gratis-ebooks/QpkkVWnUw8sxmgjSlCbJRg",
"List/bekijk-het-overzicht-van-gratis-ebooks/QpkkVWnUw8sxmgjSlCbJRg", "PT": "https://www.kobo.com/{region}/{language}/p/livros-gratis"
"PT": "https://www.kobo.com/{region}/{language}/p/livros-gratis",
}, },
"fte_feedback": "https://storeapi.kobo.com/v1/products/ftefeedback", "fte_feedback": "https://storeapi.kobo.com/v1/products/ftefeedback",
"funnel_metrics": "https://storeapi.kobo.com/v1/funnelmetrics",
"get_download_keys": "https://storeapi.kobo.com/v1/library/downloadkeys",
"get_download_link": "https://storeapi.kobo.com/v1/library/downloadlink",
"get_tests_request": "https://storeapi.kobo.com/v1/analytics/gettests", "get_tests_request": "https://storeapi.kobo.com/v1/analytics/gettests",
"giftcard_epd_redeem_url": "https://www.kobo.com/{storefront}/{language}/redeem-ereader", "giftcard_epd_redeem_url": "https://www.kobo.com/{storefront}/{language}/redeem-ereader",
"giftcard_redeem_url": "https://www.kobo.com/{storefront}/{language}/redeem", "giftcard_redeem_url": "https://www.kobo.com/{storefront}/{language}/redeem",
"help_page": "https://www.kobo.com/help", "gpb_flow_enabled": "False",
"kobo_audiobooks_enabled": "False", "help_page": "http://www.kobo.com/help",
"image_host": "//cdn.kobo.com/book-images/",
"image_url_quality_template": "https://cdn.kobo.com/book-images/{ImageId}/{Width}/{Height}/{Quality}/{IsGreyscale}/image.jpg",
"image_url_template": "https://cdn.kobo.com/book-images/{ImageId}/{Width}/{Height}/false/image.jpg",
"kobo_audiobooks_credit_redemption": "False",
"kobo_audiobooks_enabled": "True",
"kobo_audiobooks_orange_deal_enabled": "False", "kobo_audiobooks_orange_deal_enabled": "False",
"kobo_audiobooks_subscriptions_enabled": "False", "kobo_audiobooks_subscriptions_enabled": "False",
"kobo_nativeborrow_enabled": "True", "kobo_display_price": "True",
"kobo_dropbox_link_account_enabled": "False",
"kobo_google_tax": "False",
"kobo_googledrive_link_account_enabled": "False",
"kobo_nativeborrow_enabled": "False",
"kobo_onedrive_link_account_enabled": "False",
"kobo_onestorelibrary_enabled": "False", "kobo_onestorelibrary_enabled": "False",
"kobo_privacyCentre_url": "https://www.kobo.com/privacy",
"kobo_redeem_enabled": "True", "kobo_redeem_enabled": "True",
"kobo_shelfie_enabled": "False", "kobo_shelfie_enabled": "False",
"kobo_subscriptions_enabled": "False", "kobo_subscriptions_enabled": "True",
"kobo_superpoints_enabled": "False", "kobo_superpoints_enabled": "True",
"kobo_wishlist_enabled": "True", "kobo_wishlist_enabled": "True",
"library_book": "https://storeapi.kobo.com/v1/user/library/books/{LibraryItemId}", "library_book": "https://storeapi.kobo.com/v1/user/library/books/{LibraryItemId}",
"library_items": "https://storeapi.kobo.com/v1/user/library", "library_items": "https://storeapi.kobo.com/v1/user/library",
"library_metadata": "https://storeapi.kobo.com/v1/library/{Ids}/metadata", "library_metadata": "https://storeapi.kobo.com/v1/library/{Ids}/metadata",
"library_prices": "https://storeapi.kobo.com/v1/user/library/previews/prices", "library_prices": "https://storeapi.kobo.com/v1/user/library/previews/prices",
"library_stack": "https://storeapi.kobo.com/v1/user/library/stacks/{LibraryItemId}", "library_search": "https://storeapi.kobo.com/v1/library/search",
"library_sync": "https://storeapi.kobo.com/v1/library/sync", "library_sync": "https://storeapi.kobo.com/v1/library/sync",
"love_dashboard_page": "https://store.kobobooks.com/{culture}/kobosuperpoints", "love_dashboard_page": "https://www.kobo.com/{region}/{language}/kobosuperpoints",
"love_points_redemption_page": "love_points_redemption_page": "https://www.kobo.com/{region}/{language}/KoboSuperPointsRedemption?productId={ProductId}",
"https://store.kobobooks.com/{culture}/KoboSuperPointsRedemption?productId={ProductId}", "magazine_landing_page": "https://www.kobo.com/emagazines",
"magazine_landing_page": "https://store.kobobooks.com/emagazines", "more_sign_in_options": "https://authorize.kobo.com/signin?returnUrl=http://kobo.com/#allProviders",
"notebooks": "https://storeapi.kobo.com/api/internal/notebooks",
"notifications_registration_issue": "https://storeapi.kobo.com/v1/notifications/registration", "notifications_registration_issue": "https://storeapi.kobo.com/v1/notifications/registration",
"oauth_host": "https://oauth.kobo.com", "oauth_host": "https://oauth.kobo.com",
"overdrive_account": "https://auth.overdrive.com/account", "password_retrieval_page": "https://www.kobo.com/passwordretrieval.html",
"overdrive_library": "https://{libraryKey}.auth.overdrive.com/library", "personalizedrecommendations": "https://storeapi.kobo.com/v2/users/personalizedrecommendations",
"overdrive_library_finder_host": "https://libraryfinder.api.overdrive.com", "pocket_link_account_start": "https://authorize.kobo.com/{region}/{language}/linkpocket",
"overdrive_thunder_host": "https://thunder.api.overdrive.com",
"password_retrieval_page": "https://www.kobobooks.com/passwordretrieval.html",
"post_analytics_event": "https://storeapi.kobo.com/v1/analytics/event", "post_analytics_event": "https://storeapi.kobo.com/v1/analytics/event",
"ppx_purchasing_url": "https://purchasing.kobo.com",
"privacy_page": "https://www.kobo.com/privacypolicy?style=onestore", "privacy_page": "https://www.kobo.com/privacypolicy?style=onestore",
"product_nextread": "https://storeapi.kobo.com/v1/products/{ProductIds}/nextread", "product_nextread": "https://storeapi.kobo.com/v1/products/{ProductIds}/nextread",
"product_prices": "https://storeapi.kobo.com/v1/products/{ProductIds}/prices", "product_prices": "https://storeapi.kobo.com/v1/products/{ProductIds}/prices",
"product_recommendations": "https://storeapi.kobo.com/v1/products/{ProductId}/recommendations", "product_recommendations": "https://storeapi.kobo.com/v1/products/{ProductId}/recommendations",
"product_reviews": "https://storeapi.kobo.com/v1/products/{ProductIds}/reviews", "product_reviews": "https://storeapi.kobo.com/v1/products/{ProductIds}/reviews",
"products": "https://storeapi.kobo.com/v1/products", "products": "https://storeapi.kobo.com/v1/products",
"provider_external_sign_in_page": "productsv2": "https://storeapi.kobo.com/v2/products",
"https://authorize.kobo.com/ExternalSignIn/{providerName}?returnUrl=http://store.kobobooks.com/", "provider_external_sign_in_page": "https://authorize.kobo.com/ExternalSignIn/{providerName}?returnUrl=http://kobo.com/",
"purchase_buy": "https://www.kobo.com/checkout/createpurchase/",
"purchase_buy_templated": "https://www.kobo.com/{culture}/checkout/createpurchase/{ProductId}",
"quickbuy_checkout": "https://storeapi.kobo.com/v1/store/quickbuy/{PurchaseId}/checkout", "quickbuy_checkout": "https://storeapi.kobo.com/v1/store/quickbuy/{PurchaseId}/checkout",
"quickbuy_create": "https://storeapi.kobo.com/v1/store/quickbuy/purchase", "quickbuy_create": "https://storeapi.kobo.com/v1/store/quickbuy/purchase",
"rakuten_token_exchange": "https://storeapi.kobo.com/v1/auth/rakuten_token_exchange",
"rating": "https://storeapi.kobo.com/v1/products/{ProductId}/rating/{Rating}", "rating": "https://storeapi.kobo.com/v1/products/{ProductId}/rating/{Rating}",
"reading_services_host": "https://readingservices.kobo.com",
"reading_state": "https://storeapi.kobo.com/v1/library/{Ids}/state", "reading_state": "https://storeapi.kobo.com/v1/library/{Ids}/state",
"redeem_interstitial_page": "https://store.kobobooks.com", "redeem_interstitial_page": "https://www.kobo.com",
"registration_page": "https://authorize.kobo.com/signup?returnUrl=http://store.kobobooks.com/", "registration_page": "https://authorize.kobo.com/signup?returnUrl=http://kobo.com/",
"related_items": "https://storeapi.kobo.com/v1/products/{Id}/related", "related_items": "https://storeapi.kobo.com/v1/products/{Id}/related",
"remaining_book_series": "https://storeapi.kobo.com/v1/products/books/series/{SeriesId}", "remaining_book_series": "https://storeapi.kobo.com/v1/products/books/series/{SeriesId}",
"rename_tag": "https://storeapi.kobo.com/v1/library/tags/{TagId}", "rename_tag": "https://storeapi.kobo.com/v1/library/tags/{TagId}",
"review": "https://storeapi.kobo.com/v1/products/reviews/{ReviewId}", "review": "https://storeapi.kobo.com/v1/products/reviews/{ReviewId}",
"review_sentiment": "https://storeapi.kobo.com/v1/products/reviews/{ReviewId}/sentiment/{Sentiment}", "review_sentiment": "https://storeapi.kobo.com/v1/products/reviews/{ReviewId}/sentiment/{Sentiment}",
"shelfie_recommendations": "https://storeapi.kobo.com/v1/user/recommendations/shelfie", "shelfie_recommendations": "https://storeapi.kobo.com/v1/user/recommendations/shelfie",
"sign_in_page": "https://authorize.kobo.com/signin?returnUrl=http://store.kobobooks.com/", "sign_in_page": "https://authorize.kobo.com/signin?returnUrl=http://kobo.com/",
"social_authorization_host": "https://social.kobobooks.com:8443", "social_authorization_host": "https://social.kobobooks.com:8443",
"social_host": "https://social.kobobooks.com", "social_host": "https://social.kobobooks.com",
"stacks_host_productId": "https://store.kobobooks.com/collections/byproductid/",
"store_home": "www.kobo.com/{region}/{language}", "store_home": "www.kobo.com/{region}/{language}",
"store_host": "store.kobobooks.com", "store_host": "www.kobo.com",
"store_newreleases": "https://store.kobobooks.com/{culture}/List/new-releases/961XUjtsU0qxkFItWOutGA", "store_newreleases": "https://www.kobo.com/{region}/{language}/List/new-releases/961XUjtsU0qxkFItWOutGA",
"store_search": "https://store.kobobooks.com/{culture}/Search?Query={query}", "store_search": "https://www.kobo.com/{region}/{language}/Search?Query={query}",
"store_top50": "https://store.kobobooks.com/{culture}/ebooks/Top", "store_top50": "https://www.kobo.com/{region}/{language}/ebooks/Top",
"subs_landing_page": "https://www.kobo.com/{region}/{language}/plus",
"subs_management_page": "https://www.kobo.com/{region}/{language}/account/subscriptions",
"subs_plans_page": "https://www.kobo.com/{region}/{language}/plus/plans",
"subs_purchase_buy_templated": "https://www.kobo.com/{region}/{language}/Checkoutoption/{ProductId}/{TierId}",
"tag_items": "https://storeapi.kobo.com/v1/library/tags/{TagId}/Items", "tag_items": "https://storeapi.kobo.com/v1/library/tags/{TagId}/Items",
"tags": "https://storeapi.kobo.com/v1/library/tags", "tags": "https://storeapi.kobo.com/v1/library/tags",
"taste_profile": "https://storeapi.kobo.com/v1/products/tasteprofile", "taste_profile": "https://storeapi.kobo.com/v1/products/tasteprofile",
"terms_of_sale_page": "https://authorize.kobo.com/{region}/{language}/terms/termsofsale",
"update_accessibility_to_preview": "https://storeapi.kobo.com/v1/library/{EntitlementIds}/preview", "update_accessibility_to_preview": "https://storeapi.kobo.com/v1/library/{EntitlementIds}/preview",
"use_one_store": "False", "use_one_store": "True",
"user_loyalty_benefits": "https://storeapi.kobo.com/v1/user/loyalty/benefits", "user_loyalty_benefits": "https://storeapi.kobo.com/v1/user/loyalty/benefits",
"user_platform": "https://storeapi.kobo.com/v1/user/platform", "user_platform": "https://storeapi.kobo.com/v1/user/platform",
"user_profile": "https://storeapi.kobo.com/v1/user/profile", "user_profile": "https://storeapi.kobo.com/v1/user/profile",
@@ -1241,6 +1282,6 @@ def NATIVE_KOBO_RESOURCES():
"user_recommendations": "https://storeapi.kobo.com/v1/user/recommendations", "user_recommendations": "https://storeapi.kobo.com/v1/user/recommendations",
"user_reviews": "https://storeapi.kobo.com/v1/user/reviews", "user_reviews": "https://storeapi.kobo.com/v1/user/reviews",
"user_wishlist": "https://storeapi.kobo.com/v1/user/wishlist", "user_wishlist": "https://storeapi.kobo.com/v1/user/wishlist",
"userguide_host": "https://kbdownload1-a.akamaihd.net", "userguide_host": "https://ereaderfiles.kobo.com",
"wishlist_page": "https://store.kobobooks.com/{region}/{language}/account/wishlist", "wishlist_page": "https://www.kobo.com/{region}/{language}/account/wishlist"
} }


@@ -22,7 +22,7 @@
 This module also includes research notes into the auth protocol used by Kobo devices.

 Log-in:
-When first booting a Kobo device the user must sign into a Kobo (or affiliate) account.
+When first booting a Kobo device the user must log in to a Kobo (or affiliate) account.
 Upon successful sign-in, the user is redirected to
     https://auth.kobobooks.com/CrossDomainSignIn?id=<some id>
 which serves the following response:
@@ -41,7 +41,7 @@ issue for a few years now https://www.mobileread.com/forums/showpost.php?p=34768
 will still grant access given the userkey.)

 Official Kobo Store Api authorization:
-* For most of the endpoints we care about (sync, metadata, tags, etc), the userKey is
+* For most of the endpoints we care about (sync, metadata, tags, etc.), the userKey is
 passed in the x-kobo-userkey header, and is sufficient to authorize the API call.
 * Some endpoints (e.g: AnnotationService) instead make use of Bearer tokens pass through
 an authorization header. To get a BearerToken, the device makes a POST request to the
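To make the first scheme concrete: a request against the sync endpoint only needs that header. A rough sketch using the requests library (the URL shape and token values are illustrative placeholders, not taken from this patch):

    import requests

    # Illustrative values only: calibre-web mounts the Kobo API under a per-user auth token.
    base_url = "http://localhost:8083/kobo/<auth_token>"
    user_key = "00000000-0000-0000-0000-000000000000"

    # Most Kobo store endpoints are authorized by the x-kobo-userkey header alone.
    resp = requests.get(
        base_url + "/v1/library/sync",
        headers={"x-kobo-userkey": user_key},
    )
    print(resp.status_code)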


@@ -19,7 +19,7 @@
 from .cw_login import current_user

 from . import ub
-import datetime
+from datetime import datetime, timezone

 from sqlalchemy.sql.expression import or_, and_, true
 # from sqlalchemy import exc
@@ -58,7 +58,7 @@ def change_archived_books(book_id, state=None, message=None):
         archived_book = ub.ArchivedBook(user_id=current_user.id, book_id=book_id)

     archived_book.is_archived = state if state else not archived_book.is_archived
-    archived_book.last_modified = datetime.datetime.utcnow()  # toDo. Check utc timestamp
+    archived_book.last_modified = datetime.now(timezone.utc)  # toDo. Check utc timestamp

     ub.session.merge(archived_book)
     ub.session_commit(message)


@@ -29,7 +29,7 @@ from .constants import CONFIG_DIR as _CONFIG_DIR
 ACCESS_FORMATTER_GEVENT = Formatter("%(message)s")
 ACCESS_FORMATTER_TORNADO = Formatter("[%(asctime)s] %(message)s")
-FORMATTER = Formatter("[%(asctime)s] %(levelname)5s {%(name)s:%(lineno)d} %(message)s")
+FORMATTER = Formatter("[%(asctime)s] %(levelname)5s {%(filename)s:%(lineno)d} %(message)s")
 DEFAULT_LOG_LEVEL = logging.INFO
 DEFAULT_LOG_FILE = os.path.join(_CONFIG_DIR, "calibre-web.log")
 DEFAULT_ACCESS_LOG = os.path.join(_CONFIG_DIR, "access.log")
@@ -42,18 +42,12 @@ logging.addLevelName(logging.CRITICAL, "CRIT")
 class _Logger(logging.Logger):

-    def error_or_exception(self, message, stacklevel=2, *args, **kwargs):
+    def error_or_exception(self, message, stacklevel=1, *args, **kwargs):
         is_debug = self.getEffectiveLevel() <= logging.DEBUG
-        if sys.version_info > (3, 7):
-            if is_debug:
-                self.exception(message, stacklevel=stacklevel, *args, **kwargs)
-            else:
-                self.error(message, stacklevel=stacklevel, *args, **kwargs)
+        if not is_debug:
+            self.exception(message, stacklevel=stacklevel, *args, **kwargs)
         else:
-            if is_debug:
-                self.exception(message, stack_info=True, *args, **kwargs)
-            else:
-                self.error(message, *args, **kwargs)
+            self.error(message, stacklevel=stacklevel, *args, **kwargs)

     def debug_no_auth(self, message, *args, **kwargs):
         message = message.strip("\r\n")


@@ -31,6 +31,7 @@ def main():
     app = create_app()

     from .web import web
+    from .basic import basic
     from .opds import opds
     from .admin import admi
     from .gdrive import gdrive
@@ -64,6 +65,7 @@ def main():
     app.register_blueprint(search)
     app.register_blueprint(tasks)
     app.register_blueprint(web)
+    app.register_blueprint(basic)
     app.register_blueprint(opds)
     limiter.limit("3/minute", key_func=request_username)(opds)
     app.register_blueprint(jinjia)


@@ -38,14 +38,16 @@ class Amazon(Metadata):
__name__ = "Amazon" __name__ = "Amazon"
__id__ = "amazon" __id__ = "amazon"
headers = {'upgrade-insecure-requests': '1', headers = {'upgrade-insecure-requests': '1',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36', 'user-agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:130.0) Gecko/20100101 Firefox/130.0',
'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/png,image/svg+xml,*/*;q=0.8',
'sec-gpc': '1', 'Sec-Fetch-Site': 'same-origin',
'sec-fetch-site': 'none', 'Sec-Fetch-Mode': 'navigate',
'sec-fetch-mode': 'navigate', 'Sec-Fetch-User': '?1',
'sec-fetch-user': '?1', 'Sec-Fetch-Dest': 'document',
'sec-fetch-dest': 'document', 'Upgrade-Insecure-Requests': '1',
'accept-encoding': 'gzip, deflate, br', 'Alt-Used' : 'www.amazon.com',
'Priority' : 'u=0, i',
'accept-encoding': 'gzip, deflate, br, zstd',
'accept-language': 'en-US,en;q=0.9'} 'accept-language': 'en-US,en;q=0.9'}
session = requests.Session() session = requests.Session()
session.headers=headers session.headers=headers
@@ -53,7 +55,6 @@ class Amazon(Metadata):
def search( def search(
self, query: str, generic_cover: str = "", locale: str = "en" self, query: str, generic_cover: str = "", locale: str = "en"
) -> Optional[List[MetaRecord]]: ) -> Optional[List[MetaRecord]]:
#timer=time()
def inner(link, index) -> [dict, int]: def inner(link, index) -> [dict, int]:
with self.session as session: with self.session as session:
try: try:
@@ -61,11 +62,11 @@ class Amazon(Metadata):
r.raise_for_status() r.raise_for_status()
except Exception as ex: except Exception as ex:
log.warning(ex) log.warning(ex)
return None return []
long_soup = BS(r.text, "lxml") #~4sec :/ long_soup = BS(r.text, "lxml") #~4sec :/
soup2 = long_soup.find("div", attrs={"cel_widget_id": "dpx-books-ppd_csm_instrumentation_wrapper"}) soup2 = long_soup.find("div", attrs={"cel_widget_id": "dpx-ppd_csm_instrumentation_wrapper"})
if soup2 is None: if soup2 is None:
return None return []
try: try:
match = MetaRecord( match = MetaRecord(
title = "", title = "",
@@ -88,7 +89,7 @@ class Amazon(Metadata):
soup2.find("div", attrs={"data-feature-name": "bookDescription"}).stripped_strings)\ soup2.find("div", attrs={"data-feature-name": "bookDescription"}).stripped_strings)\
.replace("\xa0"," ")[:-9].strip().strip("\n") .replace("\xa0"," ")[:-9].strip().strip("\n")
except (AttributeError, TypeError): except (AttributeError, TypeError):
return None # if there is no description it is not a book and therefore should be ignored return [] # if there is no description it is not a book and therefore should be ignored
try: try:
match.title = soup2.find("span", attrs={"id": "productTitle"}).text match.title = soup2.find("span", attrs={"id": "productTitle"}).text
except (AttributeError, TypeError): except (AttributeError, TypeError):
@@ -107,13 +108,13 @@ class Amazon(Metadata):
except (AttributeError, ValueError): except (AttributeError, ValueError):
match.rating = 0 match.rating = 0
try: try:
match.cover = soup2.find("img", attrs={"class": "a-dynamic-image frontImage"})["src"] match.cover = soup2.find("img", attrs={"class": "a-dynamic-image"})["src"]
except (AttributeError, TypeError): except (AttributeError, TypeError):
match.cover = "" match.cover = ""
return match, index return match, index
except Exception as e: except Exception as e:
log.error_or_exception(e) log.error_or_exception(e)
return None return []
val = list() val = list()
if self.active: if self.active:
@@ -133,7 +134,7 @@ class Amazon(Metadata):
links_list = [next(filter(lambda i: "digital-text" in i["href"], x.findAll("a")))["href"] for x in links_list = [next(filter(lambda i: "digital-text" in i["href"], x.findAll("a")))["href"] for x in
soup.findAll("div", attrs={"data-component-type": "s-search-result"})] soup.findAll("div", attrs={"data-component-type": "s-search-result"})]
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor: with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
fut = {executor.submit(inner, link, index) for index, link in enumerate(links_list[:5])} fut = {executor.submit(inner, link, index) for index, link in enumerate(links_list[:3])}
val = list(map(lambda x : x.result() ,concurrent.futures.as_completed(fut))) val = list(map(lambda x : x.result(), concurrent.futures.as_completed(fut)))
result = list(filter(lambda x: x, val)) result = list(filter(lambda x: x, val))
return [x[0] for x in sorted(result, key=itemgetter(1))] #sort by amazons listing order for best relevance return [x[0] for x in sorted(result, key=itemgetter(1))] #sort by amazons listing order for best relevance


@@ -54,7 +54,7 @@ class Google(Metadata):
                 results.raise_for_status()
             except Exception as e:
                 log.warning(e)
-                return None
+                return []
             for result in results.json().get("items", []):
                 val.append(
                     self._parse_search_result(


@@ -32,7 +32,7 @@ class OAuthBackend(SQLAlchemyBackend):
     Stores and retrieves OAuth tokens using a relational database through
     the `SQLAlchemy`_ ORM.

-    .. _SQLAlchemy: https://www.sqlalchemy.org/
+    _SQLAlchemy: https://www.sqlalchemy.org/
     """
     def __init__(self, model, session, provider_id,
                  user=None, user_id=None, user_required=None, anon_user=None,
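By way of illustration, such a backend is constructed with a token model plus a session and is then keyed to a provider and the current user. The argument values below are hypothetical; only the parameter names come from the signature above:

    token_backend = OAuthBackend(
        model=ub.OAuth,        # SQLAlchemy model with provider, user id and token columns (illustrative name)
        session=ub.session,    # active SQLAlchemy session used to read and write tokens
        provider_id="1",       # identifies which configured OAuth provider the rows belong to
        user=current_user,     # tokens are stored per logged-in user
    )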


@@ -21,10 +21,10 @@
 # along with this program. If not, see <http://www.gnu.org/licenses/>.

 import datetime
-import json
+# import json
 from urllib.parse import unquote_plus

-from flask import Blueprint, request, render_template, make_response, abort, Response, g
+from flask import Blueprint, request, render_template, make_response, abort, g, jsonify
 from flask_babel import get_locale
 from flask_babel import gettext as _
@@ -424,11 +424,8 @@ def feed_shelf(book_id):
 @requires_basic_auth_if_no_ano
 def opds_download_link(book_id, book_format):
     if not auth.current_user().role_download():
-        return abort(403)
-    if "Kobo" in request.headers.get('User-Agent'):
-        client = "kobo"
-    else:
-        client = ""
+        return abort(401)
+    client = "kobo" if "Kobo" in request.headers.get('User-Agent') else ""
     return get_download_link(book_id, book_format.lower(), client)
@@ -454,7 +451,7 @@ def get_database_stats():
     stat['authors'] = calibre_db.session.query(db.Authors).count()
     stat['categories'] = calibre_db.session.query(db.Tags).count()
     stat['series'] = calibre_db.session.query(db.Series).count()
-    return Response(json.dumps(stat), mimetype="application/json")
+    return make_response(jsonify(stat))

 @opds.route("/opds/thumb_240_240/<book_id>")


@@ -41,9 +41,9 @@ class ReverseProxied(object):
"""Wrap the application in this middleware and configure the """Wrap the application in this middleware and configure the
front-end server to add these headers, to let you quietly bind front-end server to add these headers, to let you quietly bind
this to a URL other than / and to an HTTP scheme that is this to a URL other than / and to an HTTP scheme that is
different than what is used locally. different from what is used locally.
Code courtesy of: http://flask.pocoo.org/snippets/35/ Code courtesy of: https://flask.pocoo.org/snippets/35/
In nginx: In nginx:
location /myprefix { location /myprefix {
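The middleware itself follows the well-known Flask snippet the docstring links to. A generic sketch of the idea (not necessarily calibre-web's exact code):

    class ReverseProxiedSketch(object):
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            # Honour the prefix and scheme the front-end server announces via headers.
            script_name = environ.get('HTTP_X_SCRIPT_NAME', '')
            if script_name:
                environ['SCRIPT_NAME'] = script_name
                path_info = environ.get('PATH_INFO', '')
                if path_info.startswith(script_name):
                    environ['PATH_INFO'] = path_info[len(script_name):]
            scheme = environ.get('HTTP_X_SCHEME', '')
            if scheme:
                environ['wsgi.url_scheme'] = scheme
            return self.app(environ, start_response)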


@@ -24,9 +24,9 @@ from flask_babel import format_date
from flask_babel import gettext as _ from flask_babel import gettext as _
from sqlalchemy.sql.expression import func, not_, and_, or_, text, true from sqlalchemy.sql.expression import func, not_, and_, or_, text, true
from sqlalchemy.sql.functions import coalesce from sqlalchemy.sql.functions import coalesce
from sqlalchemy import exists
from . import logger, db, calibre_db, config, ub from . import logger, db, calibre_db, config, ub
from .string_helper import strip_whitespaces
from .usermanagement import login_required_if_no_ano from .usermanagement import login_required_if_no_ano
from .render_template import render_title_template from .render_template import render_title_template
from .pagination import Pagination from .pagination import Pagination
@@ -244,7 +244,8 @@ def render_adv_search_results(term, offset=None, order=None, limit=None):
pagination = None pagination = None
cc = calibre_db.get_cc_columns(config, filter_config_custom_read=True) cc = calibre_db.get_cc_columns(config, filter_config_custom_read=True)
calibre_db.session.connection().connection.connection.create_function("lower", 1, db.lcase) calibre_db.create_functions()
# calibre_db.session.connection().connection.connection.create_function("lower", 1, db.lcase)
query = calibre_db.generate_linked_query(config.config_read_column, db.Books) query = calibre_db.generate_linked_query(config.config_read_column, db.Books)
q = query.outerjoin(db.books_series_link, db.Books.id == db.books_series_link.c.book)\ q = query.outerjoin(db.books_series_link, db.Books.id == db.books_series_link.c.book)\
.outerjoin(db.Series)\ .outerjoin(db.Series)\
@@ -257,21 +258,21 @@ def render_adv_search_results(term, offset=None, order=None, limit=None):
tags['include_' + element] = term.get('include_' + element) tags['include_' + element] = term.get('include_' + element)
tags['exclude_' + element] = term.get('exclude_' + element) tags['exclude_' + element] = term.get('exclude_' + element)
author_name = term.get("author_name") author_name = term.get("authors")
book_title = term.get("book_title") book_title = term.get("title")
publisher = term.get("publisher") publisher = term.get("publisher")
pub_start = term.get("publishstart") pub_start = term.get("publishstart")
pub_end = term.get("publishend") pub_end = term.get("publishend")
rating_low = term.get("ratinghigh") rating_low = term.get("ratinghigh")
rating_high = term.get("ratinglow") rating_high = term.get("ratinglow")
description = term.get("comment") description = term.get("comments")
read_status = term.get("read_status") read_status = term.get("read_status")
if author_name: if author_name:
author_name = author_name.strip().lower().replace(',', '|') author_name = strip_whitespaces(author_name).lower().replace(',', '|')
if book_title: if book_title:
book_title = book_title.strip().lower() book_title = strip_whitespaces(book_title).lower()
if publisher: if publisher:
publisher = publisher.strip().lower() publisher = strip_whitespaces(publisher).lower()
search_term = [] search_term = []
cc_present = False cc_present = False


@@ -23,7 +23,7 @@ import json
import os import os
import sys import sys
from flask import Blueprint, Response, request, url_for from flask import Blueprint, request, url_for, make_response, jsonify
from .cw_login import current_user from .cw_login import current_user
from flask_babel import get_locale from flask_babel import get_locale
from sqlalchemy.exc import InvalidRequestError, OperationalError from sqlalchemy.exc import InvalidRequestError, OperationalError
@@ -33,7 +33,6 @@ from cps.services.Metadata import Metadata
from . import constants, logger, ub, web_server from . import constants, logger, ub, web_server
from .usermanagement import user_login_required from .usermanagement import user_login_required
# current_milli_time = lambda: int(round(time() * 1000))
meta = Blueprint("metadata", __name__) meta = Blueprint("metadata", __name__)
@@ -90,7 +89,7 @@ def metadata_provider():
provider.append( provider.append(
{"name": c.__name__, "active": ac, "initial": ac, "id": c.__id__} {"name": c.__name__, "active": ac, "initial": ac, "id": c.__id__}
) )
return Response(json.dumps(provider), mimetype="application/json") return make_response(jsonify(provider))
@meta.route("/metadata/provider", methods=["POST"]) @meta.route("/metadata/provider", methods=["POST"])
@@ -115,9 +114,7 @@ def metadata_change_active_provider(prov_name):
provider = next((c for c in cl if c.__id__ == prov_name), None) provider = next((c for c in cl if c.__id__ == prov_name), None)
if provider is not None: if provider is not None:
data = provider.search(new_state.get("query", "")) data = provider.search(new_state.get("query", ""))
return Response( return make_response(jsonify([asdict(x) for x in data]))
json.dumps([asdict(x) for x in data]), mimetype="application/json"
)
return "" return ""
@@ -130,7 +127,7 @@ def metadata_search():
locale = get_locale() locale = get_locale()
if query: if query:
static_cover = url_for("static", filename="generic_cover.jpg") static_cover = url_for("static", filename="generic_cover.jpg")
# start = current_milli_time() # ret = cl[0].search(query, static_cover, locale)
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor: with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
meta = { meta = {
executor.submit(c.search, query, static_cover, locale): c executor.submit(c.search, query, static_cover, locale): c
@@ -139,5 +136,4 @@ def metadata_search():
} }
for future in concurrent.futures.as_completed(meta): for future in concurrent.futures.as_completed(meta):
data.extend([asdict(x) for x in future.result() if x]) data.extend([asdict(x) for x in future.result() if x])
# log.info({'Time elapsed {}'.format(current_milli_time()-start)}) return make_response(jsonify(data))
return Response(json.dumps(data), mimetype="application/json")


@@ -42,6 +42,7 @@ class BackgroundScheduler:
         if cls._instance is None:
             cls._instance = super(BackgroundScheduler, cls).__new__(cls)
             cls.log = logger.create()
+            logger.logging.getLogger('tzlocal').setLevel(logger.logging.WARNING)
             cls.scheduler = BScheduler()
             cls.scheduler.start()
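The added line quiets the tzlocal package, which otherwise logs on every scheduler start-up. The general pattern for silencing a chatty third-party logger (illustrative, not part of the patch):

    import logging

    # Raise the threshold of a third-party logger without touching our own handlers.
    logging.getLogger('tzlocal').setLevel(logging.WARNING)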


@@ -92,7 +92,7 @@ def send_messsage(token, msg):
     if creds and creds.expired and creds.refresh_token:
         creds.refresh(Request())
     service = build('gmail', 'v1', credentials=creds)
-    message_as_bytes = msg.as_bytes()  # the message should converted from string to bytes.
+    message_as_bytes = msg.as_bytes()  # the message should be converted from string to bytes.
     message_as_base64 = base64.urlsafe_b64encode(message_as_bytes)  # encode in base64 (printable letters coding)
     raw = message_as_base64.decode()  # convert to something JSON serializable
     body = {'raw': raw}


@@ -117,7 +117,7 @@ def get_author_info(author_name):
 def get_other_books(author_info, library_books=None):

-    # Get all identifiers (ISBN, Goodreads, etc) and filter author's books by that list so we show fewer duplicates
+    # Get all identifiers (ISBN, Goodreads, etc.) and filter author's books by that list so we show fewer duplicates
     # Note: Not all images will be shown, even though they're available on Goodreads.com.
     # See https://www.goodreads.com/topic/show/18213769-goodreads-book-images


@@ -199,7 +199,7 @@ class CalibreTask:
             self.run(*args)
         except Exception as ex:
             self._handleError(str(ex))
-            log.error_or_exception(ex)
+            log.exception(ex)

         self.end_time = datetime.now()

@@ -235,7 +235,7 @@ class CalibreTask:

     @property
     def dead(self):
-        """Determines whether or not this task can be garbage collected
+        """Determines whether this task can be garbage collected

         We have a separate dictating this because there may be certain tasks that want to override this
         """


@@ -21,7 +21,7 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
import sys import sys
from datetime import datetime from datetime import datetime, timezone
from flask import Blueprint, flash, redirect, request, url_for, abort from flask import Blueprint, flash, redirect, request, url_for, abort
from flask_babel import gettext as _ from flask_babel import gettext as _
@@ -80,7 +80,7 @@ def add_to_shelf(shelf_id, book_id):
return "%s is a invalid Book Id. Could not be added to Shelf" % book_id, 400 return "%s is a invalid Book Id. Could not be added to Shelf" % book_id, 400
shelf.books.append(ub.BookShelf(shelf=shelf.id, book_id=book_id, order=maxOrder + 1)) shelf.books.append(ub.BookShelf(shelf=shelf.id, book_id=book_id, order=maxOrder + 1))
shelf.last_modified = datetime.utcnow() shelf.last_modified = datetime.now(timezone.utc)
try: try:
ub.session.merge(shelf) ub.session.merge(shelf)
ub.session.commit() ub.session.commit()
@@ -139,7 +139,7 @@ def search_to_shelf(shelf_id):
for book in books_for_shelf: for book in books_for_shelf:
maxOrder += 1 maxOrder += 1
shelf.books.append(ub.BookShelf(shelf=shelf.id, book_id=book, order=maxOrder)) shelf.books.append(ub.BookShelf(shelf=shelf.id, book_id=book, order=maxOrder))
shelf.last_modified = datetime.utcnow() shelf.last_modified = datetime.now(timezone.utc)
try: try:
ub.session.merge(shelf) ub.session.merge(shelf)
ub.session.commit() ub.session.commit()
@@ -185,7 +185,7 @@ def remove_from_shelf(shelf_id, book_id):
try: try:
ub.session.delete(book_shelf) ub.session.delete(book_shelf)
shelf.last_modified = datetime.utcnow() shelf.last_modified = datetime.now(timezone.utc)
ub.session.commit() ub.session.commit()
except (OperationalError, InvalidRequestError) as e: except (OperationalError, InvalidRequestError) as e:
ub.session.rollback() ub.session.rollback()
@@ -250,7 +250,7 @@ def show_simpleshelf(shelf_id):
return render_show_shelf(2, shelf_id, 1, None) return render_show_shelf(2, shelf_id, 1, None)
@shelf.route("/shelf/<int:shelf_id>", defaults={"sort_param": "order", 'page': 1}) @shelf.route("/shelf/<int:shelf_id>", defaults={"sort_param": "stored", 'page': 1})
@shelf.route("/shelf/<int:shelf_id>/<sort_param>", defaults={'page': 1}) @shelf.route("/shelf/<int:shelf_id>/<sort_param>", defaults={'page': 1})
@shelf.route("/shelf/<int:shelf_id>/<sort_param>/<int:page>") @shelf.route("/shelf/<int:shelf_id>/<sort_param>/<int:page>")
@login_required_if_no_ano @login_required_if_no_ano
@@ -271,7 +271,7 @@ def order_shelf(shelf_id):
for book in books_in_shelf: for book in books_in_shelf:
setattr(book, 'order', to_save[str(book.book_id)]) setattr(book, 'order', to_save[str(book.book_id)])
counter += 1 counter += 1
# if order different from before -> shelf.last_modified = datetime.utcnow() # if order different from before -> shelf.last_modified = datetime.now(timezone.utc)
try: try:
ub.session.commit() ub.session.commit()
except (OperationalError, InvalidRequestError) as e: except (OperationalError, InvalidRequestError) as e:
@@ -418,29 +418,37 @@ def change_shelf_order(shelf_id, order):
def render_show_shelf(shelf_type, shelf_id, page_no, sort_param): def render_show_shelf(shelf_type, shelf_id, page_no, sort_param):
shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first() shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
status = current_user.get_view_property("shelf", 'man')
# check user is allowed to access shelf # check user is allowed to access shelf
if shelf and check_shelf_view_permissions(shelf): if shelf and check_shelf_view_permissions(shelf):
if shelf_type == 1: if shelf_type == 1:
# order = [ub.BookShelf.order.asc()] if status != 'on':
if sort_param == 'pubnew': if sort_param == 'stored':
change_shelf_order(shelf_id, [db.Books.pubdate.desc()]) sort_param = current_user.get_view_property("shelf", 'stored')
if sort_param == 'pubold': else:
change_shelf_order(shelf_id, [db.Books.pubdate]) current_user.set_view_property("shelf", 'stored', sort_param)
if sort_param == 'abc': if sort_param == 'pubnew':
change_shelf_order(shelf_id, [db.Books.sort]) change_shelf_order(shelf_id, [db.Books.pubdate.desc()])
if sort_param == 'zyx': if sort_param == 'pubold':
change_shelf_order(shelf_id, [db.Books.sort.desc()]) change_shelf_order(shelf_id, [db.Books.pubdate])
if sort_param == 'new': if sort_param == 'shelfnew':
change_shelf_order(shelf_id, [db.Books.timestamp.desc()]) change_shelf_order(shelf_id, [ub.BookShelf.date_added.desc()])
if sort_param == 'old': if sort_param == 'shelfold':
change_shelf_order(shelf_id, [db.Books.timestamp]) change_shelf_order(shelf_id, [ub.BookShelf.date_added])
if sort_param == 'authaz': if sort_param == 'abc':
change_shelf_order(shelf_id, [db.Books.author_sort.asc(), db.Series.name, db.Books.series_index]) change_shelf_order(shelf_id, [db.Books.sort])
if sort_param == 'authza': if sort_param == 'zyx':
change_shelf_order(shelf_id, [db.Books.author_sort.desc(), change_shelf_order(shelf_id, [db.Books.sort.desc()])
db.Series.name.desc(), if sort_param == 'new':
db.Books.series_index.desc()]) change_shelf_order(shelf_id, [db.Books.timestamp.desc()])
if sort_param == 'old':
change_shelf_order(shelf_id, [db.Books.timestamp])
if sort_param == 'authaz':
change_shelf_order(shelf_id, [db.Books.author_sort.asc(), db.Series.name, db.Books.series_index])
if sort_param == 'authza':
change_shelf_order(shelf_id, [db.Books.author_sort.desc(),
db.Series.name.desc(),
db.Books.series_index.desc()])
page = "shelf.html" page = "shelf.html"
pagesize = 0 pagesize = 0
else: else:
@@ -453,7 +461,7 @@ def render_show_shelf(shelf_type, shelf_id, page_no, sort_param):
[ub.BookShelf.order.asc()], [ub.BookShelf.order.asc()],
True, config.config_read_column, True, config.config_read_column,
ub.BookShelf, ub.BookShelf.book_id == db.Books.id) ub.BookShelf, ub.BookShelf.book_id == db.Books.id)
# delete chelf entries where book is not existent anymore, can happen if book is deleted outside calibre-web # delete shelf entries where book is not existent anymore, can happen if book is deleted outside calibre-web
wrong_entries = calibre_db.session.query(ub.BookShelf) \ wrong_entries = calibre_db.session.query(ub.BookShelf) \
.join(db.Books, ub.BookShelf.book_id == db.Books.id, isouter=True) \ .join(db.Books, ub.BookShelf.book_id == db.Books.id, isouter=True) \
.filter(db.Books.id == None).all() .filter(db.Books.id == None).all()
@@ -472,7 +480,9 @@ def render_show_shelf(shelf_type, shelf_id, page_no, sort_param):
pagination=pagination, pagination=pagination,
title=_("Shelf: '%(name)s'", name=shelf.name), title=_("Shelf: '%(name)s'", name=shelf.name),
shelf=shelf, shelf=shelf,
page="shelf") page="shelf",
status=status,
order=sort_param)
else: else:
flash(_("Error opening shelf. Shelf does not exist or is not accessible"), category="error") flash(_("Error opening shelf. Shelf does not exist or is not accessible"), category="error")
return redirect(url_for("web.index")) return redirect(url_for("web.index"))

108
cps/static/css/basic.css Normal file

@@ -0,0 +1,108 @@
body {
margin: 0;
}
nav {
height: 75px;
padding: 5px 20px;
width: 100%;
display: table;
box-sizing: border-box;
border-bottom: 1px solid black;
}
nav > * {
display: inline-block;
display: table-cell;
vertical-align: middle;
float: none;
text-align: center;
width: auto;
}
nav > *:first-child {
text-align: left;
width: 1%;
}
.theme{
text-align: center;
margin: 10px;
}
nav > *:last-child {
text-align: right;
width: 1%;
}
nav > a {
color: black;
margin: 0 20px;
}
.search {
margin: auto auto;
}
form > input {
width: 18ch;
padding-left: 4px;
}
form > * {
height: 50px;
background-color: white;
border-radius: 0;
border: 1px solid #ccc;
padding: 0;
margin: 0;
display: inline-block;
vertical-align: top;
}
form > span {
margin-left: -5px;
}
button {
border: none;
padding: 0 10px;
margin: 0;
width: 160px;
height: 100%;
background-color: white;
}
.body {
padding: 5px 20px;
}
a {
color: black;
}
img {
width: 150px;
height: 250px;
object-fit: cover;
}
.listing {
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
margin-right: 20px;
}
.pagination {
padding: 10px 0;
height: 20px;
font-weight: 700;
}
.pagination > div {
float: left;
}
.pagination > div:last-child {
float: right;
}


@@ -3268,6 +3268,10 @@ div.btn-group[role=group][aria-label="Download, send to Kindle, reading"] > div.
left: auto !important left: auto !important
} }
ul.dropdown-menu.offscreen {
margin: 2px -160px 0
}
div.btn-group[role=group][aria-label="Download, send to Kindle, reading"] .dropdown-menu.offscreen { div.btn-group[role=group][aria-label="Download, send to Kindle, reading"] .dropdown-menu.offscreen {
position: fixed; position: fixed;
top: 120px; top: 120px;
@@ -4333,6 +4337,7 @@ body.advanced_search > div.container-fluid > div.row-fluid > div.col-sm-10 > div
.navbar-right > li > ul.dropdown-menu.offscreen { .navbar-right > li > ul.dropdown-menu.offscreen {
right: -10px right: -10px
} }
.login .plexBack, body.login > div.container-fluid > div.row-fluid > div.col-sm-2, body.login > div.navbar.navbar-default.navbar-static-top > div > form { .login .plexBack, body.login > div.container-fluid > div.row-fluid > div.col-sm-2, body.login > div.navbar.navbar-default.navbar-static-top > div > form {
@@ -7148,12 +7153,11 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div.
} }
body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm-3, body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm-3 { body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm-3, body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm-3 {
max-width: 130px; position: relative;
width: 130px; max-width: unset;
height: 180px; width: 100%;
margin: 0; height: unset;
padding: 15px; padding: 15px;
position: absolute
} }
body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9, body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9 { body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9, body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9 {
@@ -7162,10 +7166,6 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div.
width: 100% width: 100%
} }
body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9 > .form-group:nth-child(1), body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9 > .form-group:nth-child(2), body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9 > .form-group:nth-child(1), body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9 > .form-group:nth-child(2) {
padding-left: 120px
}
#deleteButton, body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm-3 > div.text-center > #delete, body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm-3 > div.text-center > #delete { #deleteButton, body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm-3 > div.text-center > #delete, body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm-3 > div.text-center > #delete {
top: 48px; top: 48px;
height: 42px height: 42px
@@ -7951,3 +7951,5 @@ div.comments[data-readmore] {
transition: height 300ms; transition: height 300ms;
overflow: hidden overflow: hidden
} }
.dropdown-menu > .offscreen

View File

@@ -15,5 +15,5 @@
.blackTheme { .blackTheme {
background: #000; background: #000;
color: #fff color: #fff;
} }

View File

@@ -157,7 +157,6 @@ fieldset[disabled] .twitter-typeahead .tt-input {
list-style: none; list-style: none;
font-size: 14px; font-size: 14px;
background-color: #ffffff; background-color: #ffffff;
border: 1px solid #cccccc;
border: 1px solid rgba(0, 0, 0, 0.15); border: 1px solid rgba(0, 0, 0, 0.15);
border-radius: 4px; border-radius: 4px;
-webkit-box-shadow: 0 6px 12px rgba(0, 0, 0, 0.175); -webkit-box-shadow: 0 6px 12px rgba(0, 0, 0, 0.175);

View File

@@ -77,7 +77,6 @@ body {
} }
#panels a { #panels a {
visibility: hidden;
width: 18px; width: 18px;
height: 20px; height: 20px;
overflow: hidden; overflow: hidden;
@@ -512,12 +511,12 @@ input:-moz-placeholder { color: #454545; }
position: fixed; position: fixed;
top: 50%; top: 50%;
left: 50%; left: 50%;
width: 630px; transform: translate(-50%, -50%);
width: 100vw;
height: auto; height: auto;
max-width: 630px;
z-index: 2000; z-index: 2000;
visibility: hidden; visibility: hidden;
margin-left: -320px;
margin-top: -160px;
} }
.overlay { .overlay {

cps/static/js/caliBlur.js Executable file → Normal file
View File

View File

@@ -3,9 +3,9 @@
*/ */
/* global Bloodhound, language, Modernizr, tinymce, getPath */ /* global Bloodhound, language, Modernizr, tinymce, getPath */
if ($("#description").length) { if ($("#comments").length && typeof tinymce !== "undefined") {
tinymce.init({ tinymce.init({
selector: "#description", selector: "#comments",
plugins: 'code', plugins: 'code',
branding: false, branding: false,
menubar: "edit view format", menubar: "edit view format",
@@ -93,7 +93,7 @@ var authors = new Bloodhound({
}, },
}); });
$(".form-group #bookAuthor").typeahead( $(".form-group #authors").typeahead(
{ {
highlight: true, highlight: true,
minLength: 1, minLength: 1,
@@ -243,13 +243,13 @@ $("#search").on("change input.typeahead:selected", function(event) {
}); });
}); });
$("#btn-upload-format").on("change", function () { /*$("#btn-upload-format").on("change", function () {
var filename = $(this).val(); var filename = $(this).val();
if (filename.substring(3, 11) === "fakepath") { if (filename.substring(3, 11) === "fakepath") {
filename = filename.substring(12); filename = filename.substring(12);
} // Remove c:\fake at beginning from localhost chrome } // Remove c:\fake at beginning from localhost chrome
$("#upload-format").text(filename); $("#upload-format").text(filename);
}); });*/
$("#btn-upload-cover").on("change", function () { $("#btn-upload-cover").on("change", function () {
var filename = $(this).val(); var filename = $(this).val();
@@ -261,8 +261,8 @@ $("#btn-upload-cover").on("change", function () {
$("#xchange").click(function () { $("#xchange").click(function () {
this.blur(); this.blur();
var title = $("#book_title").val(); var title = $("#title").val();
$("#book_title").val($("#bookAuthor").val()); $("#title").val($("#authors").val());
$("#bookAuthor").val(title); $("#authors").val(title);
}); });

View File

@@ -38,12 +38,12 @@ $(function () {
} }
function populateForm (book) { function populateForm (book) {
tinymce.get("description").setContent(book.description); tinymce.get("comments").setContent(book.description);
var uniqueTags = getUniqueValues('tags', book) var uniqueTags = getUniqueValues('tags', book)
var uniqueLanguages = getUniqueValues('languages', book) var uniqueLanguages = getUniqueValues('languages', book)
var ampSeparatedAuthors = (book.authors || []).join(" & "); var ampSeparatedAuthors = (book.authors || []).join(" & ");
$("#bookAuthor").val(ampSeparatedAuthors); $("#authors").val(ampSeparatedAuthors);
$("#book_title").val(book.title); $("#title").val(book.title);
$("#tags").val(uniqueTags.join(", ")); $("#tags").val(uniqueTags.join(", "));
$("#languages").val(uniqueLanguages.join(", ")); $("#languages").val(uniqueLanguages.join(", "));
$("#rating").data("rating").setValue(Math.round(book.rating)); $("#rating").data("rating").setValue(Math.round(book.rating));
@@ -172,7 +172,7 @@ $(function () {
$("#get_meta").click(function () { $("#get_meta").click(function () {
populate_provider(); populate_provider();
var bookTitle = $("#book_title").val(); var bookTitle = $("#title").val();
$("#keyword").val(bookTitle); $("#keyword").val(bookTitle);
keyword = bookTitle; keyword = bookTitle;
doSearch(bookTitle); doSearch(bookTitle);

View File

@@ -0,0 +1 @@
!function(a){a.fn.datepicker.dates.sl={days:["Nedelja","Ponedeljek","Torek","Sreda","Četrtek","Petek","Sobota"],daysShort:["Ned","Pon","Tor","Sre","Čet","Pet","Sob"],daysMin:["Ne","Po","To","Sr","Če","Pe","So"],months:["Januar","Februar","Marec","April","Maj","Junij","Julij","Avgust","September","Oktober","November","December"],monthsShort:["Jan","Feb","Mar","Apr","Maj","Jun","Jul","Avg","Sep","Okt","Nov","Dec"],today:"Danes",weekStart:1}}(jQuery);
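The one-liner above registers a Slovenian locale for bootstrap-datepicker under $.fn.datepicker.dates.sl. A minimal usage sketch (not part of this commit; the selector is purely illustrative):

// Hypothetical example: activate the "sl" locale registered above.
$("#pubdate").datepicker({ language: "sl" });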

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

View File

@@ -183,7 +183,7 @@ try {
]; ];
$.each(sequences, function(ignore, sequence) { $.each(sequences, function(ignore, sequence) {
for (j = 0; j < word.length - 2; j += 1) { for (j = 0; j < word.length - 2; j += 1) {
// iterate the word trough a sliding window of size 3: // iterate the word through a sliding window of size 3:
if ( if (
sequence.indexOf( sequence.indexOf(
word.toLowerCase().substring(j, j + 3) word.toLowerCase().substring(j, j + 3)

View File

@@ -2,7 +2,7 @@
* @licstart The following is the entire license notice for the * @licstart The following is the entire license notice for the
* JavaScript code in this page * JavaScript code in this page
* *
* Copyright 2023 Mozilla Foundation * Copyright 2024 Mozilla Foundation
* *
* Licensed under the Apache License, Version 2.0 (the "License"); * Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License. * you may not use this file except in compliance with the License.
@@ -446,7 +446,7 @@ class ProgressBar {
} }
} }
setDisableAutoFetch(delay = 5000) { setDisableAutoFetch(delay = 5000) {
if (isNaN(this.#percent)) { if (this.#percent === 100 || isNaN(this.#percent)) {
return; return;
} }
if (this.#disableAutoFetchTimeout) { if (this.#disableAutoFetchTimeout) {
@@ -535,15 +535,20 @@ function toggleExpandedBtn(button, toggle, view = null) {
;// CONCATENATED MODULE: ./web/app_options.js ;// CONCATENATED MODULE: ./web/app_options.js
{ {
var compatibilityParams = Object.create(null); var compatParams = new Map();
const userAgent = navigator.userAgent || ""; const userAgent = navigator.userAgent || "";
const platform = navigator.platform || ""; const platform = navigator.platform || "";
const maxTouchPoints = navigator.maxTouchPoints || 1; const maxTouchPoints = navigator.maxTouchPoints || 1;
const isAndroid = /Android/.test(userAgent); const isAndroid = /Android/.test(userAgent);
const isIOS = /\b(iPad|iPhone|iPod)(?=;)/.test(userAgent) || platform === "MacIntel" && maxTouchPoints > 1; const isIOS = /\b(iPad|iPhone|iPod)(?=;)/.test(userAgent) || platform === "MacIntel" && maxTouchPoints > 1;
(function checkCanvasSizeLimitation() { (function () {
if (isIOS || isAndroid) { if (isIOS || isAndroid) {
compatibilityParams.maxCanvasPixels = 5242880; compatParams.set("maxCanvasPixels", 5242880);
}
})();
(function () {
if (isAndroid) {
compatParams.set("useSystemFonts", false);
} }
})(); })();
} }
@@ -552,9 +557,21 @@ const OptionKind = {
VIEWER: 0x02, VIEWER: 0x02,
API: 0x04, API: 0x04,
WORKER: 0x08, WORKER: 0x08,
EVENT_DISPATCH: 0x10,
PREFERENCE: 0x80 PREFERENCE: 0x80
}; };
const Type = {
BOOLEAN: 0x01,
NUMBER: 0x02,
OBJECT: 0x04,
STRING: 0x08,
UNDEFINED: 0x10
};
const defaultOptions = { const defaultOptions = {
allowedGlobalEvents: {
value: null,
kind: OptionKind.BROWSER
},
canvasMaxAreaInBytes: { canvasMaxAreaInBytes: {
value: -1, value: -1,
kind: OptionKind.BROWSER + OptionKind.API kind: OptionKind.BROWSER + OptionKind.API
@@ -563,6 +580,16 @@ const defaultOptions = {
value: false, value: false,
kind: OptionKind.BROWSER kind: OptionKind.BROWSER
}, },
localeProperties: {
value: {
lang: navigator.language || "en-US"
},
kind: OptionKind.BROWSER
},
nimbusDataStr: {
value: "",
kind: OptionKind.BROWSER
},
supportsCaretBrowsingMode: { supportsCaretBrowsingMode: {
value: false, value: false,
kind: OptionKind.BROWSER kind: OptionKind.BROWSER
@@ -587,6 +614,14 @@ const defaultOptions = {
value: true, value: true,
kind: OptionKind.BROWSER kind: OptionKind.BROWSER
}, },
toolbarDensity: {
value: 0,
kind: OptionKind.BROWSER + OptionKind.EVENT_DISPATCH
},
altTextLearnMoreUrl: {
value: "",
kind: OptionKind.VIEWER + OptionKind.PREFERENCE
},
annotationEditorMode: { annotationEditorMode: {
value: 0, value: 0,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE kind: OptionKind.VIEWER + OptionKind.PREFERENCE
@@ -619,6 +654,14 @@ const defaultOptions = {
value: false, value: false,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE kind: OptionKind.VIEWER + OptionKind.PREFERENCE
}, },
enableAltText: {
value: false,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE
},
enableGuessAltText: {
value: true,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE
},
enableHighlightEditor: { enableHighlightEditor: {
value: false, value: false,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE kind: OptionKind.VIEWER + OptionKind.PREFERENCE
@@ -627,10 +670,6 @@ const defaultOptions = {
value: false, value: false,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE kind: OptionKind.VIEWER + OptionKind.PREFERENCE
}, },
enableML: {
value: false,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE
},
enablePermissions: { enablePermissions: {
value: false, value: false,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE kind: OptionKind.VIEWER + OptionKind.PREFERENCE
@@ -643,8 +682,8 @@ const defaultOptions = {
value: true, value: true,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE kind: OptionKind.VIEWER + OptionKind.PREFERENCE
}, },
enableStampEditor: { enableUpdatedAddImage: {
value: true, value: false,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE kind: OptionKind.VIEWER + OptionKind.PREFERENCE
}, },
externalLinkRel: { externalLinkRel: {
@@ -775,6 +814,11 @@ const defaultOptions = {
value: "../web/standard_fonts/", value: "../web/standard_fonts/",
kind: OptionKind.API kind: OptionKind.API
}, },
useSystemFonts: {
value: undefined,
kind: OptionKind.API,
type: Type.BOOLEAN + Type.UNDEFINED
},
verbosity: { verbosity: {
value: 1, value: 1,
kind: OptionKind.API kind: OptionKind.API
@@ -784,7 +828,7 @@ const defaultOptions = {
kind: OptionKind.WORKER kind: OptionKind.WORKER
}, },
workerSrc: { workerSrc: {
value: "../build/pdf.worker.mjs", value: "../build/pdf.worker.js",
kind: OptionKind.WORKER kind: OptionKind.WORKER
} }
}; };
@@ -807,62 +851,80 @@ const defaultOptions = {
value: false, value: false,
kind: OptionKind.VIEWER kind: OptionKind.VIEWER
}; };
defaultOptions.locale = {
value: navigator.language || "en-US",
kind: OptionKind.VIEWER
};
} }
const userOptions = Object.create(null); const userOptions = new Map();
{ {
for (const name in compatibilityParams) { for (const [name, value] of compatParams) {
userOptions[name] = compatibilityParams[name]; userOptions.set(name, value);
} }
} }
class AppOptions { class AppOptions {
static eventBus;
constructor() { constructor() {
throw new Error("Cannot initialize AppOptions."); throw new Error("Cannot initialize AppOptions.");
} }
static get(name) { static get(name) {
return userOptions[name] ?? defaultOptions[name]?.value ?? undefined; return userOptions.has(name) ? userOptions.get(name) : defaultOptions[name]?.value;
} }
static getAll(kind = null, defaultOnly = false) { static getAll(kind = null, defaultOnly = false) {
const options = Object.create(null); const options = Object.create(null);
for (const name in defaultOptions) { for (const name in defaultOptions) {
const defaultOption = defaultOptions[name]; const defaultOpt = defaultOptions[name];
if (kind && !(kind & defaultOption.kind)) { if (kind && !(kind & defaultOpt.kind)) {
continue; continue;
} }
options[name] = defaultOnly ? defaultOption.value : userOptions[name] ?? defaultOption.value; options[name] = !defaultOnly && userOptions.has(name) ? userOptions.get(name) : defaultOpt.value;
} }
return options; return options;
} }
static set(name, value) { static set(name, value) {
userOptions[name] = value; this.setAll({
[name]: value
});
} }
static setAll(options, init = false) { static setAll(options, prefs = false) {
if (init) { let events;
if (this.get("disablePreferences")) {
return;
}
for (const name in userOptions) {
if (compatibilityParams[name] !== undefined) {
continue;
}
console.warn("setAll: The Preferences may override manually set AppOptions; " + 'please use the "disablePreferences"-option in order to prevent that.');
break;
}
}
for (const name in options) { for (const name in options) {
userOptions[name] = options[name]; const defaultOpt = defaultOptions[name],
userOpt = options[name];
if (!defaultOpt || !(typeof userOpt === typeof defaultOpt.value || Type[(typeof userOpt).toUpperCase()] & defaultOpt.type)) {
continue;
}
const {
kind
} = defaultOpt;
if (prefs && !(kind & OptionKind.BROWSER || kind & OptionKind.PREFERENCE)) {
continue;
}
if (this.eventBus && kind & OptionKind.EVENT_DISPATCH) {
(events ||= new Map()).set(name, userOpt);
}
userOptions.set(name, userOpt);
}
if (events) {
for (const [name, value] of events) {
this.eventBus.dispatch(name.toLowerCase(), {
source: this,
value
});
}
} }
} }
static remove(name) { }
delete userOptions[name]; {
const val = compatibilityParams[name]; AppOptions._checkDisablePreferences = () => {
if (val !== undefined) { if (AppOptions.get("disablePreferences")) {
userOptions[name] = val; return true;
} }
} for (const [name] of userOptions) {
if (compatParams.has(name)) {
continue;
}
console.warn("The Preferences may override manually set AppOptions; " + 'please use the "disablePreferences"-option to prevent that.');
break;
}
return false;
};
} }
;// CONCATENATED MODULE: ./web/pdf_link_service.js ;// CONCATENATED MODULE: ./web/pdf_link_service.js
@@ -1171,26 +1233,27 @@ class PDFLinkService {
if (!(typeof zoom === "object" && typeof zoom?.name === "string")) { if (!(typeof zoom === "object" && typeof zoom?.name === "string")) {
return false; return false;
} }
const argsLen = args.length;
let allowNull = true; let allowNull = true;
switch (zoom.name) { switch (zoom.name) {
case "XYZ": case "XYZ":
if (args.length !== 3) { if (argsLen < 2 || argsLen > 3) {
return false; return false;
} }
break; break;
case "Fit": case "Fit":
case "FitB": case "FitB":
return args.length === 0; return argsLen === 0;
case "FitH": case "FitH":
case "FitBH": case "FitBH":
case "FitV": case "FitV":
case "FitBV": case "FitBV":
if (args.length !== 1) { if (argsLen > 1) {
return false; return false;
} }
break; break;
case "FitR": case "FitR":
if (args.length !== 4) { if (argsLen !== 4) {
return false; return false;
} }
allowNull = false; allowNull = false;
@@ -1240,7 +1303,6 @@ const {
noContextMenu, noContextMenu,
normalizeUnicode, normalizeUnicode,
OPS, OPS,
Outliner,
PasswordResponses, PasswordResponses,
PDFDataRangeTransport, PDFDataRangeTransport,
PDFDateString, PDFDateString,
@@ -1248,12 +1310,10 @@ const {
PermissionFlag, PermissionFlag,
PixelsPerInch, PixelsPerInch,
RenderingCancelledException, RenderingCancelledException,
renderTextLayer,
setLayerDimensions, setLayerDimensions,
shadow, shadow,
TextLayer, TextLayer,
UnexpectedResponseException, UnexpectedResponseException,
updateTextLayer,
Util, Util,
VerbosityLevel, VerbosityLevel,
version, version,
@@ -1401,40 +1461,28 @@ class BaseExternalServices {
updateEditorStates(data) { updateEditorStates(data) {
throw new Error("Not implemented: updateEditorStates"); throw new Error("Not implemented: updateEditorStates");
} }
async getNimbusExperimentData() {}
async getGlobalEventNames() {
return null;
}
dispatchGlobalEvent(_event) {} dispatchGlobalEvent(_event) {}
} }
;// CONCATENATED MODULE: ./web/preferences.js ;// CONCATENATED MODULE: ./web/preferences.js
class BasePreferences { class BasePreferences {
#browserDefaults = Object.freeze({
canvasMaxAreaInBytes: -1,
isInAutomation: false,
supportsCaretBrowsingMode: false,
supportsDocumentFonts: true,
supportsIntegratedFind: false,
supportsMouseWheelZoomCtrlKey: true,
supportsMouseWheelZoomMetaKey: true,
supportsPinchToZoom: true
});
#defaults = Object.freeze({ #defaults = Object.freeze({
altTextLearnMoreUrl: "",
annotationEditorMode: 0, annotationEditorMode: 0,
annotationMode: 2, annotationMode: 2,
cursorToolOnLoad: 0, cursorToolOnLoad: 0,
defaultZoomDelay: 400, defaultZoomDelay: 400,
defaultZoomValue: "", defaultZoomValue: "",
disablePageLabels: false, disablePageLabels: false,
enableAltText: false,
enableGuessAltText: true,
enableHighlightEditor: false, enableHighlightEditor: false,
enableHighlightFloatingButton: false, enableHighlightFloatingButton: false,
enableML: false,
enablePermissions: false, enablePermissions: false,
enablePrintAutoRotate: true, enablePrintAutoRotate: true,
enableScripting: true, enableScripting: true,
enableStampEditor: true, enableUpdatedAddImage: false,
externalLinkTarget: 0, externalLinkTarget: 0,
highlightEditorColors: "yellow=#FFFF98,green=#53FFBC,blue=#80EBFF,pink=#FFCBE6,red=#FF4F5F", highlightEditorColors: "yellow=#FFFF98,green=#53FFBC,blue=#80EBFF,pink=#FFCBE6,red=#FF4F5F",
historyUpdateUrl: false, historyUpdateUrl: false,
@@ -1456,7 +1504,6 @@ class BasePreferences {
enableXfa: true, enableXfa: true,
viewerCssTheme: 0 viewerCssTheme: 0
}); });
#prefs = Object.create(null);
#initializedPromise = null; #initializedPromise = null;
constructor() { constructor() {
if (this.constructor === BasePreferences) { if (this.constructor === BasePreferences) {
@@ -1466,16 +1513,13 @@ class BasePreferences {
browserPrefs, browserPrefs,
prefs prefs
}) => { }) => {
const options = Object.create(null); if (AppOptions._checkDisablePreferences()) {
for (const [name, val] of Object.entries(this.#browserDefaults)) { return;
const prefVal = browserPrefs?.[name];
options[name] = typeof prefVal === typeof val ? prefVal : val;
} }
for (const [name, val] of Object.entries(this.#defaults)) { AppOptions.setAll({
const prefVal = prefs?.[name]; ...browserPrefs,
options[name] = this.#prefs[name] = typeof prefVal === typeof val ? prefVal : val; ...prefs
} }, true);
AppOptions.setAll(options, true);
}); });
} }
async _writeToStorage(prefObj) { async _writeToStorage(prefObj) {
@@ -1484,58 +1528,21 @@ class BasePreferences {
async _readFromStorage(prefObj) { async _readFromStorage(prefObj) {
throw new Error("Not implemented: _readFromStorage"); throw new Error("Not implemented: _readFromStorage");
} }
#updatePref({
name,
value
}) {
throw new Error("Not implemented: #updatePref");
}
async reset() { async reset() {
await this.#initializedPromise; await this.#initializedPromise;
const oldPrefs = structuredClone(this.#prefs); AppOptions.setAll(this.#defaults, true);
this.#prefs = Object.create(null); await this._writeToStorage(this.#defaults);
try {
await this._writeToStorage(this.#defaults);
} catch (reason) {
this.#prefs = oldPrefs;
throw reason;
}
} }
async set(name, value) { async set(name, value) {
await this.#initializedPromise; await this.#initializedPromise;
const defaultValue = this.#defaults[name], AppOptions.setAll({
oldPrefs = structuredClone(this.#prefs); [name]: value
if (defaultValue === undefined) { }, true);
throw new Error(`Set preference: "${name}" is undefined.`); await this._writeToStorage(AppOptions.getAll(OptionKind.PREFERENCE));
} else if (value === undefined) {
throw new Error("Set preference: no value is specified.");
}
const valueType = typeof value,
defaultType = typeof defaultValue;
if (valueType !== defaultType) {
if (valueType === "number" && defaultType === "string") {
value = value.toString();
} else {
throw new Error(`Set preference: "${value}" is a ${valueType}, expected a ${defaultType}.`);
}
} else if (valueType === "number" && !Number.isInteger(value)) {
throw new Error(`Set preference: "${value}" must be an integer.`);
}
this.#prefs[name] = value;
try {
await this._writeToStorage(this.#prefs);
} catch (reason) {
this.#prefs = oldPrefs;
throw reason;
}
} }
async get(name) { async get(name) {
await this.#initializedPromise; await this.#initializedPromise;
const defaultValue = this.#defaults[name]; return AppOptions.get(name);
if (defaultValue === undefined) {
throw new Error(`Get preference: "${name}" is undefined.`);
}
return this.#prefs[name] ?? defaultValue;
} }
get initializedPromise() { get initializedPromise() {
return this.#initializedPromise; return this.#initializedPromise;
@@ -3098,13 +3105,19 @@ class Preferences extends BasePreferences {
} }
class ExternalServices extends BaseExternalServices { class ExternalServices extends BaseExternalServices {
async createL10n() { async createL10n() {
return new genericl10n_GenericL10n(AppOptions.get("locale")); return new genericl10n_GenericL10n(AppOptions.get("localeProperties")?.lang);
} }
createScripting() { createScripting() {
return new GenericScripting(AppOptions.get("sandboxBundleSrc")); return new GenericScripting(AppOptions.get("sandboxBundleSrc"));
} }
} }
class MLManager { class MLManager {
async isEnabledFor(_name) {
return false;
}
async deleteModel(_service) {
return null;
}
async guess() { async guess() {
return null; return null;
} }
@@ -8411,6 +8424,9 @@ class AnnotationLayerBuilder {
} }
this.div.hidden = true; this.div.hidden = true;
} }
hasEditableAnnotations() {
return !!this.annotationLayer?.hasEditableAnnotations();
}
#updatePresentationModeState(state) { #updatePresentationModeState(state) {
if (!this.div) { if (!this.div) {
return; return;
@@ -9142,6 +9158,7 @@ class PDFPageView {
#annotationMode = AnnotationMode.ENABLE_FORMS; #annotationMode = AnnotationMode.ENABLE_FORMS;
#enableHWA = false; #enableHWA = false;
#hasRestrictedScaling = false; #hasRestrictedScaling = false;
#isEditing = false;
#layerProperties = null; #layerProperties = null;
#loadingId = null; #loadingId = null;
#previousRotation = null; #previousRotation = null;
@@ -9296,6 +9313,9 @@ class PDFPageView {
this.reset(); this.reset();
this.pdfPage?.cleanup(); this.pdfPage?.cleanup();
} }
hasEditableAnnotations() {
return !!this.annotationLayer?.hasEditableAnnotations();
}
get _textHighlighter() { get _textHighlighter() {
return shadow(this, "_textHighlighter", new TextHighlighter({ return shadow(this, "_textHighlighter", new TextHighlighter({
pageIndex: this.id - 1, pageIndex: this.id - 1,
@@ -9472,6 +9492,19 @@ class PDFPageView {
this._resetZoomLayer(); this._resetZoomLayer();
} }
} }
toggleEditingMode(isEditing) {
if (!this.hasEditableAnnotations()) {
return;
}
this.#isEditing = isEditing;
this.reset({
keepZoomLayer: true,
keepAnnotationLayer: true,
keepAnnotationEditorLayer: true,
keepXfaLayer: true,
keepTextLayer: true
});
}
update({ update({
scale = 0, scale = 0,
rotation = null, rotation = null,
@@ -9822,7 +9855,8 @@ class PDFPageView {
annotationMode: this.#annotationMode, annotationMode: this.#annotationMode,
optionalContentConfigPromise: this._optionalContentConfigPromise, optionalContentConfigPromise: this._optionalContentConfigPromise,
annotationCanvasMap: this._annotationCanvasMap, annotationCanvasMap: this._annotationCanvasMap,
pageColors pageColors,
isEditing: this.#isEditing
}; };
const renderTask = this.renderTask = pdfPage.render(renderContext); const renderTask = this.renderTask = pdfPage.render(renderContext);
renderTask.onContinue = renderContinueCallback; renderTask.onContinue = renderContinueCallback;
@@ -9982,8 +10016,11 @@ class PDFViewer {
#enableHWA = false; #enableHWA = false;
#enableHighlightFloatingButton = false; #enableHighlightFloatingButton = false;
#enablePermissions = false; #enablePermissions = false;
#enableUpdatedAddImage = false;
#eventAbortController = null; #eventAbortController = null;
#mlManager = null; #mlManager = null;
#onPageRenderedCallback = null;
#switchAnnotationEditorModeTimeoutId = null;
#getAllTextInProgress = false; #getAllTextInProgress = false;
#hiddenCopyElement = null; #hiddenCopyElement = null;
#interruptCopyCondition = false; #interruptCopyCondition = false;
@@ -9993,7 +10030,7 @@ class PDFViewer {
#scaleTimeoutId = null; #scaleTimeoutId = null;
#textLayerMode = TextLayerMode.ENABLE; #textLayerMode = TextLayerMode.ENABLE;
constructor(options) { constructor(options) {
const viewerVersion = "4.4.168"; const viewerVersion = "4.5.136";
if (version !== viewerVersion) { if (version !== viewerVersion) {
throw new Error(`The API version "${version}" does not match the Viewer version "${viewerVersion}".`); throw new Error(`The API version "${version}" does not match the Viewer version "${viewerVersion}".`);
} }
@@ -10020,6 +10057,7 @@ class PDFViewer {
this.#annotationEditorMode = options.annotationEditorMode ?? AnnotationEditorType.NONE; this.#annotationEditorMode = options.annotationEditorMode ?? AnnotationEditorType.NONE;
this.#annotationEditorHighlightColors = options.annotationEditorHighlightColors || null; this.#annotationEditorHighlightColors = options.annotationEditorHighlightColors || null;
this.#enableHighlightFloatingButton = options.enableHighlightFloatingButton === true; this.#enableHighlightFloatingButton = options.enableHighlightFloatingButton === true;
this.#enableUpdatedAddImage = options.enableUpdatedAddImage === true;
this.imageResourcesPath = options.imageResourcesPath || ""; this.imageResourcesPath = options.imageResourcesPath || "";
this.enablePrintAutoRotate = options.enablePrintAutoRotate || false; this.enablePrintAutoRotate = options.enablePrintAutoRotate || false;
this.removePageBorders = options.removePageBorders || false; this.removePageBorders = options.removePageBorders || false;
@@ -10425,7 +10463,7 @@ class PDFViewer {
if (pdfDocument.isPureXfa) { if (pdfDocument.isPureXfa) {
console.warn("Warning: XFA-editing is not implemented."); console.warn("Warning: XFA-editing is not implemented.");
} else if (isValidAnnotationEditorMode(mode)) { } else if (isValidAnnotationEditorMode(mode)) {
this.#annotationEditorUIManager = new AnnotationEditorUIManager(this.container, viewer, this.#altTextManager, eventBus, pdfDocument, pageColors, this.#annotationEditorHighlightColors, this.#enableHighlightFloatingButton, this.#mlManager); this.#annotationEditorUIManager = new AnnotationEditorUIManager(this.container, viewer, this.#altTextManager, eventBus, pdfDocument, pageColors, this.#annotationEditorHighlightColors, this.#enableHighlightFloatingButton, this.#enableUpdatedAddImage, this.#mlManager);
eventBus.dispatch("annotationeditoruimanager", { eventBus.dispatch("annotationeditoruimanager", {
source: this, source: this,
uiManager: this.#annotationEditorUIManager uiManager: this.#annotationEditorUIManager
@@ -10584,6 +10622,7 @@ class PDFViewer {
this.viewer.removeAttribute("lang"); this.viewer.removeAttribute("lang");
this.#hiddenCopyElement?.remove(); this.#hiddenCopyElement?.remove();
this.#hiddenCopyElement = null; this.#hiddenCopyElement = null;
this.#cleanupSwitchAnnotationEditorMode();
} }
#ensurePageViewVisible() { #ensurePageViewVisible() {
if (this._scrollMode !== ScrollMode.PAGE) { if (this._scrollMode !== ScrollMode.PAGE) {
@@ -10956,6 +10995,34 @@ class PDFViewer {
location: this._location location: this._location
}); });
} }
#switchToEditAnnotationMode() {
const visible = this._getVisiblePages();
const pagesToRefresh = [];
const {
ids,
views
} = visible;
for (const page of views) {
const {
view
} = page;
if (!view.hasEditableAnnotations()) {
ids.delete(view.id);
continue;
}
pagesToRefresh.push(page);
}
if (pagesToRefresh.length === 0) {
return null;
}
this.renderingQueue.renderHighestPriority({
first: pagesToRefresh[0],
last: pagesToRefresh.at(-1),
views: pagesToRefresh,
ids
});
return ids;
}
containsElement(element) { containsElement(element) {
return this.container.contains(element); return this.container.contains(element);
} }
@@ -11388,6 +11455,16 @@ class PDFViewer {
get containerTopLeft() { get containerTopLeft() {
return this.#containerTopLeft ||= [this.container.offsetTop, this.container.offsetLeft]; return this.#containerTopLeft ||= [this.container.offsetTop, this.container.offsetLeft];
} }
#cleanupSwitchAnnotationEditorMode() {
if (this.#onPageRenderedCallback) {
this.eventBus._off("pagerendered", this.#onPageRenderedCallback);
this.#onPageRenderedCallback = null;
}
if (this.#switchAnnotationEditorModeTimeoutId !== null) {
clearTimeout(this.#switchAnnotationEditorModeTimeoutId);
this.#switchAnnotationEditorModeTimeoutId = null;
}
}
get annotationEditorMode() { get annotationEditorMode() {
return this.#annotationEditorUIManager ? this.#annotationEditorMode : AnnotationEditorType.DISABLE; return this.#annotationEditorUIManager ? this.#annotationEditorMode : AnnotationEditorType.DISABLE;
} }
@@ -11408,12 +11485,47 @@ class PDFViewer {
if (!this.pdfDocument) { if (!this.pdfDocument) {
return; return;
} }
this.#annotationEditorMode = mode; const {
this.eventBus.dispatch("annotationeditormodechanged", { eventBus
source: this, } = this;
mode const updater = () => {
}); this.#cleanupSwitchAnnotationEditorMode();
this.#annotationEditorUIManager.updateMode(mode, editId, isFromKeyboard); this.#annotationEditorMode = mode;
this.#annotationEditorUIManager.updateMode(mode, editId, isFromKeyboard);
eventBus.dispatch("annotationeditormodechanged", {
source: this,
mode
});
};
if (mode === AnnotationEditorType.NONE || this.#annotationEditorMode === AnnotationEditorType.NONE) {
const isEditing = mode !== AnnotationEditorType.NONE;
if (!isEditing) {
this.pdfDocument.annotationStorage.resetModifiedIds();
}
for (const pageView of this._pages) {
pageView.toggleEditingMode(isEditing);
}
const idsToRefresh = this.#switchToEditAnnotationMode();
if (isEditing && idsToRefresh) {
this.#cleanupSwitchAnnotationEditorMode();
this.#onPageRenderedCallback = ({
pageNumber
}) => {
idsToRefresh.delete(pageNumber);
if (idsToRefresh.size === 0) {
this.#switchAnnotationEditorModeTimeoutId = setTimeout(updater, 0);
}
};
const {
signal
} = this.#eventAbortController;
eventBus._on("pagerendered", this.#onPageRenderedCallback, {
signal
});
return;
}
}
updater();
} }
set annotationEditorParams({ set annotationEditorParams({
type, type,
@@ -11721,7 +11833,7 @@ class SecondaryToolbar {
class Toolbar { class Toolbar {
#opts; #opts;
constructor(options, eventBus) { constructor(options, eventBus, toolbarDensity = 0) {
this.#opts = options; this.#opts = options;
this.eventBus = eventBus; this.eventBus = eventBus;
const buttons = [{ const buttons = [{
@@ -11806,8 +11918,13 @@ class Toolbar {
break; break;
} }
}); });
eventBus._on("toolbardensity", this.#updateToolbarDensity.bind(this));
this.#updateToolbarDensity({
value: toolbarDensity
});
this.reset(); this.reset();
} }
#updateToolbarDensity() {}
#setAnnotationEditorUIManager(uiManager, parentContainer) { #setAnnotationEditorUIManager(uiManager, parentContainer) {
const colorPicker = new ColorPicker({ const colorPicker = new ColorPicker({
uiManager uiManager
@@ -12078,7 +12195,6 @@ class ViewHistory {
const FORCE_PAGES_LOADED_TIMEOUT = 10000; const FORCE_PAGES_LOADED_TIMEOUT = 10000;
const WHEEL_ZOOM_DISABLED_TIMEOUT = 1000;
const ViewOnLoad = { const ViewOnLoad = {
UNKNOWN: -1, UNKNOWN: -1,
PREVIOUS: 0, PREVIOUS: 0,
@@ -12110,18 +12226,17 @@ const PDFViewerApplication = {
store: null, store: null,
downloadManager: null, downloadManager: null,
overlayManager: null, overlayManager: null,
preferences: null, preferences: new Preferences(),
toolbar: null, toolbar: null,
secondaryToolbar: null, secondaryToolbar: null,
eventBus: null, eventBus: null,
l10n: null, l10n: null,
annotationEditorParams: null, annotationEditorParams: null,
isInitialViewSet: false, isInitialViewSet: false,
downloadComplete: false,
isViewerEmbedded: window.parent !== window, isViewerEmbedded: window.parent !== window,
url: "", url: "",
baseUrl: "", baseUrl: "",
_allowedGlobalEventsPromise: null, mlManager: null,
_downloadUrl: "", _downloadUrl: "",
_eventBusAbortController: null, _eventBusAbortController: null,
_windowAbortController: null, _windowAbortController: null,
@@ -12141,11 +12256,9 @@ const PDFViewerApplication = {
_printAnnotationStoragePromise: null, _printAnnotationStoragePromise: null,
_touchInfo: null, _touchInfo: null,
_isCtrlKeyDown: false, _isCtrlKeyDown: false,
_nimbusDataPromise: null,
_caretBrowsing: null, _caretBrowsing: null,
_isScrolling: false, _isScrolling: false,
async initialize(appConfig) { async initialize(appConfig) {
let l10nPromise;
this.appConfig = appConfig; this.appConfig = appConfig;
try { try {
await this.preferences.initializedPromise; await this.preferences.initializedPromise;
@@ -12167,8 +12280,7 @@ const PDFViewerApplication = {
if (mode) { if (mode) {
document.documentElement.classList.add(mode); document.documentElement.classList.add(mode);
} }
l10nPromise = this.externalServices.createL10n(); this.l10n = await this.externalServices.createL10n();
this.l10n = await l10nPromise;
document.getElementsByTagName("html")[0].dir = this.l10n.getDirection(); document.getElementsByTagName("html")[0].dir = this.l10n.getDirection();
this.l10n.translate(appConfig.appContainer || document.documentElement); this.l10n.translate(appConfig.appContainer || document.documentElement);
if (this.isViewerEmbedded && AppOptions.get("externalLinkTarget") === LinkTarget.NONE) { if (this.isViewerEmbedded && AppOptions.get("externalLinkTarget") === LinkTarget.NONE) {
@@ -12257,7 +12369,9 @@ const PDFViewerApplication = {
} }
} }
if (params.has("locale")) { if (params.has("locale")) {
AppOptions.set("locale", params.get("locale")); AppOptions.set("localeProperties", {
lang: params.get("locale")
});
} }
}, },
async _initializeViewerComponents() { async _initializeViewerComponents() {
@@ -12318,6 +12432,7 @@ const PDFViewerApplication = {
annotationEditorMode, annotationEditorMode,
annotationEditorHighlightColors: AppOptions.get("highlightEditorColors"), annotationEditorHighlightColors: AppOptions.get("highlightEditorColors"),
enableHighlightFloatingButton: AppOptions.get("enableHighlightFloatingButton"), enableHighlightFloatingButton: AppOptions.get("enableHighlightFloatingButton"),
enableUpdatedAddImage: AppOptions.get("enableUpdatedAddImage"),
imageResourcesPath: AppOptions.get("imageResourcesPath"), imageResourcesPath: AppOptions.get("imageResourcesPath"),
enablePrintAutoRotate: AppOptions.get("enablePrintAutoRotate"), enablePrintAutoRotate: AppOptions.get("enablePrintAutoRotate"),
maxCanvasPixels: AppOptions.get("maxCanvasPixels"), maxCanvasPixels: AppOptions.get("maxCanvasPixels"),
@@ -12355,9 +12470,6 @@ const PDFViewerApplication = {
} }
if (appConfig.annotationEditorParams) { if (appConfig.annotationEditorParams) {
if (annotationEditorMode !== AnnotationEditorType.DISABLE) { if (annotationEditorMode !== AnnotationEditorType.DISABLE) {
if (AppOptions.get("enableStampEditor")) {
appConfig.toolbar?.editorStampButton?.classList.remove("hidden");
}
const editorHighlightButton = appConfig.toolbar?.editorHighlightButton; const editorHighlightButton = appConfig.toolbar?.editorHighlightButton;
if (editorHighlightButton && AppOptions.get("enableHighlightEditor")) { if (editorHighlightButton && AppOptions.get("enableHighlightEditor")) {
editorHighlightButton.hidden = false; editorHighlightButton.hidden = false;
@@ -12380,7 +12492,7 @@ const PDFViewerApplication = {
}); });
} }
if (appConfig.toolbar) { if (appConfig.toolbar) {
this.toolbar = new Toolbar(appConfig.toolbar, eventBus); this.toolbar = new Toolbar(appConfig.toolbar, eventBus, AppOptions.get("toolbarDensity"));
} }
if (appConfig.secondaryToolbar) { if (appConfig.secondaryToolbar) {
this.secondaryToolbar = new SecondaryToolbar(appConfig.secondaryToolbar, eventBus); this.secondaryToolbar = new SecondaryToolbar(appConfig.secondaryToolbar, eventBus);
@@ -12437,7 +12549,6 @@ const PDFViewerApplication = {
} }
}, },
async run(config) { async run(config) {
this.preferences = new Preferences();
await this.initialize(config); await this.initialize(config);
const { const {
appConfig, appConfig,
@@ -12514,9 +12625,6 @@ const PDFViewerApplication = {
get externalServices() { get externalServices() {
return shadow(this, "externalServices", new ExternalServices()); return shadow(this, "externalServices", new ExternalServices());
}, },
get mlManager() {
return shadow(this, "mlManager", AppOptions.get("enableML") === true ? new MLManager() : null);
},
get initialized() { get initialized() {
return this._initializedCapability.settled; return this._initializedCapability.settled;
}, },
@@ -12597,12 +12705,10 @@ const PDFViewerApplication = {
let title = pdfjs_getPdfFilenameFromUrl(url, ""); let title = pdfjs_getPdfFilenameFromUrl(url, "");
if (!title) { if (!title) {
try { try {
title = decodeURIComponent(getFilenameFromUrl(url)) || url; title = decodeURIComponent(getFilenameFromUrl(url));
} catch { } catch {}
title = url;
}
} }
this.setTitle(title); this.setTitle(title || url);
}, },
setTitle(title = this._title) { setTitle(title = this._title) {
this._title = title; this._title = title;
@@ -12648,7 +12754,6 @@ const PDFViewerApplication = {
this.pdfLinkService.externalLinkEnabled = true; this.pdfLinkService.externalLinkEnabled = true;
this.store = null; this.store = null;
this.isInitialViewSet = false; this.isInitialViewSet = false;
this.downloadComplete = false;
this.url = ""; this.url = "";
this.baseUrl = ""; this.baseUrl = "";
this._downloadUrl = ""; this._downloadUrl = "";
@@ -12724,9 +12829,7 @@ const PDFViewerApplication = {
async download(options = {}) { async download(options = {}) {
let data; let data;
try { try {
if (this.downloadComplete) { data = await this.pdfDocument.getData();
data = await this.pdfDocument.getData();
}
} catch {} } catch {}
this.downloadManager.download(data, this._downloadUrl, this._docFilename, options); this.downloadManager.download(data, this._downloadUrl, this._docFilename, options);
}, },
@@ -12793,11 +12896,8 @@ const PDFViewerApplication = {
return message; return message;
}, },
progress(level) { progress(level) {
if (!this.loadingBar || this.downloadComplete) {
return;
}
const percent = Math.round(level * 100); const percent = Math.round(level * 100);
if (percent <= this.loadingBar.percent) { if (!this.loadingBar || percent <= this.loadingBar.percent) {
return; return;
} }
this.loadingBar.percent = percent; this.loadingBar.percent = percent;
@@ -12811,7 +12911,6 @@ const PDFViewerApplication = {
length length
}) => { }) => {
this._contentLength = length; this._contentLength = length;
this.downloadComplete = true;
this.loadingBar?.hide(); this.loadingBar?.hide();
firstPagePromise.then(() => { firstPagePromise.then(() => {
this.eventBus.dispatch("documentloaded", { this.eventBus.dispatch("documentloaded", {
@@ -13413,9 +13512,6 @@ const PDFViewerApplication = {
}); });
} }
addWindowResolutionChange(); addWindowResolutionChange();
window.addEventListener("visibilitychange", webViewerVisibilityChange, {
signal
});
window.addEventListener("wheel", webViewerWheel, { window.addEventListener("wheel", webViewerWheel, {
passive: false, passive: false,
signal signal
@@ -13730,7 +13826,7 @@ function webViewerHashchange(evt) {
} }
} }
{ {
/*var webViewerFileInputChange = function (evt) { var webViewerFileInputChange = function (evt) {
if (PDFViewerApplication.pdfViewer?.isInPresentationMode) { if (PDFViewerApplication.pdfViewer?.isInPresentationMode) {
return; return;
} }
@@ -13742,7 +13838,7 @@ function webViewerHashchange(evt) {
}; };
var webViewerOpenFile = function (evt) { var webViewerOpenFile = function (evt) {
PDFViewerApplication._openFileInput?.click(); PDFViewerApplication._openFileInput?.click();
};*/ };
} }
function webViewerPresentationMode() { function webViewerPresentationMode() {
PDFViewerApplication.requestPresentationMode(); PDFViewerApplication.requestPresentationMode();
@@ -13876,20 +13972,6 @@ function webViewerPageChanging({
function webViewerResolutionChange(evt) { function webViewerResolutionChange(evt) {
PDFViewerApplication.pdfViewer.refresh(); PDFViewerApplication.pdfViewer.refresh();
} }
function webViewerVisibilityChange(evt) {
if (document.visibilityState === "visible") {
setZoomDisabledTimeout();
}
}
let zoomDisabledTimeout = null;
function setZoomDisabledTimeout() {
if (zoomDisabledTimeout) {
clearTimeout(zoomDisabledTimeout);
}
zoomDisabledTimeout = setTimeout(function () {
zoomDisabledTimeout = null;
}, WHEEL_ZOOM_DISABLED_TIMEOUT);
}
function webViewerWheel(evt) { function webViewerWheel(evt) {
const { const {
pdfViewer, pdfViewer,
@@ -13907,7 +13989,7 @@ function webViewerWheel(evt) {
const origin = [evt.clientX, evt.clientY]; const origin = [evt.clientX, evt.clientY];
if (isPinchToZoom || evt.ctrlKey && supportsMouseWheelZoomCtrlKey || evt.metaKey && supportsMouseWheelZoomMetaKey) { if (isPinchToZoom || evt.ctrlKey && supportsMouseWheelZoomCtrlKey || evt.metaKey && supportsMouseWheelZoomMetaKey) {
evt.preventDefault(); evt.preventDefault();
if (PDFViewerApplication._isScrolling || zoomDisabledTimeout || document.visibilityState === "hidden" || PDFViewerApplication.overlayManager.active) { if (PDFViewerApplication._isScrolling || document.visibilityState === "hidden" || PDFViewerApplication.overlayManager.active) {
return; return;
} }
if (isPinchToZoom && supportsPinchToZoom) { if (isPinchToZoom && supportsPinchToZoom) {
@@ -14335,14 +14417,20 @@ function webViewerReportTelemetry({
}) { }) {
PDFViewerApplication.externalServices.reportTelemetry(details); PDFViewerApplication.externalServices.reportTelemetry(details);
} }
function webViewerSetPreference({
name,
value
}) {
PDFViewerApplication.preferences.set(name, value);
}
;// CONCATENATED MODULE: ./web/viewer.js ;// CONCATENATED MODULE: ./web/viewer.js
const pdfjsVersion = "4.4.168"; const pdfjsVersion = "4.5.136";
const pdfjsBuild = "19fbc8998"; const pdfjsBuild = "3a21f03b0";
const AppConstants = { const AppConstants = {
LinkTarget: LinkTarget, LinkTarget: LinkTarget,
RenderingStates: RenderingStates, RenderingStates: RenderingStates,

View File

@@ -49,7 +49,7 @@ function elementSorter(a, b) {
return 0; return 0;
} }
// Generic control/related handler to show/hide fields based on a checkbox' value // Generic control/related handler to show/hide fields based on a 'checkbox' value
// e.g. // e.g.
// <input type="checkbox" data-control="stuff-to-show"> // <input type="checkbox" data-control="stuff-to-show">
// <div data-related="stuff-to-show">...</div> // <div data-related="stuff-to-show">...</div>
@@ -63,7 +63,7 @@ $(document).on("change", "input[type=\"checkbox\"][data-control]", function () {
}); });
}); });
// Generic control/related handler to show/hide fields based on a select' value // Generic control/related handler to show/hide fields based on a 'select' value
$(document).on("change", "select[data-control]", function() { $(document).on("change", "select[data-control]", function() {
var $this = $(this); var $this = $(this);
var name = $this.data("control"); var name = $this.data("control");
@@ -79,7 +79,7 @@ $(document).on("change", "select[data-control]", function() {
} }
}); });
// Generic control/related handler to show/hide fields based on a select' value // Generic control/related handler to show/hide fields based on a 'select' value
// this one is made to show all values if select value is not 0 // this one is made to show all values if select value is not 0
$(document).on("change", "select[data-controlall]", function() { $(document).on("change", "select[data-controlall]", function() {
var $this = $(this); var $this = $(this);
@@ -130,8 +130,13 @@ $(".container-fluid").bind('drop', function (e) {
} }
}); });
if (dt.files.length) { if (dt.files.length) {
$("#btn-upload")[0].files = dt.files; if($("#btn-upload-format").length) {
$("#form-upload").submit(); $("#btn-upload-format")[0].files = dt.files;
$("#form-upload-format").submit();
} else {
$("#btn-upload")[0].files = dt.files;
$("#form-upload").submit();
}
} }
} }
}); });
@@ -140,12 +145,25 @@ $("#btn-upload").change(function() {
$("#form-upload").submit(); $("#form-upload").submit();
}); });
$("#btn-upload-format").change(function() {
$("#form-upload-format").submit();
});
$("#form-upload").uploadprogress({ $("#form-upload").uploadprogress({
redirect_url: getPath() + "/", //"{{ url_for('web.index')}}", redirect_url: getPath() + "/",
uploadedMsg: $("#form-upload").data("message"), //"{{_('Upload done, processing, please wait...')}}", uploadedMsg: $("#form-upload").data("message"),
modalTitle: $("#form-upload").data("title"), //"{{_('Uploading...')}}", modalTitle: $("#form-upload").data("title"),
modalFooter: $("#form-upload").data("footer"), //"{{_('Close')}}", modalFooter: $("#form-upload").data("footer"),
modalTitleFailed: $("#form-upload").data("failed") //"{{_('Error')}}" modalTitleFailed: $("#form-upload").data("failed")
});
$("#form-upload-format").uploadprogress({
redirect_url: getPath() + "/",
uploadedMsg: $("#form-upload-format").data("message"),
modalTitle: $("#form-upload-format").data("title"),
modalFooter: $("#form-upload-format").data("footer"),
modalTitleFailed: $("#form-upload-format").data("failed")
}); });
$(document).ready(function() { $(document).ready(function() {
@@ -604,6 +622,7 @@ $(function() {
}); });
$("#toggle_order_shelf").click(function() { $("#toggle_order_shelf").click(function() {
$("#toggle_order_shelf").toggleClass("dummy");
$("#new").toggleClass("disabled"); $("#new").toggleClass("disabled");
$("#old").toggleClass("disabled"); $("#old").toggleClass("disabled");
$("#asc").toggleClass("disabled"); $("#asc").toggleClass("disabled");
@@ -612,9 +631,20 @@ $(function() {
$("#auth_za").toggleClass("disabled"); $("#auth_za").toggleClass("disabled");
$("#pub_new").toggleClass("disabled"); $("#pub_new").toggleClass("disabled");
$("#pub_old").toggleClass("disabled"); $("#pub_old").toggleClass("disabled");
$("#shelf_new").toggleClass("disabled");
$("#shelf_old").toggleClass("disabled");
var alternative_text = $("#toggle_order_shelf").data('alt-text'); var alternative_text = $("#toggle_order_shelf").data('alt-text');
var status = $("#toggle_order_shelf").hasClass("dummy") ? "on" : "off";
$("#toggle_order_shelf").data('alt-text', $("#toggle_order_shelf").html()); $("#toggle_order_shelf").data('alt-text', $("#toggle_order_shelf").html());
$("#toggle_order_shelf").html(alternative_text); $("#toggle_order_shelf").html(alternative_text);
$.ajax({
method:"post",
contentType: "application/json; charset=utf-8",
dataType: "json",
url: getPath() + "/ajax/view",
data: "{\"shelf\": {\"man\": \"" + status + "\"}}",
});
}); });
$("#btndeluser").click(function() { $("#btndeluser").click(function() {
@@ -696,20 +726,20 @@ $(function() {
url: getPath() + "/ajax/simulatedbchange", url: getPath() + "/ajax/simulatedbchange",
data: {config_calibre_dir: $("#config_calibre_dir").val(), csrf_token: $("input[name='csrf_token']").val()}, data: {config_calibre_dir: $("#config_calibre_dir").val(), csrf_token: $("input[name='csrf_token']").val()},
success: function success(data) { success: function success(data) {
if ( data.change ) { if ( !data.valid ) {
if ( data.valid ) { $("#InvalidDialog").modal('show');
}
else{
if ( data.change ) {
confirmDialog( confirmDialog(
"db_submit", "db_submit",
"GeneralChangeModal", "GeneralChangeModal",
0, 0,
changeDbSettings changeDbSettings
); );
} } else {
else {
$("#InvalidDialog").modal('show');
}
} else {
changeDbSettings(); changeDbSettings();
}
} }
} }
}); });

View File

@@ -1,54 +0,0 @@
/**
* waits until queue is finished, meaning the book is done loading
* @param callback
*/
function qFinished(callback){
let timeout=setInterval(()=>{
if(reader.rendition.q.running===undefined)
clearInterval(timeout);
callback();
},300
)
}
function calculateProgress(){
let data=reader.rendition.location.end;
return Math.round(epub.locations.percentageFromCfi(data.cfi)*100);
}
// register new event emitter locationchange that fires on urlchange
// source: https://stackoverflow.com/a/52809105/21941129
(() => {
let oldPushState = history.pushState;
history.pushState = function pushState() {
let ret = oldPushState.apply(this, arguments);
window.dispatchEvent(new Event('locationchange'));
return ret;
};
let oldReplaceState = history.replaceState;
history.replaceState = function replaceState() {
let ret = oldReplaceState.apply(this, arguments);
window.dispatchEvent(new Event('locationchange'));
return ret;
};
window.addEventListener('popstate', () => {
window.dispatchEvent(new Event('locationchange'));
});
})();
window.addEventListener('locationchange',()=>{
let newPos=calculateProgress();
progressDiv.textContent=newPos+"%";
});
var epub=ePub(calibre.bookUrl)
let progressDiv=document.getElementById("progress");
qFinished(()=>{
epub.locations.generate().then(()=> {
window.dispatchEvent(new Event('locationchange'))
});
})

View File

@@ -52,6 +52,32 @@ var reader;
} }
}); });
// Update progress percentage
let progressDiv = document.getElementById("progress");
reader.book.ready.then((()=>{
let locations_key = reader.book.key()+'-locations';
let stored_locations = localStorage.getItem(locations_key);
let make_locations, save_locations;
if (stored_locations) {
make_locations = Promise.resolve(reader.book.locations.load(stored_locations));
// No-op because locations are already saved
save_locations = ()=>{};
} else {
make_locations = reader.book.locations.generate();
save_locations = ()=>{
localStorage.setItem(locations_key, reader.book.locations.save());
};
}
make_locations.then(()=>{
reader.rendition.on('relocated', (location)=>{
let percentage = Math.round(location.end.percentage*100);
progressDiv.textContent=percentage+"%";
});
reader.rendition.reportLocation();
progressDiv.style.visibility = "visible";
}).then(save_locations);
}));
/** /**
* @param {string} action - Add or remove bookmark * @param {string} action - Add or remove bookmark
* @param {string|int} location - Location or zero * @param {string|int} location - Location or zero
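The progress block added above generates epub.js reading locations once per book and caches them in localStorage, so the percentage display is available on later visits without re-running locations.generate(). A condensed sketch of the same cache-or-generate pattern, assuming the epub.js book, rendition and progressDiv objects used in the code above; everything else is illustrative:

// Hypothetical standalone version of the caching pattern used above.
let locationsKey = book.key() + "-locations";            // per-book cache key, as in the code above
let stored = localStorage.getItem(locationsKey);
let ready = stored
    ? Promise.resolve(book.locations.load(stored))       // reuse previously generated locations
    : book.locations.generate().then(() =>
          localStorage.setItem(locationsKey, book.locations.save()));   // generate once, then cache
ready.then(() => {
    rendition.on("relocated", (location) => {
        // location.end.percentage is a fraction between 0 and 1
        progressDiv.textContent = Math.round(location.end.percentage * 100) + "%";
    });
    rendition.reportLocation();                          // force one report so the display fills immediately
});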

View File

@@ -0,0 +1,21 @@
// register new event emitter locationchange that fires on urlchange
// source: https://stackoverflow.com/a/52809105/21941129
(() => {
let oldPushState = history.pushState;
history.pushState = function pushState() {
let ret = oldPushState.apply(this, arguments);
window.dispatchEvent(new Event('locationchange'));
return ret;
};
let oldReplaceState = history.replaceState;
history.replaceState = function replaceState() {
let ret = oldReplaceState.apply(this, arguments);
window.dispatchEvent(new Event('locationchange'));
return ret;
};
window.addEventListener('popstate', () => {
window.dispatchEvent(new Event('locationchange'));
});
})();
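The new helper above patches history.pushState and history.replaceState so that in-page navigation also dispatches a synthetic locationchange event (popstate covers back/forward navigation). Consumers only need a regular listener; a short sketch with an illustrative handler:

// Hypothetical consumer of the locationchange event dispatched above.
window.addEventListener("locationchange", () => {
    console.log("URL is now", window.location.href);
});
history.pushState({}, "", "#chapter-2");   // goes through the patched pushState and fires the listener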

View File

@@ -685,6 +685,17 @@ function ratingFormatter(value, row) {
return (value/2); return (value/2);
} }
function seriesIndexFormatter(value, row) {
if (!value) {
return value;
}
formated_value = Number(value).toFixed(2);
if (formated_value.endsWith(".00")) {
formated_value = parseInt(formated_value).toString();
}
return formated_value;
}
/* Do some hiding disabling after user list is loaded */ /* Do some hiding disabling after user list is loaded */
function loadSuccess() { function loadSuccess() {
@@ -849,6 +860,7 @@ function BookCheckboxChange(checkbox, userId, field) {
}, },
success: handleListServerResponse success: handleListServerResponse
}); });
console.log("test");
} }
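Among the table.js changes above, seriesIndexFormatter is a bootstrap-table cell formatter that drops a trailing ".00" from whole series indices (1.00 renders as 1, while 1.5 renders as 1.50). A sketch of how such a formatter is typically attached to a column; the table id and field name below are illustrative, not taken from this commit:

// Hypothetical column definition wiring up the formatter added above.
$("#books-list").bootstrapTable({
    columns: [
        { field: "series_index", title: "Series Index", formatter: seriesIndexFormatter }
    ]
});
// seriesIndexFormatter("1.00") -> "1";  seriesIndexFormatter("1.5") -> "1.50"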

View File

@@ -1,8 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web) # This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2019 decentral1se # Copyright (C) 2024 OzzieIsaacs
# #
# This program is free software: you can redistribute it and/or modify # This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by # it under the terms of the GNU General Public License as published by
@@ -16,28 +15,9 @@
# #
# You should have received a copy of the GNU General Public License # You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>. # along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# """Calibre-web distribution package setuptools installer."""
from setuptools import setup
import os
import re import re
import codecs
here = os.path.abspath(os.path.dirname(__file__))
def read(*parts): def strip_whitespaces(text):
with codecs.open(os.path.join(here, *parts), 'r') as fp: return re.sub(r"(^[\s\u200B-\u200D\ufeff]+)|([\s\u200B-\u200D\ufeff]+$)","", text)
return fp.read()
def find_version(*file_paths):
version_file = read(*file_paths)
version_match = re.search(r"^STABLE_VERSION\s+=\s+{['\"]version['\"]:\s*['\"](.*)['\"]}",
version_file, re.M)
if version_match:
return version_match.group(1)
raise RuntimeError("Unable to find version string.")
setup(
version=find_version("src", "calibreweb", "cps", "constants.py")
)

View File

@@ -1,339 +1,355 @@
 # -*- coding: utf-8 -*-
 # This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
 # Copyright (C) 2020 pwr
 #
 # This program is free software: you can redistribute it and/or modify
 # it under the terms of the GNU General Public License as published by
 # the Free Software Foundation, either version 3 of the License, or
 # (at your option) any later version.
 #
 # This program is distributed in the hope that it will be useful,
 # but WITHOUT ANY WARRANTY; without even the implied warranty of
 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 # GNU General Public License for more details.
 #
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.

 import os
 import re
-from glob import glob
+import glob
 from shutil import copyfile, copyfileobj
 from markupsafe import escape
 from time import time
 from uuid import uuid4

 from sqlalchemy.exc import SQLAlchemyError
 from flask_babel import lazy_gettext as N_

 from cps.services.worker import CalibreTask
-from cps import db
+from cps import db, app
 from cps import logger, config
 from cps.subproc_wrapper import process_open
 from flask_babel import gettext as _
 from cps.kobo_sync_status import remove_synced_book
 from cps.ub import init_db_thread
 from cps.file_helper import get_temp_dir
 from cps.tasks.mail import TaskEmail
 from cps import gdriveutils, helper
 from cps.constants import SUPPORTED_CALIBRE_BINARIES
+from cps.string_helper import strip_whitespaces

 log = logger.create()

 current_milli_time = lambda: int(round(time() * 1000))


 class TaskConvert(CalibreTask):
     def __init__(self, file_path, book_id, task_message, settings, ereader_mail, user=None):
         super(TaskConvert, self).__init__(task_message)
         self.worker_thread = None
         self.file_path = file_path
         self.book_id = book_id
         self.title = ""
         self.settings = settings
         self.ereader_mail = ereader_mail
         self.user = user

         self.results = dict()
def run(self, worker_thread):
self.worker_thread = worker_thread def run(self, worker_thread):
if config.config_use_google_drive: df_cover = None
worker_db = db.CalibreDB(expire_on_commit=False, init=True) cur_book = None
cur_book = worker_db.get_book(self.book_id) self.worker_thread = worker_thread
self.title = cur_book.title if config.config_use_google_drive:
data = worker_db.get_book_format(self.book_id, self.settings['old_book_format']) with app.app_context():
df = gdriveutils.getFileFromEbooksFolder(cur_book.path, worker_db = db.CalibreDB(app)
data.name + "." + self.settings['old_book_format'].lower()) cur_book = worker_db.get_book(self.book_id)
df_cover = gdriveutils.getFileFromEbooksFolder(cur_book.path, "cover.jpg") self.title = cur_book.title
if df: data = worker_db.get_book_format(self.book_id, self.settings['old_book_format'])
datafile_cover = None df = gdriveutils.getFileFromEbooksFolder(cur_book.path,
datafile = os.path.join(config.get_book_path(), data.name + "." + self.settings['old_book_format'].lower())
cur_book.path, df_cover = gdriveutils.getFileFromEbooksFolder(cur_book.path, "cover.jpg")
data.name + "." + self.settings['old_book_format'].lower()) if df:
if df_cover: datafile_cover = None
datafile_cover = os.path.join(config.get_book_path(), datafile = os.path.join(config.get_book_path(),
cur_book.path, "cover.jpg") cur_book.path,
if not os.path.exists(os.path.join(config.get_book_path(), cur_book.path)): data.name + "." + self.settings['old_book_format'].lower())
os.makedirs(os.path.join(config.get_book_path(), cur_book.path)) if df_cover:
df.GetContentFile(datafile) datafile_cover = os.path.join(config.get_book_path(),
if df_cover: cur_book.path, "cover.jpg")
df_cover.GetContentFile(datafile_cover) if not os.path.exists(os.path.join(config.get_book_path(), cur_book.path)):
worker_db.session.close() os.makedirs(os.path.join(config.get_book_path(), cur_book.path))
else: df.GetContentFile(datafile)
# ToDo Include cover in error handling if df_cover:
error_message = _("%(format)s not found on Google Drive: %(fn)s", df_cover.GetContentFile(datafile_cover)
format=self.settings['old_book_format'], # worker_db.session.close()
fn=data.name + "." + self.settings['old_book_format'].lower()) else:
worker_db.session.close() # ToDo Include cover in error handling
return self._handleError(error_message) error_message = _("%(format)s not found on Google Drive: %(fn)s",
format=self.settings['old_book_format'],
filename = self._convert_ebook_format() fn=data.name + "." + self.settings['old_book_format'].lower())
if config.config_use_google_drive: # worker_db.session.close()
os.remove(self.file_path + '.' + self.settings['old_book_format'].lower()) return self._handleError(error_message)
if df_cover:
os.remove(os.path.join(config.config_calibre_dir, cur_book.path, "cover.jpg")) filename = self._convert_ebook_format()
if config.config_use_google_drive:
if filename: os.remove(self.file_path + '.' + self.settings['old_book_format'].lower())
if config.config_use_google_drive: if df_cover:
# Upload files to gdrive os.remove(os.path.join(config.config_calibre_dir, cur_book.path, "cover.jpg"))
gdriveutils.updateGdriveCalibreFromLocal()
self._handleSuccess() if filename:
if self.ereader_mail: if config.config_use_google_drive:
# if we're sending to E-Reader after converting, create a one-off task and run it immediately # Upload files to gdrive
# todo: figure out how to incorporate this into the progress gdriveutils.updateGdriveCalibreFromLocal()
try: self._handleSuccess()
EmailText = N_(u"%(book)s send to E-Reader", book=escape(self.title)) if self.ereader_mail:
for email in self.ereader_mail.split(','): # if we're sending to E-Reader after converting, create a one-off task and run it immediately
email = email.strip() # todo: figure out how to incorporate this into the progress
worker_thread.add(self.user, TaskEmail(self.settings['subject'], try:
self.results["path"], EmailText = N_(u"%(book)s send to E-Reader", book=escape(self.title))
filename, for email in self.ereader_mail.split(','):
self.settings, email = strip_whitespaces(email)
email, worker_thread.add(self.user, TaskEmail(self.settings['subject'],
EmailText, self.results["path"],
self.settings['body'], filename,
id=self.book_id, self.settings,
internal=True) email,
) EmailText,
except Exception as ex: self.settings['body'],
return self._handleError(str(ex)) id=self.book_id,
internal=True)
def _convert_ebook_format(self): )
error_message = None except Exception as ex:
local_db = db.CalibreDB(expire_on_commit=False, init=True) return self._handleError(str(ex))
file_path = self.file_path
book_id = self.book_id def _convert_ebook_format(self):
format_old_ext = '.' + self.settings['old_book_format'].lower() error_message = None
format_new_ext = '.' + self.settings['new_book_format'].lower() with app.app_context():
local_db = db.CalibreDB(app)
# check to see if destination format already exists - or if book is in database file_path = self.file_path
# if it does - mark the conversion task as complete and return a success book_id = self.book_id
# this will allow to send to E-Reader workflow to continue to work format_old_ext = '.' + self.settings['old_book_format'].lower()
if os.path.isfile(file_path + format_new_ext) or\ format_new_ext = '.' + self.settings['new_book_format'].lower()
local_db.get_book_format(self.book_id, self.settings['new_book_format']):
log.info("Book id %d already converted to %s", book_id, format_new_ext) # check to see if destination format already exists - or if book is in database
cur_book = local_db.get_book(book_id) # if it does - mark the conversion task as complete and return a success
self.title = cur_book.title # this will allow to send to E-Reader workflow to continue to work
self.results['path'] = cur_book.path if os.path.isfile(file_path + format_new_ext) or\
self.results['title'] = self.title local_db.get_book_format(self.book_id, self.settings['new_book_format']):
new_format = local_db.session.query(db.Data).filter(db.Data.book == book_id)\ log.info("Book id %d already converted to %s", book_id, format_new_ext)
.filter(db.Data.format == self.settings['new_book_format'].upper()).one_or_none() cur_book = local_db.get_book(book_id)
if not new_format: self.title = cur_book.title
new_format = db.Data(name=os.path.basename(file_path), self.results['path'] = cur_book.path
book_format=self.settings['new_book_format'].upper(), self.results['title'] = self.title
book=book_id, uncompressed_size=os.path.getsize(file_path + format_new_ext)) new_format = local_db.session.query(db.Data).filter(db.Data.book == book_id)\
try: .filter(db.Data.format == self.settings['new_book_format'].upper()).one_or_none()
local_db.session.merge(new_format) if not new_format:
local_db.session.commit() new_format = db.Data(name=os.path.basename(file_path),
except SQLAlchemyError as e: book_format=self.settings['new_book_format'].upper(),
local_db.session.rollback() book=book_id, uncompressed_size=os.path.getsize(file_path + format_new_ext))
log.error("Database error: %s", e) try:
local_db.session.close() local_db.session.merge(new_format)
self._handleError(N_("Oops! Database Error: %(error)s.", error=e)) local_db.session.commit()
return except SQLAlchemyError as e:
self._handleSuccess() local_db.session.rollback()
local_db.session.close() log.error("Database error: %s", e)
return os.path.basename(file_path + format_new_ext) local_db.session.close()
else: self._handleError(N_("Oops! Database Error: %(error)s.", error=e))
log.info("Book id %d - target format of %s does not exist. Moving forward with convert.", return
book_id, self._handleSuccess()
format_new_ext) local_db.session.close()
return os.path.basename(file_path + format_new_ext)
if config.config_kepubifypath and format_old_ext == '.epub' and format_new_ext == '.kepub': else:
check, error_message = self._convert_kepubify(file_path, log.info("Book id %d - target format of %s does not exist. Moving forward with convert.",
format_old_ext, book_id,
format_new_ext) format_new_ext)
else:
# check if calibre converter-executable is existing if config.config_kepubifypath and format_old_ext == '.epub' and format_new_ext == '.kepub':
if not os.path.exists(config.config_converterpath): check, error_message = self._convert_kepubify(file_path,
self._handleError(N_("Calibre ebook-convert %(tool)s not found", tool=config.config_converterpath)) format_old_ext,
return format_new_ext)
has_cover = local_db.get_book(book_id).has_cover else:
check, error_message = self._convert_calibre(file_path, format_old_ext, format_new_ext, has_cover) # check if calibre converter-executable is existing
if not os.path.exists(config.config_converterpath):
if check == 0: self._handleError(N_("Calibre ebook-convert %(tool)s not found", tool=config.config_converterpath))
cur_book = local_db.get_book(book_id) return
if os.path.isfile(file_path + format_new_ext): has_cover = local_db.get_book(book_id).has_cover
new_format = local_db.session.query(db.Data).filter(db.Data.book == book_id) \ check, error_message = self._convert_calibre(file_path, format_old_ext, format_new_ext, has_cover)
.filter(db.Data.format == self.settings['new_book_format'].upper()).one_or_none()
if not new_format: if check == 0:
new_format = db.Data(name=cur_book.data[0].name, cur_book = local_db.get_book(book_id)
book_format=self.settings['new_book_format'].upper(), if os.path.isfile(file_path + format_new_ext):
book=book_id, uncompressed_size=os.path.getsize(file_path + format_new_ext)) new_format = local_db.session.query(db.Data).filter(db.Data.book == book_id) \
try: .filter(db.Data.format == self.settings['new_book_format'].upper()).one_or_none()
local_db.session.merge(new_format) if not new_format:
local_db.session.commit() new_format = db.Data(name=cur_book.data[0].name,
if self.settings['new_book_format'].upper() in ['KEPUB', 'EPUB', 'EPUB3']: book_format=self.settings['new_book_format'].upper(),
ub_session = init_db_thread() book=book_id, uncompressed_size=os.path.getsize(file_path + format_new_ext))
remove_synced_book(book_id, True, ub_session) try:
ub_session.close() local_db.session.merge(new_format)
except SQLAlchemyError as e: local_db.session.commit()
local_db.session.rollback() if self.settings['new_book_format'].upper() in ['KEPUB', 'EPUB', 'EPUB3']:
log.error("Database error: %s", e) ub_session = init_db_thread()
local_db.session.close() remove_synced_book(book_id, True, ub_session)
self._handleError(error_message) ub_session.close()
return except SQLAlchemyError as e:
self.results['path'] = cur_book.path local_db.session.rollback()
self.title = cur_book.title log.error("Database error: %s", e)
self.results['title'] = self.title local_db.session.close()
if not config.config_use_google_drive: self._handleError(error_message)
self._handleSuccess() return
return os.path.basename(file_path + format_new_ext) self.results['path'] = cur_book.path
else: self.title = cur_book.title
error_message = N_('%(format)s format not found on disk', format=format_new_ext.upper()) self.results['title'] = self.title
local_db.session.close() if not config.config_use_google_drive:
log.info("ebook converter failed with error while converting book") self._handleSuccess()
if not error_message: return os.path.basename(file_path + format_new_ext)
error_message = N_('Ebook converter failed with unknown error') else:
else: error_message = N_('%(format)s format not found on disk', format=format_new_ext.upper())
log.error(error_message) local_db.session.close()
self._handleError(error_message) log.info("ebook converter failed with error while converting book")
return if not error_message:
error_message = N_('Ebook converter failed with unknown error')
def _convert_kepubify(self, file_path, format_old_ext, format_new_ext): else:
if config.config_embed_metadata and config.config_binariesdir: log.error(error_message)
tmp_dir, temp_file_name = helper.do_calibre_export(self.book_id, format_old_ext[1:]) self._handleError(error_message)
filename = os.path.join(tmp_dir, temp_file_name + format_old_ext) return
temp_file_path = tmp_dir
else: def _convert_kepubify(self, file_path, format_old_ext, format_new_ext):
filename = file_path + format_old_ext if config.config_embed_metadata and config.config_binariesdir:
temp_file_path = os.path.dirname(file_path) tmp_dir, temp_file_name = helper.do_calibre_export(self.book_id, format_old_ext[1:])
quotes = [1, 3] filename = os.path.join(tmp_dir, temp_file_name + format_old_ext)
command = [config.config_kepubifypath, filename, '-o', temp_file_path, '-i'] temp_file_path = tmp_dir
try: else:
p = process_open(command, quotes) filename = file_path + format_old_ext
except OSError as e: temp_file_path = os.path.dirname(file_path)
return 1, N_("Kepubify-converter failed: %(error)s", error=e) quotes = [1, 3]
self.progress = 0.01 command = [config.config_kepubifypath, filename, '-o', temp_file_path, '-i']
while True: try:
nextline = p.stdout.readlines() p = process_open(command, quotes)
nextline = [x.strip('\n') for x in nextline if x != '\n'] except OSError as e:
for line in nextline: return 1, N_("Kepubify-converter failed: %(error)s", error=e)
log.debug(line) self.progress = 0.01
if p.poll() is not None: while True:
break nextline = p.stdout.readlines()
nextline = [x.strip('\n') for x in nextline if x != '\n']
# process returncode for line in nextline:
check = p.returncode log.debug(line)
if p.poll() is not None:
# move file break
if check == 0:
converted_file = glob(os.path.splitext(filename)[0] + "*.kepub.epub") # process returncode
if len(converted_file) == 1: check = p.returncode
copyfile(converted_file[0], (file_path + format_new_ext))
os.unlink(converted_file[0]) # move file
else: if check == 0:
return 1, N_("Converted file not found or more than one file in folder %(folder)s", converted_file = glob.glob(glob.escape(os.path.splitext(filename)[0]) + "*.kepub.epub")
folder=os.path.dirname(file_path)) if len(converted_file) == 1:
return check, None copyfile(converted_file[0], (file_path + format_new_ext))
os.unlink(converted_file[0])
def _convert_calibre(self, file_path, format_old_ext, format_new_ext, has_cover): else:
path_tmp_opf = None return 1, N_("Converted file not found or more than one file in folder %(folder)s",
try: folder=os.path.dirname(file_path))
# path_tmp_opf = self._embed_metadata() return check, None
if config.config_embed_metadata:
quotes = [5] def _convert_calibre(self, file_path, format_old_ext, format_new_ext, has_cover):
tmp_dir = get_temp_dir() path_tmp_opf = None
calibredb_binarypath = os.path.join(config.config_binariesdir, SUPPORTED_CALIBRE_BINARIES["calibredb"]) try:
my_env = os.environ.copy() # path_tmp_opf = self._embed_metadata()
if config.config_calibre_split: if config.config_embed_metadata:
my_env['CALIBRE_OVERRIDE_DATABASE_PATH'] = os.path.join(config.config_calibre_dir, "metadata.db") quotes = [5]
library_path = config.config_calibre_split_dir tmp_dir = get_temp_dir()
else: calibredb_binarypath = os.path.join(config.config_binariesdir, SUPPORTED_CALIBRE_BINARIES["calibredb"])
library_path = config.config_calibre_dir my_env = os.environ.copy()
if config.config_calibre_split:
opf_command = [calibredb_binarypath, 'show_metadata', '--as-opf', str(self.book_id), my_env['CALIBRE_OVERRIDE_DATABASE_PATH'] = os.path.join(config.config_calibre_dir, "metadata.db")
'--with-library', library_path] library_path = config.config_calibre_split_dir
p = process_open(opf_command, quotes, my_env) else:
p.wait() library_path = config.config_calibre_dir
check = p.returncode
calibre_traceback = p.stderr.readlines() opf_command = [calibredb_binarypath, 'show_metadata', '--as-opf', str(self.book_id),
if check == 0: '--with-library', library_path]
path_tmp_opf = os.path.join(tmp_dir, "metadata_" + str(uuid4()) + ".opf") p = process_open(opf_command, quotes, my_env, newlines=False)
with open(path_tmp_opf, 'w') as fd: lines = list()
copyfileobj(p.stdout, fd) while p.poll() is None:
else: lines.append(p.stdout.readline())
error_message = "" check = p.returncode
for ele in calibre_traceback: calibre_traceback = p.stderr.readlines()
if not ele.startswith('Traceback') and not ele.startswith(' File'): if check == 0:
error_message = N_("Calibre failed with error: %(error)s", error=ele) path_tmp_opf = os.path.join(tmp_dir, "metadata_" + str(uuid4()) + ".opf")
return check, error_message with open(path_tmp_opf, 'wb') as fd:
quotes = [1, 2, 4, 6] fd.write(b''.join(lines))
command = [config.config_converterpath, (file_path + format_old_ext), else:
(file_path + format_new_ext)] error_message = ""
if config.config_embed_metadata: for ele in calibre_traceback:
command.extend(['--from-opf', path_tmp_opf]) if not ele.startswith('Traceback') and not ele.startswith(' File'):
if has_cover: error_message = N_("Calibre failed with error: %(error)s", error=ele)
command.extend(['--cover', os.path.join(os.path.dirname(file_path), 'cover.jpg')]) return check, error_message
quotes_index = 3 quotes = [1, 2]
if config.config_calibre: quotes_index = 3
parameters = config.config_calibre.split(" ") command = [config.config_converterpath, (file_path + format_old_ext),
for param in parameters: (file_path + format_new_ext)]
command.append(param) if config.config_embed_metadata:
quotes.append(quotes_index) quotes.append(4)
quotes_index += 1 quotes_index = 5
command.extend(['--from-opf', path_tmp_opf])
p = process_open(command, quotes, newlines=False) if has_cover:
except OSError as e: quotes.append(6)
return 1, N_("Ebook-converter failed: %(error)s", error=e) command.extend(['--cover', os.path.join(os.path.dirname(file_path), 'cover.jpg')])
quotes_index = 7
while p.poll() is None: if config.config_calibre:
nextline = p.stdout.readline() parameters = re.findall(r"(--[\w-]+)(?:(\s(?:(\".+\")|(?:.+?)))(?:\s|$))?",
if isinstance(nextline, bytes): config.config_calibre, re.IGNORECASE | re.UNICODE)
nextline = nextline.decode('utf-8', errors="ignore").strip('\r\n') if parameters:
if nextline: for param in parameters:
log.debug(nextline) command.append(strip_whitespaces(param[0]))
# parse progress string from calibre-converter quotes_index += 1
progress = re.search(r"(\d+)%\s.*", nextline) if param[1] != "":
if progress: parsed = strip_whitespaces(param[1]).strip("\"")
self.progress = int(progress.group(1)) / 100 command.append(parsed)
if config.config_use_google_drive: quotes.append(quotes_index)
self.progress *= 0.9 quotes_index += 1
p = process_open(command, quotes, newlines=False)
# process returncode except OSError as e:
check = p.returncode return 1, N_("Ebook-converter failed: %(error)s", error=e)
calibre_traceback = p.stderr.readlines()
error_message = "" while p.poll() is None:
for ele in calibre_traceback: nextline = p.stdout.readline()
ele = ele.decode('utf-8', errors="ignore").strip('\n') if isinstance(nextline, bytes):
log.debug(ele) nextline = nextline.decode('utf-8', errors="ignore").strip('\r\n')
if not ele.startswith('Traceback') and not ele.startswith(' File'): if nextline:
error_message = N_("Calibre failed with error: %(error)s", error=ele) log.debug(nextline)
return check, error_message # parse progress string from calibre-converter
progress = re.search(r"(\d+)%\s.*", nextline)
@property if progress:
def name(self): self.progress = int(progress.group(1)) / 100
return N_("Convert") if config.config_use_google_drive:
self.progress *= 0.9
def __str__(self):
if self.ereader_mail: # process returncode
return "Convert Book {} and mail it to {}".format(self.book_id, self.ereader_mail) check = p.returncode
else: calibre_traceback = p.stderr.readlines()
return "Convert Book {}".format(self.book_id) error_message = ""
for ele in calibre_traceback:
@property ele = ele.decode('utf-8', errors="ignore").strip('\n')
def is_cancellable(self): log.debug(ele)
return False if not ele.startswith('Traceback') and not ele.startswith(' File'):
error_message = N_("Calibre failed with error: %(error)s", error=ele)
return check, error_message
@property
def name(self):
return N_("Convert")
def __str__(self):
if self.ereader_mail:
return "Convert Book {} and mail it to {}".format(self.book_id, self.ereader_mail)
else:
return "Convert Book {}".format(self.book_id)
@property
def is_cancellable(self):
return False
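
One change buried in the kepubify branch of this file is the converted-file lookup, which now escapes the base path before globbing. A small sketch of why glob.escape matters when a book folder contains brackets; the paths here are hypothetical.

import glob

base = "/library/Author/Title [signed] (123)/Title [signed]"
# without escaping, '[signed]' is read as a character class, so the literal file name never matches
print(glob.glob(base + "*.kepub.epub"))
# with glob.escape the special characters are treated literally and the file is found
print(glob.glob(glob.escape(base) + "*.kepub.epub"))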

View File

@@ -18,7 +18,7 @@
 from flask_babel import lazy_gettext as N_

-from cps import config, logger, db, ub
+from cps import config, logger, db, ub, app
 from cps.services.worker import CalibreTask

@@ -26,11 +26,13 @@ class TaskReconnectDatabase(CalibreTask):
     def __init__(self, task_message=N_('Reconnecting Calibre database')):
         super(TaskReconnectDatabase, self).__init__(task_message)
         self.log = logger.create()
-        self.calibre_db = db.CalibreDB(expire_on_commit=False, init=True)
+        # self.calibre_db = db.CalibreDB(expire_on_commit=False, init=True)

     def run(self, worker_thread):
-        self.calibre_db.reconnect_db(config, ub.app_DB_path)
-        self.calibre_db.session.close()
+        with app.app_context():
+            calibre_db = db.CalibreDB(app)
+            calibre_db.reconnect_db(config, ub.app_DB_path)
+            # self.calibre_db.session.close()
         self._handleSuccess()

     @property

View File

@@ -25,7 +25,7 @@ import mimetypes
 from io import StringIO
 from email.message import EmailMessage
-from email.utils import formatdate, parseaddr
+from email.utils import formatdate, parseaddr, make_msgid
 from email.generator import Generator

 from flask_babel import lazy_gettext as N_

@@ -34,7 +34,8 @@ from cps.services import gmail
 from cps.embed_helper import do_calibre_export
 from cps import logger, config
 from cps import gdriveutils
-import uuid
+from cps.string_helper import strip_whitespaces

 log = logger.create()

@@ -54,8 +55,8 @@ class EmailBase:
         return (code, resp)

     def send(self, strg):
-        """Send `strg' to the server."""
-        log.debug_no_auth('send: {}'.format(strg[:300]))
+        """Send 'strg' to the server."""
+        log.debug_no_auth('send: {}'.format(strg[:300]), stacklevel=2)
         if hasattr(self, 'sock') and self.sock:
             try:
                 if self.transferSize:

@@ -101,7 +102,7 @@ class Email(EmailBase, smtplib.SMTP):
         smtplib.SMTP.__init__(self, *args, **kwargs)


-# Class for sending ssl encrypted email with ability to get current progress, , derived from emailbase class
+# Class for sending ssl encrypted email with ability to get current progress, derived from emailbase class
 class EmailSSL(EmailBase, smtplib.SMTP_SSL):
     def __init__(self, *args, **kwargs):

@@ -127,9 +128,9 @@ class TaskEmail(CalibreTask):
         try:
             # Parse out the address from the From line, and then the domain from that
             from_email = parseaddr(self.settings["mail_from"])[1]
-            msgid_domain = from_email.partition('@')[2].strip()
+            msgid_domain = strip_whitespaces(from_email.partition('@')[2])
             # This can sometimes sneak through parseaddr if the input is malformed
-            msgid_domain = msgid_domain.rstrip('>').strip()
+            msgid_domain = strip_whitespaces(msgid_domain.rstrip('>'))
         except Exception:
             msgid_domain = ''
         return msgid_domain or 'calibre-web.com'

@@ -141,7 +142,7 @@ class TaskEmail(CalibreTask):
         message['To'] = self.recipient
         message['Subject'] = self.subject
         message['Date'] = formatdate(localtime=True)
-        message['Message-Id'] = "{}@{}".format(uuid.uuid4(), self.get_msgid_domain())
+        message['Message-ID'] = make_msgid(domain=self.get_msgid_domain())
         message.set_content(self.text.encode('UTF-8'), "text", "plain")
         if self.attachment:
             data = self._get_attachment(self.filepath, self.attachment)

@@ -168,10 +169,14 @@ class TaskEmail(CalibreTask):
             else:
                 self.send_gmail_email(msg)
         except MemoryError as e:
-            log.error_or_exception(e, stacklevel=3)
+            log.error_or_exception(e, stacklevel=2)
             self._handleError('MemoryError sending e-mail: {}'.format(str(e)))
+        except (smtplib.SMTPRecipientsRefused) as e:
+            log.error_or_exception(e, stacklevel=2)
+            self._handleError('Smtplib Error sending e-mail: {}'.format(
+                (list(e.args[0].values())[0][1]).decode('utf-8)').replace("\n", '. ')))
         except (smtplib.SMTPException, smtplib.SMTPAuthenticationError) as e:
-            log.error_or_exception(e, stacklevel=3)
+            log.error_or_exception(e, stacklevel=2)
             if hasattr(e, "smtp_error"):
                 text = e.smtp_error.decode('utf-8').replace("\n", '. ')
             elif hasattr(e, "message"):

@@ -182,10 +187,10 @@ class TaskEmail(CalibreTask):
                 text = ''
             self._handleError('Smtplib Error sending e-mail: {}'.format(text))
         except (socket.error) as e:
-            log.error_or_exception(e, stacklevel=3)
+            log.error_or_exception(e, stacklevel=2)
             self._handleError('Socket Error sending e-mail: {}'.format(e.strerror))
         except Exception as ex:
-            log.error_or_exception(ex, stacklevel=3)
+            log.error_or_exception(ex, stacklevel=2)
             self._handleError('Error sending e-mail: {}'.format(ex))

     def send_standard_email(self, msg):

@@ -268,7 +273,7 @@ class TaskEmail(CalibreTask):
                 if config.config_binariesdir and config.config_embed_metadata:
                     os.remove(datafile)
         except IOError as e:
-            log.error_or_exception(e, stacklevel=3)
+            log.error_or_exception(e, stacklevel=2)
             log.error('The requested file could not be read. Maybe wrong permissions?')
             return None
         return data
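
A quick sketch of what the Message-ID change above buys, using only the standard library; the domain value below is hypothetical. make_msgid() returns an RFC 5322-style identifier already wrapped in angle brackets, which the old uuid-based string lacked.

from email.utils import make_msgid
import uuid

domain = "calibre-web.com"                       # stand-in for get_msgid_domain()
print("{}@{}".format(uuid.uuid4(), domain))      # old style: 1b2c...@calibre-web.com (no angle brackets)
print(make_msgid(domain=domain))                 # new style: <175...@calibre-web.com> (bracketed, unique)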

View File

@@ -19,7 +19,7 @@
 import os
 from lxml import etree

-from cps import config, db, gdriveutils, logger
+from cps import config, db, gdriveutils, logger, app
 from cps.services.worker import CalibreTask
 from flask_babel import lazy_gettext as N_

@@ -34,7 +34,7 @@ class TaskBackupMetadata(CalibreTask):
                  task_message=N_('Backing up Metadata')):
         super(TaskBackupMetadata, self).__init__(task_message)
         self.log = logger.create()
-        self.calibre_db = db.CalibreDB(expire_on_commit=False, init=True)
+        # self.calibre_db = db.CalibreDB(expire_on_commit=False, init=True)
         self.export_language = export_language
         self.translated_title = translated_title
         self.set_dirty = set_dirty

@@ -46,47 +46,51 @@ class TaskBackupMetadata(CalibreTask):
             self.backup_metadata()

     def set_all_books_dirty(self):
-        try:
-            books = self.calibre_db.session.query(db.Books).all()
-            for book in books:
-                self.calibre_db.set_metadata_dirty(book.id)
-            self.calibre_db.session.commit()
-            self._handleSuccess()
-        except Exception as ex:
-            self.log.debug('Error adding book for backup: ' + str(ex))
-            self._handleError('Error adding book for backup: ' + str(ex))
-            self.calibre_db.session.rollback()
-        self.calibre_db.session.close()
+        with app.app_context():
+            calibre_dbb = db.CalibreDB(app)
+            try:
+                books = calibre_dbb.session.query(db.Books).all()
+                for book in books:
+                    calibre_dbb.set_metadata_dirty(book.id)
+                calibre_dbb.session.commit()
+                self._handleSuccess()
+            except Exception as ex:
+                self.log.debug('Error adding book for backup: ' + str(ex))
+                self._handleError('Error adding book for backup: ' + str(ex))
+                calibre_dbb.session.rollback()
+            # self.calibre_db.session.close()

     def backup_metadata(self):
-        try:
-            metadata_backup = self.calibre_db.session.query(db.Metadata_Dirtied).all()
-            custom_columns = (self.calibre_db.session.query(db.CustomColumns)
-                              .filter(db.CustomColumns.mark_for_delete == 0)
-                              .filter(db.CustomColumns.datatype.notin_(db.cc_exceptions))
-                              .order_by(db.CustomColumns.label).all())
-            count = len(metadata_backup)
-            i = 0
-            for backup in metadata_backup:
-                book = self.calibre_db.session.query(db.Books).filter(db.Books.id == backup.book).one_or_none()
-                self.calibre_db.session.query(db.Metadata_Dirtied).filter(
-                    db.Metadata_Dirtied.book == backup.book).delete()
-                self.calibre_db.session.commit()
-                if book:
-                    self.open_metadata(book, custom_columns)
-                else:
-                    self.log.error("Book {} not found in database".format(backup.book))
-                i += 1
-                self.progress = (1.0 / count) * i
-            self._handleSuccess()
-            self.calibre_db.session.close()
-        except Exception as ex:
-            b = "NaN" if not hasattr(book, 'id') else book.id
-            self.log.debug('Error creating metadata backup for book {}: '.format(b) + str(ex))
-            self._handleError('Error creating metadata backup: ' + str(ex))
-            self.calibre_db.session.rollback()
-            self.calibre_db.session.close()
+        with app.app_context():
+            try:
+                calibre_dbb = db.CalibreDB(app)
+                metadata_backup = calibre_dbb.session.query(db.Metadata_Dirtied).all()
+                custom_columns = (calibre_dbb.session.query(db.CustomColumns)
+                                  .filter(db.CustomColumns.mark_for_delete == 0)
+                                  .filter(db.CustomColumns.datatype.notin_(db.cc_exceptions))
+                                  .order_by(db.CustomColumns.label).all())
+                count = len(metadata_backup)
+                i = 0
+                for backup in metadata_backup:
+                    book = calibre_dbb.session.query(db.Books).filter(db.Books.id == backup.book).one_or_none()
+                    calibre_dbb.session.query(db.Metadata_Dirtied).filter(
+                        db.Metadata_Dirtied.book == backup.book).delete()
+                    calibre_dbb.session.commit()
+                    if book:
+                        self.open_metadata(book, custom_columns)
+                    else:
+                        self.log.error("Book {} not found in database".format(backup.book))
+                    i += 1
+                    self.progress = (1.0 / count) * i
+                self._handleSuccess()
+                # self.calibre_db.session.close()
+            except Exception as ex:
+                b = "NaN" if not hasattr(book, 'id') else book.id
+                self.log.debug('Error creating metadata backup for book {}: '.format(b) + str(ex))
+                self._handleError('Error creating metadata backup: ' + str(ex))
+                calibre_dbb.session.rollback()
+                # self.calibre_db.session.close()

     def open_metadata(self, book, custom_columns):
         # package = self.create_new_metadata_backup(book, custom_columns)
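
The recurring pattern in these task changes is to build the Calibre database handle inside a Flask application context for the lifetime of the task, instead of keeping a long-lived session on the task object. A minimal sketch of that pattern, reusing only the calls visible in the diff above (the query is a placeholder):

from cps import app, db

def run_task_body():
    # open the library connection only for the duration of this task
    with app.app_context():
        calibre_db = db.CalibreDB(app)
        books = calibre_db.session.query(db.Books).all()  # per-task queries go here
        return len(books)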

View File

@@ -20,11 +20,11 @@ import os
 from shutil import copyfile, copyfileobj
 from urllib.request import urlopen
 from io import BytesIO
+from datetime import datetime, timezone

 from .. import constants
-from cps import config, db, fs, gdriveutils, logger, ub
+from cps import config, db, fs, gdriveutils, logger, ub, app
 from cps.services.worker import CalibreTask, STAT_CANCELLED, STAT_ENDED
-from datetime import datetime
 from sqlalchemy import func, text, or_
 from flask_babel import lazy_gettext as N_

@@ -36,7 +36,7 @@ except (ImportError, RuntimeError) as e:

 def get_resize_height(resolution):
-    return int(225 * resolution)
+    return int(255 * resolution)


 def get_resize_width(resolution, original_width, original_height):

@@ -73,7 +73,8 @@ class TaskGenerateCoverThumbnails(CalibreTask):
         self.cache = fs.FileSystem()
         self.resolutions = [
             constants.COVER_THUMBNAIL_SMALL,
-            constants.COVER_THUMBNAIL_MEDIUM
+            constants.COVER_THUMBNAIL_MEDIUM,
+            constants.COVER_THUMBNAIL_LARGE
         ]

     def run(self, worker_thread):
@@ -113,9 +114,10 @@ class TaskGenerateCoverThumbnails(CalibreTask):
@staticmethod @staticmethod
def get_books_with_covers(book_id=-1): def get_books_with_covers(book_id=-1):
filter_exp = (db.Books.id == book_id) if book_id != -1 else True filter_exp = (db.Books.id == book_id) if book_id != -1 else True
calibre_db = db.CalibreDB(expire_on_commit=False, init=True) with app.app_context():
books_cover = calibre_db.session.query(db.Books).filter(db.Books.has_cover == 1).filter(filter_exp).all() calibre_db = db.CalibreDB(app) #, expire_on_commit=False, init=True)
calibre_db.session.close() books_cover = calibre_db.session.query(db.Books).filter(db.Books.has_cover == 1).filter(filter_exp).all()
# calibre_db.session.close()
return books_cover return books_cover
def get_book_cover_thumbnails(self, book_id): def get_book_cover_thumbnails(self, book_id):
@@ -123,7 +125,7 @@ class TaskGenerateCoverThumbnails(CalibreTask):
.query(ub.Thumbnail) \ .query(ub.Thumbnail) \
.filter(ub.Thumbnail.type == constants.THUMBNAIL_TYPE_COVER) \ .filter(ub.Thumbnail.type == constants.THUMBNAIL_TYPE_COVER) \
.filter(ub.Thumbnail.entity_id == book_id) \ .filter(ub.Thumbnail.entity_id == book_id) \
.filter(or_(ub.Thumbnail.expiration.is_(None), ub.Thumbnail.expiration > datetime.utcnow())) \ .filter(or_(ub.Thumbnail.expiration.is_(None), ub.Thumbnail.expiration > datetime.now(timezone.utc))) \
.all() .all()
def create_book_cover_thumbnails(self, book): def create_book_cover_thumbnails(self, book):
@@ -165,7 +167,7 @@ class TaskGenerateCoverThumbnails(CalibreTask):
self.app_db_session.rollback() self.app_db_session.rollback()
def update_book_cover_thumbnail(self, book, thumbnail): def update_book_cover_thumbnail(self, book, thumbnail):
thumbnail.generated_at = datetime.utcnow() thumbnail.generated_at = datetime.now(timezone.utc)
try: try:
self.app_db_session.commit() self.app_db_session.commit()
@@ -197,9 +199,11 @@ class TaskGenerateCoverThumbnails(CalibreTask):
                     img.format = thumbnail.format
                     img.save(filename=filename)
                 else:
-                    with open(filename, 'rb') as fd:
+                    stream.seek(0)
+                    with open(filename, 'wb') as fd:
                         copyfileobj(stream, fd)
         except Exception as ex:
             # Bubble exception to calling function
             self.log.debug('Error generating thumbnail file: ' + str(ex))
@@ -244,7 +248,7 @@ class TaskGenerateSeriesThumbnails(CalibreTask):
super(TaskGenerateSeriesThumbnails, self).__init__(task_message) super(TaskGenerateSeriesThumbnails, self).__init__(task_message)
self.log = logger.create() self.log = logger.create()
self.app_db_session = ub.get_new_session_instance() self.app_db_session = ub.get_new_session_instance()
self.calibre_db = db.CalibreDB(expire_on_commit=False, init=True) # self.calibre_db = db.CalibreDB(expire_on_commit=False, init=True)
self.cache = fs.FileSystem() self.cache = fs.FileSystem()
self.resolutions = [ self.resolutions = [
constants.COVER_THUMBNAIL_SMALL, constants.COVER_THUMBNAIL_SMALL,
@@ -252,58 +256,60 @@ class TaskGenerateSeriesThumbnails(CalibreTask):
] ]
def run(self, worker_thread): def run(self, worker_thread):
if self.calibre_db.session and use_IM and self.stat != STAT_CANCELLED and self.stat != STAT_ENDED: with app.app_context():
self.message = 'Scanning Series' calibre_db = db.CalibreDB(app)
all_series = self.get_series_with_four_plus_books() if calibre_db.session and use_IM and self.stat != STAT_CANCELLED and self.stat != STAT_ENDED:
count = len(all_series) self.message = 'Scanning Series'
all_series = self.get_series_with_four_plus_books(calibre_db)
count = len(all_series)
total_generated = 0 total_generated = 0
for i, series in enumerate(all_series): for i, series in enumerate(all_series):
generated = 0 generated = 0
series_thumbnails = self.get_series_thumbnails(series.id) series_thumbnails = self.get_series_thumbnails(series.id)
series_books = self.get_series_books(series.id) series_books = self.get_series_books(series.id, calibre_db)
# Generate new thumbnails for missing covers # Generate new thumbnails for missing covers
resolutions = list(map(lambda t: t.resolution, series_thumbnails)) resolutions = list(map(lambda t: t.resolution, series_thumbnails))
missing_resolutions = list(set(self.resolutions).difference(resolutions)) missing_resolutions = list(set(self.resolutions).difference(resolutions))
for resolution in missing_resolutions: for resolution in missing_resolutions:
generated += 1
self.create_series_thumbnail(series, series_books, resolution)
# Replace outdated or missing thumbnails
for thumbnail in series_thumbnails:
if any(book.last_modified > thumbnail.generated_at for book in series_books):
generated += 1 generated += 1
self.update_series_thumbnail(series_books, thumbnail) self.create_series_thumbnail(series, series_books, resolution)
elif not self.cache.get_cache_file_exists(thumbnail.filename, constants.CACHE_TYPE_THUMBNAILS): # Replace outdated or missing thumbnails
generated += 1 for thumbnail in series_thumbnails:
self.update_series_thumbnail(series_books, thumbnail) if any(book.last_modified > thumbnail.generated_at for book in series_books):
generated += 1
self.update_series_thumbnail(series_books, thumbnail)
# Increment the progress elif not self.cache.get_cache_file_exists(thumbnail.filename, constants.CACHE_TYPE_THUMBNAILS):
self.progress = (1.0 / count) * i generated += 1
self.update_series_thumbnail(series_books, thumbnail)
if generated > 0: # Increment the progress
total_generated += generated self.progress = (1.0 / count) * i
self.message = N_('Generated {0} series thumbnails').format(total_generated)
# Check if job has been cancelled or ended if generated > 0:
if self.stat == STAT_CANCELLED: total_generated += generated
self.log.info(f'GenerateSeriesThumbnails task has been cancelled.') self.message = N_('Generated {0} series thumbnails').format(total_generated)
return
if self.stat == STAT_ENDED: # Check if job has been cancelled or ended
self.log.info(f'GenerateSeriesThumbnails task has been ended.') if self.stat == STAT_CANCELLED:
return self.log.info(f'GenerateSeriesThumbnails task has been cancelled.')
return
if total_generated == 0: if self.stat == STAT_ENDED:
self.self_cleanup = True self.log.info(f'GenerateSeriesThumbnails task has been ended.')
return
self._handleSuccess() if total_generated == 0:
self.app_db_session.remove() self.self_cleanup = True
def get_series_with_four_plus_books(self): self._handleSuccess()
return self.calibre_db.session \ self.app_db_session.remove()
def get_series_with_four_plus_books(self, calibre_db):
return calibre_db.session \
.query(db.Series) \ .query(db.Series) \
.join(db.books_series_link) \ .join(db.books_series_link) \
.join(db.Books) \ .join(db.Books) \
@@ -312,8 +318,8 @@ class TaskGenerateSeriesThumbnails(CalibreTask):
.having(func.count('book_series_link') > 3) \ .having(func.count('book_series_link') > 3) \
.all() .all()
def get_series_books(self, series_id): def get_series_books(self, series_id, calibre_db):
return self.calibre_db.session \ return calibre_db.session \
.query(db.Books) \ .query(db.Books) \
.join(db.books_series_link) \ .join(db.books_series_link) \
.join(db.Series) \ .join(db.Series) \
@@ -322,12 +328,12 @@ class TaskGenerateSeriesThumbnails(CalibreTask):
.all() .all()
def get_series_thumbnails(self, series_id): def get_series_thumbnails(self, series_id):
return self.app_db_session \ return (self.app_db_session
.query(ub.Thumbnail) \ .query(ub.Thumbnail)
.filter(ub.Thumbnail.type == constants.THUMBNAIL_TYPE_SERIES) \ .filter(ub.Thumbnail.type == constants.THUMBNAIL_TYPE_SERIES)
.filter(ub.Thumbnail.entity_id == series_id) \ .filter(ub.Thumbnail.entity_id == series_id)
.filter(or_(ub.Thumbnail.expiration.is_(None), ub.Thumbnail.expiration > datetime.utcnow())) \ .filter(or_(ub.Thumbnail.expiration.is_(None), ub.Thumbnail.expiration > datetime.now(timezone.utc)))
.all() .all())
def create_series_thumbnail(self, series, series_books, resolution): def create_series_thumbnail(self, series, series_books, resolution):
thumbnail = ub.Thumbnail() thumbnail = ub.Thumbnail()
@@ -346,7 +352,7 @@ class TaskGenerateSeriesThumbnails(CalibreTask):
self.app_db_session.rollback() self.app_db_session.rollback()
def update_series_thumbnail(self, series_books, thumbnail): def update_series_thumbnail(self, series_books, thumbnail):
thumbnail.generated_at = datetime.utcnow() thumbnail.generated_at = datetime.now(timezone.utc)
try: try:
self.app_db_session.commit() self.app_db_session.commit()
@@ -459,13 +465,15 @@ class TaskClearCoverThumbnailCache(CalibreTask):
def run(self, worker_thread): def run(self, worker_thread):
if self.app_db_session: if self.app_db_session:
if self.book_id == 0: # delete superfluous thumbnails # delete superfluous thumbnails
calibre_db = db.CalibreDB(expire_on_commit=False, init=True) if self.book_id == 0:
thumbnails = (calibre_db.session.query(ub.Thumbnail) with app.app_context():
.join(db.Books, ub.Thumbnail.entity_id == db.Books.id, isouter=True) calibre_db = db.CalibreDB(app)
.filter(db.Books.id==None) thumbnails = (calibre_db.session.query(ub.Thumbnail)
.all()) .join(db.Books, ub.Thumbnail.entity_id == db.Books.id, isouter=True)
calibre_db.session.close() .filter(db.Books.id==None)
.all())
# calibre_db.session.close()
elif self.book_id > 0: # make sure single book is selected elif self.book_id > 0: # make sure single book is selected
thumbnails = self.get_thumbnails_for_book(self.book_id) thumbnails = self.get_thumbnails_for_book(self.book_id)
if self.book_id < 0: if self.book_id < 0:
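
The replacement of datetime.utcnow() with datetime.now(timezone.utc) throughout this file switches the thumbnail timestamps from naive to timezone-aware values; a quick illustration of the difference:

from datetime import datetime, timezone

naive = datetime.utcnow()            # tzinfo is None, ambiguous when compared across zones
aware = datetime.now(timezone.utc)   # tzinfo is UTC, safe to compare with other aware datetimes
print(naive.tzinfo, aware.tzinfo)    # None UTC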

View File

@@ -62,18 +62,16 @@
<a class="author-name" href="{{url_for('web.books_list', data='author', sort_param='stored', book_id=author.id) }}">{{author.name.replace('|',',')|shortentitle(30)}}</a> <a class="author-name" href="{{url_for('web.books_list', data='author', sort_param='stored', book_id=author.id) }}">{{author.name.replace('|',',')|shortentitle(30)}}</a>
{% endif %} {% endif %}
{% endfor %} {% endfor %}
{% for format in entry.Books.data %} {% if entry.Books.data|music %}
{% if format.format|lower in g.constants.EXTENSIONS_AUDIO %}
<span class="glyphicon glyphicon-music"></span> <span class="glyphicon glyphicon-music"></span>
{% endif %} {% endif %}
{% endfor %}
</p> </p>
{% if entry.Books.series.__len__() > 0 %} {% if entry.Books.series.__len__() > 0 %}
<p class="series"> <p class="series">
<a href="{{url_for('web.books_list', data='series', sort_param='stored', book_id=entry.Books.series[0].id )}}"> <a href="{{url_for('web.books_list', data='series', sort_param='stored', book_id=entry.Books.series[0].id )}}">
{{entry.Books.series[0].name}} {{entry.Books.series[0].name}}
</a> </a>
({{entry.Books.series_index|formatseriesindex}}) ({{entry.Books.series_index|formatfloat(2)}})
</p> </p>
{% endif %} {% endif %}
{% if entry.Books.ratings.__len__() > 0 %} {% if entry.Books.ratings.__len__() > 0 %}
@@ -124,7 +122,7 @@
<a href="{{url_for('web.books_list', data='series', sort_param='stored', book_id=entry.series[0].id )}}"> <a href="{{url_for('web.books_list', data='series', sort_param='stored', book_id=entry.series[0].id )}}">
{{entry.series[0].name}} {{entry.series[0].name}}
</a> </a>
({{entry.series_index|formatseriesindex}}) ({{entry.series_index|formatfloat(2)}})
</p> </p>
{% endif %} {% endif %}
<div class="rating"> <div class="rating">

View File

@@ -0,0 +1,81 @@
{% extends "basic_layout.html" %}
{% block body %}
<div>
<h2 id="title">{{ entry.title }}</h2>
<div>
{% for author in entry.ordered_authors %}
<p>{{ author.name.replace("|",",") }}</p>
{% endfor %}
</div>
<div class="cover">
<img title="{{ entry.title }}" src="{{ url_for('web.get_cover', book_id=entry.id, resolution='og', c=entry|last_modified) }}"/>
</div>
{% if current_user.role_download() %}
{% if entry.data|length %}
<div>
<h2>Download</h2>
{% for format in entry.data %}
<p>
<a href="{{ url_for('web.download_link', book_id=entry.id, book_format=format.format|lower, anyname=entry.id|string+'.'+format.format|lower) }}">
{{ format.format }} ({{ format.uncompressed_size|filesizeformat }})</a>
</p>
{% endfor %}
</div>
{% endif %}
{% endif %}
<h2>Details</h2>
{% if entry.series|length > 0 %}
<p>{{ _("Book %(index)s of %(range)s", index=entry.series_index | formatfloat(2), range=(entry.series[0].name)|safe) }}</p>
{% endif %}
{% if entry.languages|length > 0 %}
<div>
<p>
<span>
{{_('Language')}}: {% for language in entry.languages %}{{language.language_name}}{% if not loop.last %}, {% endif %}{% endfor %}
</span>
</p>
</div>
{% endif %}
{% if entry.identifiers|length > 0 %}
<div>
<p>
<span></span>
{% for identifier in entry.identifiers %}
<p>{{ identifier.format_type() }}: {{ identifier|escape }}</p>
{% endfor %}
</p>
</div>
{% endif %}
{% if entry.publishers|length > 0 %}
<div>
<p>
<span>{{ _('Publisher') }}:
<span>{{ entry.publishers[0].name }}</span>
</span>
</p>
</div>
{% endif %}
{% if (entry.pubdate|string)[:10] != '0101-01-01' %}
<div>
<p>{{ _('Published') }}: {{ entry.pubdate|formatdate }} </p>
</div>
{% endif %}
{% if entry.comments|length > 0 and entry.comments[0].text|length > 0 %}
<div>
<h2 id="decription">{{ _('Description:') }}</h2>
{{ entry.comments[0].text|safe }}
</div>
{% endif %}
</div>
{% endblock %}

View File

@@ -0,0 +1,32 @@
{% extends "basic_layout.html" %}
{% block body %}
<div class="pagination">
<div>
{% if pagination.has_prev %}
<a href="{{ (pagination.page - 1)|url_for_other_page }}">&laquo; {{_('Previous')}}</a>
{% endif %}
</div>
<div>
{% if pagination.has_next %}
<a href="{{ (pagination.page + 1)|url_for_other_page }}">{{_('Next')}} &raquo;</a>
{% endif %}
</div>
</div>
{% if entries|length < 1 %}
<p>{{_('No Results Found')}}</p>
{% endif %}
{% for entry in entries %}
{% if entry.Books.authors %}
{% set author = entry.Books.authors[0].name.replace('|',',')|shortentitle(30) %}
{% else %}
{% set author = '' %}
{% endif %}
<a href="{{ url_for('basic.show_book', book_id=entry.Books.id) }}">
<p class="listing" title="{{entry.Books.title}}">{{ author }} - {{entry.Books.title|shortentitle}}</p>
</a>
{% endfor %}
{% endblock %}

View File

@@ -0,0 +1,47 @@
<!DOCTYPE html>
<html lang="{{ current_user.locale }}">
<head>
<title>{{instance}} | {{title}}</title>
<meta charset="utf-8">
<meta name='viewport' content='initial-scale=1,maximum-scale=5,user-scalable=no' />
<link href="{{ url_for('static', filename='css/basic.css') }}" rel="stylesheet" media="screen">
</head>
<body>
<div>
<div>
{% if current_user.is_authenticated or g.allow_anonymous %}
<nav>
<a href="{{url_for('basic.index')}}">
<span><h1>{{_('Home')}}</h1></span>
</a>
<div class="search">
<form role="search" action="{{url_for('basic.index')}}" method="GET">
<input type="text" id="query" name="query" placeholder="{{_('Search Library')}}" value="{{ searchterm }}">
<span>
<button type="submit" id="query_submit">{{_('Search')}}</button>
</span>
</form>
</div>
{% if not current_user.is_anonymous %}
<a href="{{url_for('web.logout')}}">
<span>{{_('Logout')}}</span>
</a>
{% endif %}
</nav>
<div class="theme">
<a href="{{url_for('web.index')}}">
<span>{{_('Normal Theme')}}</span>
</a>
</div>
{% endif %}
</div>
</div>
<div class="body">
{% block body %}
{% endblock %}
</div>
</body>
</html>

View File

@@ -47,42 +47,37 @@
</form> </form>
</div> </div>
{% endif %} {% endif %}
{% if current_user.role_upload() and g.allow_upload %}
<div class="text-center more-stuff"><!--h4 aria-label="Upload new book format"></h4-->
<form id="form-upload-format" action="{{ url_for('edit-book.upload') }}" data-title="{{_('Uploading...')}}" data-footer="{{_('Close')}}" data-failed="{{_('Error')}}" data-message="{{_('Upload done, processing, please wait...')}}" method="post" enctype="multipart/form-data">
<div class="text-center">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
<input type="hidden" name="book_id" value="{{ book.id }}">
<div role="group" aria-label="Upload new book format">
<label class="btn btn-primary btn-file" for="btn-upload-format">{{ _('Upload Format') }}</label>
<div class="upload-format-input-text" id="upload-format"></div>
<input id="btn-upload-format" name="btn-upload-format" type="file" accept="{% for format in accept %}.{% if format != ''%}{{format}}{% else %}*{% endif %}{{ ',' if not loop.last }}{% endfor %}" multiple>
</div>
</div>
</form>
</div> </div>
{% endif %}
</div>
<form role="form" action="{{ url_for('edit-book.edit_book', book_id=book.id) }}" method="post" enctype="multipart/form-data" id="book_edit_frm"> <form role="form" action="{{ url_for('edit-book.edit_book', book_id=book.id) }}" method="post" enctype="multipart/form-data" id="book_edit_frm">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"> <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
<div class="col-sm-9 col-xs-12"> <div class="col-sm-9 col-xs-12">
<div class="form-group"> <div class="form-group">
<label for="book_title">{{_('Book Title')}}</label> <label for="title">{{_('Book Title')}}</label>
<input type="text" class="form-control" name="book_title" id="book_title" value="{{book.title}}"> <input type="text" class="form-control" name="title" id="title" value="{{book.title}}">
</div> </div>
<div class="text-center"> <div class="text-center">
<button type="button" class="btn btn-default" id="xchange" ><span class="glyphicon glyphicon-arrow-up"></span><span class="glyphicon glyphicon-arrow-down"></span></button> <button type="button" class="btn btn-default" id="xchange" ><span class="glyphicon glyphicon-arrow-up"></span><span class="glyphicon glyphicon-arrow-down"></span></button>
</div> </div>
<div id="author_div" class="form-group"> <div id="author_div" class="form-group">
<label for="bookAuthor">{{_('Author')}}</label> <label for="bookAuthor">{{_('Author')}}</label>
<input type="text" class="form-control typeahead" autocomplete="off" name="author_name" id="bookAuthor" value="{{' & '.join(authors)}}"> <input type="text" class="form-control typeahead" autocomplete="off" name="authors" id="authors" value="{{' & '.join(authors)}}">
</div> </div>
<div class="form-group">
<label for="description">{{_('Description')}}</label>
<textarea class="form-control" name="description" id="description" rows="7">{% if book.comments %}{{book.comments[0].text}}{%endif%}</textarea>
</div>
<div class="form-group">
<label>{{_('Identifiers')}}</label>
<table class="table" id="identifier-table">
{% for identifier in book.identifiers %}
<tr>
<td><input type="text" class="form-control" name="identifier-type-{{identifier.type}}" value="{{identifier.type}}" required="required" placeholder="{{_('Identifier Type')}}"></td>
<td><input type="text" class="form-control" name="identifier-val-{{identifier.type}}" value="{{identifier.val}}" required="required" placeholder="{{_('Identifier Value')}}"></td>
<td><a class="btn btn-default" onclick="removeIdentifierLine(this)">{{_('Remove')}}</a></td>
</tr>
{% endfor %}
</table>
<a id="add-identifier-line" class="btn btn-default">{{_('Add Identifier')}}</a>
</div>
<div class="form-group"> <div class="form-group">
<label for="tags">{{_('Tags')}}</label> <label for="tags">{{_('Tags')}}</label>
<input type="text" class="form-control typeahead" autocomplete="off" name="tags" id="tags" value="{% for tag in book.tags %}{{tag.name.strip()}}{% if not loop.last %}, {% endif %}{% endfor %}"> <input type="text" class="form-control typeahead" autocomplete="off" name="tags" id="tags" value="{% for tag in book.tags %}{{tag.name.strip()}}{% if not loop.last %}, {% endif %}{% endfor %}">
@@ -93,23 +88,8 @@
</div> </div>
<div class="form-group"> <div class="form-group">
<label for="series_index">{{_('Series ID')}}</label> <label for="series_index">{{_('Series ID')}}</label>
<input type="number" step="0.01" min="0" placeholder="1" class="form-control" name="series_index" id="series_index" value="{{book.series_index}}"> <input type="number" step="0.01" min="0" placeholder="1" class="form-control" name="series_index" id="series_index" value="{{book.series_index|formatfloat(2)}}">
</div> </div>
<div class="form-group">
<label for="rating">{{_('Rating')}}</label>
<input type="number" name="rating" id="rating" class="rating input-lg" data-clearable="" value="{% if book.ratings %}{{(book.ratings[0].rating / 2)|int}}{% endif %}">
</div>
{% if current_user.role_upload() and g.allow_upload %}
<div class="form-group">
<label for="cover_url">{{_('Fetch Cover from URL (JPEG - Image will be downloaded and stored in database)')}}</label>
<input type="text" class="form-control" name="cover_url" id="cover_url" value="">
</div>
<div class="form-group" aria-label="Upload cover from local drive">
<label class="btn btn-primary btn-file" for="btn-upload-cover">{{ _('Upload Cover from Local Disk') }}</label>
<div class="upload-cover-input-text" id="upload-cover"></div>
<input id="btn-upload-cover" name="btn-upload-cover" type="file" accept=".jpg, .jpeg, .png, .webp">
</div>
{% endif %}
<label for="pubdate">{{_('Published Date')}}</label> <label for="pubdate">{{_('Published Date')}}</label>
<div class="form-group input-group"> <div class="form-group input-group">
<input type="text" class="datepicker form-control" name="pubdate" id="pubdate" value="{% if book.pubdate %}{{book.pubdate|formatdateinput}}{% endif %}"> <input type="text" class="datepicker form-control" name="pubdate" id="pubdate" value="{% if book.pubdate %}{{book.pubdate|formatdateinput}}{% endif %}">
@@ -126,6 +106,39 @@
<label for="languages">{{_('Language')}}</label> <label for="languages">{{_('Language')}}</label>
<input type="text" class="form-control typeahead" autocomplete="off" name="languages" id="languages" value="{% for language in book.languages %}{{language.language_name.strip()}}{% if not loop.last %}, {% endif %}{% endfor %}"> <input type="text" class="form-control typeahead" autocomplete="off" name="languages" id="languages" value="{% for language in book.languages %}{{language.language_name.strip()}}{% if not loop.last %}, {% endif %}{% endfor %}">
</div> </div>
<div class="form-group">
<label for="rating">{{_('Rating')}}</label>
<input type="number" name="rating" id="rating" class="rating input-lg" data-clearable="" value="{% if book.ratings %}{{(book.ratings[0].rating / 2)|int}}{% endif %}">
</div>
<div class="form-group">
<label for="comments">{{_('Description')}}</label>
<textarea class="form-control" name="comments" id="comments" rows="7">{% if book.comments %}{{book.comments[0].text}}{%endif%}</textarea>
</div>
<div class="form-group">
<label>{{_('Identifiers')}}</label>
<table class="table" id="identifier-table"><tbody>
{% for identifier in book.identifiers %}
<tr>
<td><input type="text" class="form-control" name="identifier-type-{{identifier.type}}" value="{{identifier.type}}" required="required" placeholder="{{_('Identifier Type')}}"></td>
<td><input type="text" class="form-control" name="identifier-val-{{identifier.type}}" value="{{identifier.val}}" required="required" placeholder="{{_('Identifier Value')}}"></td>
<td><a class="btn btn-default" onclick="removeIdentifierLine(this)">{{_('Remove')}}</a></td>
</tr>
{% endfor %}
</tbody>
</table>
<a id="add-identifier-line" class="btn btn-default">{{_('Add Identifier')}}</a>
</div>
{% if current_user.role_upload() and g.allow_upload %}
<div class="form-group">
<label for="cover_url">{{_('Fetch Cover from URL (JPEG - Image will be downloaded and stored in database)')}}</label>
<input type="text" class="form-control" name="cover_url" id="cover_url" value="">
</div>
<div class="form-group" aria-label="Upload cover from local drive">
<label class="btn btn-primary btn-file" for="btn-upload-cover">{{ _('Upload Cover from Local Disk') }}</label>
<div class="upload-cover-input-text" id="upload-cover"></div>
<input id="btn-upload-cover" name="btn-upload-cover" type="file" accept=".jpg, .jpeg, .png, .webp">
</div>
{% endif %}
{% if cc|length > 0 %} {% if cc|length > 0 %}
{% for c in cc %} {% for c in cc %}
<div class="form-group"> <div class="form-group">
@@ -196,13 +209,6 @@
</div> </div>
{% endfor %} {% endfor %}
{% endif %} {% endif %}
{% if current_user.role_upload() and g.allow_upload %}
<div role="group" aria-label="Upload new book format">
<label class="btn btn-primary btn-file" for="btn-upload-format">{{ _('Upload Format') }}</label>
<div class="upload-format-input-text" id="upload-format"></div>
<input id="btn-upload-format" name="btn-upload-format" type="file">
</div>
{% endif %}
<div class="checkbox"> <div class="checkbox">
<label> <label>
@@ -288,7 +294,7 @@
'no_result': {{_('No Result(s) found! Please try another keyword.')|safe|tojson}}, 'no_result': {{_('No Result(s) found! Please try another keyword.')|safe|tojson}},
'author': {{_('Author')|safe|tojson}}, 'author': {{_('Author')|safe|tojson}},
'publisher': {{_('Publisher')|safe|tojson}}, 'publisher': {{_('Publisher')|safe|tojson}},
'description': {{_('Description')|safe|tojson}}, 'comments': {{_('Description')|safe|tojson}},
'source': {{_('Source')|safe|tojson}}, 'source': {{_('Source')|safe|tojson}},
}; };
var language = '{{ current_user.locale }}'; var language = '{{ current_user.locale }}';


@@ -66,7 +66,7 @@
{{ text_table_row('authors', _('Enter Authors'),_('Authors'), true, true) }} {{ text_table_row('authors', _('Enter Authors'),_('Authors'), true, true) }}
{{ text_table_row('tags', _('Enter Categories'),_('Categories'), false, true) }} {{ text_table_row('tags', _('Enter Categories'),_('Categories'), false, true) }}
{{ text_table_row('series', _('Enter Series'),_('Series'), false, true) }} {{ text_table_row('series', _('Enter Series'),_('Series'), false, true) }}
<th data-field="series_index" id="series_index" data-visible="{{visiblility.get('series_index')}}" data-edit-validate="{{ _('This Field is Required') }}" data-sortable="true" {% if current_user.role_edit() %} data-editable-type="number" data-editable-placeholder="1" data-editable-step="0.01" data-editable-min="0" data-editable-url="{{ url_for('edit-book.edit_list_book', param='series_index')}}" data-edit="true" data-editable-title="{{_('Enter Title')}}"{% endif %}>{{_('Series Index')}}</th> <th data-field="series_index" id="series_index" data-visible="{{visiblility.get('series_index')}}" data-formatter="seriesIndexFormatter" data-edit-validate="{{ _('This Field is Required') }}" data-sortable="true" {% if current_user.role_edit() %} data-editable-type="number" data-editable-placeholder="1" data-editable-step="0.01" data-editable-min="0" data-editable-url="{{ url_for('edit-book.edit_list_book', param='series_index')}}" data-edit="true" data-editable-title="{{_('Enter Title')}}"{% endif %}>{{_('Series Index')}}</th>
{{ text_table_row('languages', _('Enter Languages'),_('Languages'), false, true) }} {{ text_table_row('languages', _('Enter Languages'),_('Languages'), false, true) }}
<!--th data-field="pubdate" data-type="date" data-visible="{{visiblility.get('pubdate')}}" data-viewformat="dd.mm.yyyy" id="pubdate" data-sortable="true">{{_('Publishing Date')}}</th--> <!--th data-field="pubdate" data-type="date" data-visible="{{visiblility.get('pubdate')}}" data-viewformat="dd.mm.yyyy" id="pubdate" data-sortable="true">{{_('Publishing Date')}}</th-->
{{ text_table_row('publishers', _('Enter Publishers'),_('Publishers'), false, true) }} {{ text_table_row('publishers', _('Enter Publishers'),_('Publishers'), false, true) }}


@@ -62,7 +62,7 @@
{% endif %} {% endif %}
{% endif %} {% endif %}
<div class="col-sm-12"> <div class="col-sm-12">
<div id="db_submit" name="submit" class="btn btn-default">{{_('Save')}}</div> <button id="db_submit" type="submit" class="btn btn-default">{{_('Save')}}</button>
<a href="{{ url_for('admin.admin') }}" id="config_back" class="btn btn-default">{{_('Cancel')}}</a> <a href="{{ url_for('admin.admin') }}" id="config_back" class="btn btn-default">{{_('Cancel')}}</a>
</div> </div>
</form> </form>

cps/templates/config_edit.html (Executable file → Normal file)

@@ -411,7 +411,7 @@
</div> </div>
<div class="form-group" style="margin-left:10px;"> <div class="form-group" style="margin-left:10px;">
<input type="checkbox" id="config_password_character" name="config_password_character" {% if config.config_password_character %}checked{% endif %}> <input type="checkbox" id="config_password_character" name="config_password_character" {% if config.config_password_character %}checked{% endif %}>
<label for="config_password_lower">{{_('Enforce characters (needed For Chinese/Japanese/Korean Characters)')}}</label> <label for="config_password_character">{{_('Enforce characters (needed For Chinese/Japanese/Korean Characters)')}}</label>
</div> </div>
<div class="form-group" style="margin-left:10px;"> <div class="form-group" style="margin-left:10px;">
<input type="checkbox" id="config_password_special" name="config_password_special" {% if config.config_password_special %}checked{% endif %}> <input type="checkbox" id="config_password_special" name="config_password_special" {% if config.config_password_special %}checked{% endif %}>


@@ -1,4 +1,12 @@
{% extends is_xhr|yesno("fragment.html", "layout.html") %} {% extends is_xhr|yesno("fragment.html", "layout.html") %}
{% block header %}
<meta property="og:type" content="book" />
<meta property="og:title" content="{{ entry.title|truncate(35) }}" />
{% if entry.comments|length > 0 and entry.comments[0].text|length > 0 %}
<meta property="og:description" content="{{ entry.comments[0].text|striptags|truncate(65) }}" />
<meta property="og:image" content="{{url_for('web.get_cover', book_id=entry.id, resolution='og', c=entry|last_modified)}}" />
{% endif %}
{% endblock %}
{% block body %} {% block body %}
<div class="single"> <div class="single">
<div class="row"> <div class="row">
@@ -20,7 +28,7 @@
{{ _('Download') }} : {{ _('Download') }} :
</button> </button>
{% for format in entry.data %} {% for format in entry.data %}
<a href="{{ url_for('web.download_link', book_id=entry.id, book_format=format.format|lower, anyname=entry.id|string+'.'+format.format|lower) }}" <a href="{{ url_for('web.download_link', book_id=entry.id, book_format=format.format|lower, anyname=entry.id|string+'.'+format.format|lower|replace('kepub', 'kepub.epub')) }}"
id="btnGroupDrop1{{ format.format|lower }}" class="btn btn-primary" id="btnGroupDrop1{{ format.format|lower }}" class="btn btn-primary"
role="button"> role="button">
<span class="glyphicon glyphicon-download"></span>{{ format.format }} <span class="glyphicon glyphicon-download"></span>{{ format.format }}
@@ -36,7 +44,7 @@
<ul class="dropdown-menu" aria-labelledby="btnGroupDrop1"> <ul class="dropdown-menu" aria-labelledby="btnGroupDrop1">
{% for format in entry.data %} {% for format in entry.data %}
<li> <li>
<a href="{{ url_for('web.download_link', book_id=entry.id, book_format=format.format|lower, anyname=entry.id|string+'.'+format.format|lower) }}">{{ format.format }} <a href="{{ url_for('web.download_link', book_id=entry.id, book_format=format.format|lower, anyname=entry.id|string+'.'+format.format|lower|replace('kepub', 'kepub.epub')) }}">{{ format.format }}
({{ format.uncompressed_size|filesizeformat }})</a></li> ({{ format.uncompressed_size|filesizeformat }})</a></li>
{% endfor %} {% endfor %}
</ul> </ul>
@@ -147,7 +155,7 @@
</div> </div>
{% endif %} {% endif %}
{% if entry.series|length > 0 %} {% if entry.series|length > 0 %}
<p>{{ _("Book %(index)s of %(range)s", index=entry.series_index | formatfloat(2), range=(url_for('web.books_list', data='series', sort_param='stored', book_id=entry.series[0].id)|escapedlink(entry.series[0].name))|safe) }}</p> <p>{{ _("Book %(index)s of %(range)s", index=entry.series_index|formatfloat(2), range=(url_for('web.books_list', data='series', sort_param='stored', book_id=entry.series[0].id)|escapedlink(entry.series[0].name))|safe) }}</p>
{% endif %} {% endif %}


@@ -70,7 +70,7 @@
{% endif %} {% endif %}
{% for format in entry.Books.data %} {% for format in entry.Books.data %}
<link rel="http://opds-spec.org/acquisition" href="{{ url_for('opds.opds_download_link', book_id=entry.Books.id, book_format=format.format|lower)}}" <link rel="http://opds-spec.org/acquisition" href="{{ url_for('opds.opds_download_link', book_id=entry.Books.id, book_format=format.format|lower)}}"
length="{{format.uncompressed_size}}" mtime="{{entry.Books.atom_timestamp}}" type="{{format.format|lower|mimetype}}"/> length="{{format.uncompressed_size}}" title="{{format.format}}" mtime="{{entry.Books.atom_timestamp}}" type="{{format.format|lower|mimetype}}"/>
{% endfor %} {% endfor %}
</entry> </entry>
{% endfor %} {% endfor %}


@@ -48,7 +48,11 @@
<div class="row"> <div class="row">
<div class="col errorlink"> <div class="col errorlink">
{% if not unconfigured %} {% if not unconfigured %}
<a href="{{url_for('web.index')}}" title="{{ _('Return to Home') }}">{{_('Return to Home')}}</a> {% if goto_admin %}
<a href="{{url_for('admin.db_configuration')}}" title="{{ _('Return to Database config') }}">{{_('Return to Database config')}}</a>
{% else %}
<a href="{{url_for('web.index')}}" title="{{ _('Return to Home') }}">{{_('Return to Home')}}</a>
{% endif %}
{% else %} {% else %}
<a href="{{url_for('web.logout')}}" title="{{ _('Logout User') }}">{{ _('Logout User') }}</a> <a href="{{url_for('web.logout')}}" title="{{ _('Logout User') }}">{{ _('Logout User') }}</a>
{% endif %} {% endif %}


@@ -16,7 +16,7 @@
<img <img
srcset="{{ srcset }}" srcset="{{ srcset }}"
src="{{ url_for('web.get_series_cover', series_id=series.id, resolution='og', c='day'|cache_timestamp) }}" src="{{ url_for('web.get_series_cover', series_id=series.id, resolution='og', c='day'|cache_timestamp) }}"
alt="{{ book_title }}" alt="{{ title }}"
loading="lazy" loading="lazy"
/> />
{%- endmacro %} {%- endmacro %}


@@ -42,7 +42,7 @@
<a href="{{url_for('web.books_list', data='series', sort_param='stored', book_id=entry.Books.series[0].id )}}"> <a href="{{url_for('web.books_list', data='series', sort_param='stored', book_id=entry.Books.series[0].id )}}">
{{entry.Books.series[0].name}} {{entry.Books.series[0].name}}
</a> </a>
({{entry.Books.series_index|formatseriesindex}}) ({{entry.Books.series_index|formatfloat(2)}})
</p> </p>
{% endif %} {% endif %}
{% if entry.Books.ratings.__len__() > 0 %} {% if entry.Books.ratings.__len__() > 0 %}
@@ -119,11 +119,9 @@
<a class="author-name" href="{{url_for('web.books_list', data='author', book_id=author.id, sort_param='stored') }}">{{author.name.replace('|',',')|shortentitle(30)}}</a> <a class="author-name" href="{{url_for('web.books_list', data='author', book_id=author.id, sort_param='stored') }}">{{author.name.replace('|',',')|shortentitle(30)}}</a>
{% endif %} {% endif %}
{% endfor %} {% endfor %}
{% for format in entry.Books.data %} {% if entry.Books.data|music %}
{% if format.format|lower in g.constants.EXTENSIONS_AUDIO %}
<span class="glyphicon glyphicon-music"></span> <span class="glyphicon glyphicon-music"></span>
{% endif %} {% endif %}
{%endfor%}
</p> </p>
{% if entry.Books.series.__len__() > 0 %} {% if entry.Books.series.__len__() > 0 %}
<p class="series"> <p class="series">
@@ -134,7 +132,7 @@
{% else %} {% else %}
<span>{{entry.Books.series[0].name}}</span> <span>{{entry.Books.series[0].name}}</span>
{% endif %} {% endif %}
({{entry.Books.series_index|formatseriesindex}}) ({{entry.Books.series_index|formatfloat(2)}})
</p> </p>
{% endif %} {% endif %}
{% if entry.Books.ratings.__len__() > 0 %} {% if entry.Books.ratings.__len__() > 0 %}


@@ -37,11 +37,12 @@
<a class="navbar-brand" href="{{url_for('web.index')}}">{{instance}}</a> <a class="navbar-brand" href="{{url_for('web.index')}}">{{instance}}</a>
</div> </div>
{% if g.current_theme == 1 %} {% if g.current_theme == 1 %}
<div class="home-btn"><a class="home-btn-tooltip" href="{{url_for("web.index",page=1)}}" data-toggle="tooltip" title="" data-placement="bottom" data-original-title="Home"></a></div> <div class="home-btn"><a class="home-btn-tooltip" href="{{url_for("web.index", page=1)}}" data-toggle="tooltip" title="" data-placement="bottom" data-original-title="Home"></a></div>
<div class="plexBack"><a href="{{url_for('web.index')}}"></a></div> <div class="plexBack"><a href="{{url_for('web.index')}}"></a></div>
{% endif %} {% endif %}
{% if current_user.is_authenticated or g.allow_anonymous %} {% if current_user.is_authenticated or g.allow_anonymous %}
<form class="navbar-form navbar-left" role="search" action="{{url_for('search.simple_search')}}" method="GET"> <!--# margin 0, padding 15, background color-->
<form class="navbar-form navbar-left" role="search" action="{{url_for('search.simple_search')}}" method="GET">
<div class="form-group input-group input-group-sm"> <div class="form-group input-group input-group-sm">
<label for="query" class="sr-only">{{_('Search')}}</label> <label for="query" class="sr-only">{{_('Search')}}</label>
<input type="text" class="form-control" id="query" name="query" placeholder="{{_('Search Library')}}" value="{{searchterm}}"> <input type="text" class="form-control" id="query" name="query" placeholder="{{_('Search Library')}}" value="{{searchterm}}">
@@ -55,6 +56,7 @@
{% if current_user.is_authenticated or g.allow_anonymous %} {% if current_user.is_authenticated or g.allow_anonymous %}
<ul class="nav navbar-nav "> <ul class="nav navbar-nav ">
<li><a href="{{url_for('search.advanced_search')}}" id="advanced_search"><span class="glyphicon glyphicon-search"></span><span class="hidden-sm"> {{_('Advanced Search')}}</span></a></li> <li><a href="{{url_for('search.advanced_search')}}" id="advanced_search"><span class="glyphicon glyphicon-search"></span><span class="hidden-sm"> {{_('Advanced Search')}}</span></a></li>
{% if simple==true %} <li><a href="{{url_for('basic.index')}}" id="basic"><span class="glyphicon glyphicon-phone"></span><span>{{_('Simple Theme')}}</span></a><li>{% endif %}
</ul> </ul>
{% endif %} {% endif %}
<ul class="nav navbar-nav navbar-right" id="main-nav"> <ul class="nav navbar-nav navbar-right" id="main-nav">
@@ -80,6 +82,7 @@
<div class="form-group"> <div class="form-group">
<span class="btn btn-default btn-file">{{_('Upload')}}<input id="btn-upload" name="btn-upload" <span class="btn btn-default btn-file">{{_('Upload')}}<input id="btn-upload" name="btn-upload"
type="file" accept="{% for format in accept %}.{% if format != ''%}{{format}}{% else %}*{% endif %}{{ ',' if not loop.last }}{% endfor %}" multiple></span> type="file" accept="{% for format in accept %}.{% if format != ''%}{{format}}{% else %}*{% endif %}{{ ',' if not loop.last }}{% endfor %}" multiple></span>
<input class="hide" id="btn-upload2" name="btn-upload2" type="file" accept="{% for format in accept %}.{% if format != ''%}{{format}}{% else %}*{% endif %}{{ ',' if not loop.last }}{% endfor %}">
</div> </div>
</form> </form>
</li> </li>


@@ -59,7 +59,7 @@
</div> </div>
{% endif %} {% endif %}
{% if entry.series|length > 0 %} {% if entry.series|length > 0 %}
<p>{{_("Book %(index)s of %(range)s", index=entry.series_index | formatfloat(2), range=(url_for('web.books_list', data='series', sort_param='stored', book_id=entry.series[0].id)|escapedlink(entry.series[0].name))|safe)}}</p> <p>{{_("Book %(index)s of %(range)s", index=entry.series_index|formatfloat(2), range=(url_for('web.books_list', data='series', sort_param='stored', book_id=entry.series[0].id)|escapedlink(entry.series[0].name))|safe)}}</p>
{% endif %} {% endif %}


@@ -1,10 +0,0 @@
{
"input": {
"placeholder": "a placeholder"
},
"nav": {
"home": "Home",
"page1": "Page One",
"page2": "Page Two"
}
}


@@ -77,7 +77,7 @@
<div class="md-content"> <div class="md-content">
<h3>{{_('Settings')}}</h3> <h3>{{_('Settings')}}</h3>
<div class="form-group themes" id="themes"> <div class="form-group themes" id="themes">
Choose a theme below: <br /> {{_('Choose a theme below:')}}}<br />
<!-- Hardcoded a tick in the light theme button because it is the "default" theme. Need to find a way to do this dynamically on startup--> <!-- Hardcoded a tick in the light theme button because it is the "default" theme. Need to find a way to do this dynamically on startup-->
<button type="button" id="lightTheme" class="lightTheme" onclick="selectTheme(this.id)"><span <button type="button" id="lightTheme" class="lightTheme" onclick="selectTheme(this.id)"><span
@@ -126,8 +126,8 @@
window.calibre = { window.calibre = {
filePath: "{{ url_for('static', filename='js/libs/') }}", filePath: "{{ url_for('static', filename='js/libs/') }}",
cssPath: "{{ url_for('static', filename='css/') }}", cssPath: "{{ url_for('static', filename='css/') }}",
bookmarkUrl: "{{ url_for('web.set_bookmark', book_id=bookid, book_format='EPUB') }}", bookmarkUrl: "{{ url_for('web.set_bookmark', book_id=bookid, book_format=book_format) }}",
bookUrl: "{{ url_for('web.serve_book', book_id=bookid, book_format='epub', anyname='file.epub') }}", bookUrl: "{{ url_for('web.serve_book', book_id=bookid, book_format=book_format, anyname='file.epub') }}",
bookmark: "{{ bookmark.bookmark_key if bookmark != None }}", bookmark: "{{ bookmark.bookmark_key if bookmark != None }}",
useBookmarks: "{{ current_user.is_authenticated | tojson }}" useBookmarks: "{{ current_user.is_authenticated | tojson }}"
}; };
@@ -135,19 +135,23 @@
window.themes = { window.themes = {
"darkTheme": { "darkTheme": {
"bgColor": "#202124", "bgColor": "#202124",
"css_path": "{{ url_for('static', filename='css/epub_themes.css') }}" "css_path": "{{ url_for('static', filename='css/epub_themes.css') }}",
"title-color": "#fff"
}, },
"lightTheme": { "lightTheme": {
"bgColor": "white", "bgColor": "white",
"css_path": "{{ url_for('static', filename='css/epub_themes.css') }}" "css_path": "{{ url_for('static', filename='css/epub_themes.css') }}",
"title-color": "#4f4f4f"
}, },
"sepiaTheme": { "sepiaTheme": {
"bgColor": "#ece1ca", "bgColor": "#ece1ca",
"css_path": "{{ url_for('static', filename='css/epub_themes.css') }}" "css_path": "{{ url_for('static', filename='css/epub_themes.css') }}",
"title-color": "#4f4f4f"
}, },
"blackTheme": { "blackTheme": {
"bgColor": "black", "bgColor": "black",
"css_path": "{{ url_for('static', filename='css/epub_themes.css') }}" "css_path": "{{ url_for('static', filename='css/epub_themes.css') }}",
"title-color": "#fff"
}, },
}; };
@@ -170,6 +174,8 @@
// Apply theme to rest of the page. // Apply theme to rest of the page.
document.getElementById("main").style.backgroundColor = themes[id]["bgColor"]; document.getElementById("main").style.backgroundColor = themes[id]["bgColor"];
document.getElementById("titlebar").style.color = themes[id]["title-color"] || "#fff";
document.getElementById("progress").style.color = themes[id]["title-color"] || "#fff";
} }
// font size settings logic // font size settings logic
@@ -210,6 +216,6 @@
<script src="{{ url_for('static', filename='js/libs/screenfull.min.js') }}"></script> <script src="{{ url_for('static', filename='js/libs/screenfull.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/libs/reader.min.js') }}"></script> <script src="{{ url_for('static', filename='js/libs/reader.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/reading/epub.js') }}"></script> <script src="{{ url_for('static', filename='js/reading/epub.js') }}"></script>
<script src="{{ url_for('static', filename='js/reading/epub-progress.js') }}"></script> <!--script src="{{ url_for('static', filename='js/reading/locationchange-polyfill.js') }}"></script-->
</body> </body>
</html> </html>


@@ -35,7 +35,7 @@ See https://github.com/adobe-type-tools/cmap-resources
<link rel="stylesheet" href="{{ url_for('static', filename='css/libs/viewer.css') }}"> <link rel="stylesheet" href="{{ url_for('static', filename='css/libs/viewer.css') }}">
<!-- This snippet is used in production (included from viewer.html) --> <!-- This snippet is used in production (included from viewer.html) -->
<link rel="resource" type="application/l10n" href="{{ url_for('static', filename='locale/locale.json') }}"> <link rel="resource" type="application/l10n" href="{{ url_for('static', filename='locale/locale.json') }}">
<script src="{{ url_for('static', filename='js/libs/pdf.mjs') }}" type="module"></script> <script src="{{ url_for('static', filename='js/libs/pdf.js') }}" type="module"></script>
<script type="text/javascript"> <script type="text/javascript">
window.addEventListener('webviewerloaded', function() { window.addEventListener('webviewerloaded', function() {
@@ -46,12 +46,12 @@ See https://github.com/adobe-type-tools/cmap-resources
PDFViewerApplicationOptions.set('cMapUrl', "{{ url_for('static', filename='cmaps/') }}"); PDFViewerApplicationOptions.set('cMapUrl', "{{ url_for('static', filename='cmaps/') }}");
PDFViewerApplicationOptions.set('sidebarViewOnLoad', 0); PDFViewerApplicationOptions.set('sidebarViewOnLoad', 0);
PDFViewerApplicationOptions.set('imageResourcesPath', "{{ url_for('static', filename='css/images/') }}"); PDFViewerApplicationOptions.set('imageResourcesPath', "{{ url_for('static', filename='css/images/') }}");
PDFViewerApplicationOptions.set('workerSrc', "{{ url_for('static', filename='js/libs/pdf.worker.mjs') }}"); PDFViewerApplicationOptions.set('workerSrc', "{{ url_for('static', filename='js/libs/pdf.worker.js') }}");
PDFViewerApplicationOptions.set('defaultUrl',"{{ url_for('web.serve_book', book_id=pdffile, book_format='pdf') }}") PDFViewerApplicationOptions.set('defaultUrl',"{{ url_for('web.serve_book', book_id=pdffile, book_format='pdf') }}")
}); });
</script> </script>
<script src="{{ url_for('static', filename='js/libs/viewer.mjs') }}" type="module">></script> <script src="{{ url_for('static', filename='js/libs/viewer.js') }}" type="module">></script>
</head> </head>


@@ -73,18 +73,16 @@
<a class="author-name" href="{{url_for('web.books_list', data='author', sort_param='stored', book_id=author.id) }}">{{author.name.replace('|',',')|shortentitle(30)}}</a> <a class="author-name" href="{{url_for('web.books_list', data='author', sort_param='stored', book_id=author.id) }}">{{author.name.replace('|',',')|shortentitle(30)}}</a>
{% endif %} {% endif %}
{% endfor %} {% endfor %}
{% for format in entry.Books.data %} {% if entry.Books.data|music %}
{% if format.format|lower in g.constants.EXTENSIONS_AUDIO %}
<span class="glyphicon glyphicon-music"></span> <span class="glyphicon glyphicon-music"></span>
{% endif %} {% endif %}
{% endfor %}
</p> </p>
{% if entry.Books.series.__len__() > 0 %} {% if entry.Books.series.__len__() > 0 %}
<p class="series"> <p class="series">
<a href="{{url_for('web.books_list', data='series', sort_param='stored', book_id=entry.Books.series[0].id )}}"> <a href="{{url_for('web.books_list', data='series', sort_param='stored', book_id=entry.Books.series[0].id )}}">
{{entry.Books.series[0].name}} {{entry.Books.series[0].name}}
</a> </a>
({{entry.Books.series_index|formatseriesindex}}) ({{entry.Books.series_index|formatfloat(2)}})
</p> </p>
{% endif %} {% endif %}


@@ -5,12 +5,12 @@
<form role="form" id="search" action="{{ url_for('search.advanced_search_form') }}" method="POST"> <form role="form" id="search" action="{{ url_for('search.advanced_search_form') }}" method="POST">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"> <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
<div class="form-group"> <div class="form-group">
<label for="book_title">{{_('Book Title')}}</label> <label for="title">{{_('Book Title')}}</label>
<input type="text" class="form-control" name="book_title" id="book_title" value=""> <input type="text" class="form-control" name="title" id="title" value="">
</div> </div>
<div class="form-group"> <div class="form-group">
<label for="bookAuthor">{{_('Author')}}</label> <label for="authors">{{_('Author')}}</label>
<input type="text" class="form-control typeahead" name="author_name" id="bookAuthor" value="" autocomplete="off"> <input type="text" class="form-control typeahead" name="authors" id="authors" value="" autocomplete="off">
</div> </div>
<div class="form-group"> <div class="form-group">
<label for="Publisher">{{_('Publisher')}}</label> <label for="Publisher">{{_('Publisher')}}</label>
@@ -151,8 +151,8 @@
</div> </div>
</div> </div>
<div class="form-group"> <div class="form-group">
<label for="comment">{{_('Description')}}</label> <label for="comments">{{_('Description')}}</label>
<input type="text" class="form-control" name="comment" id="comment" value=""> <input type="text" class="form-control" name="comments" id="comments" value="">
</div> </div>
{% if cc|length > 0 %} {% if cc|length > 0 %}

Some files were not shown because too many files have changed in this diff.