Mirror of https://github.com/janeczku/calibre-web (synced 2025-09-06 12:57:58 +00:00)

Commit: Merge remote-tracking branch 'upstream'
2	.github/ISSUE_TEMPLATE/bug_report.md (vendored)

@@ -11,7 +11,7 @@ assignees: ''

 After 6 years of more or less intensive programming on Calibre-Web, I need a break.
 The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
-I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
+I have turned off all notifications from GitHub/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
 I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.
2	.github/ISSUE_TEMPLATE/feature_request.md (vendored)

@@ -11,7 +11,7 @@ assignees: ''

 After 6 years of more or less intensive programming on Calibre-Web, I need a break.
 The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
-I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
+I have turned off all notifications from GitHub/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
 I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.

 Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)
CONTRIBUTING.md

@@ -20,7 +20,7 @@ Some of the user languages in Calibre-Web having missing translations. We are ha

 ### **Documentation**

-The Calibre-Web documentation is hosted in the Github [Wiki](https://github.com/janeczku/calibre-web/wiki). The Wiki is open to everybody, if you find a problem, feel free to correct it. If information is missing, you are welcome to add it. The content will be reviewed time by time. Please try to be consistent with the form with the other Wiki pages (e.g. the project name is Calibre-Web with 2 capital letters and a dash in between).
+The Calibre-Web documentation is hosted in the GitHub [Wiki](https://github.com/janeczku/calibre-web/wiki). The Wiki is open to everybody, if you find a problem, feel free to correct it. If information is missing, you are welcome to add it. The content will be reviewed time by time. Please try to be consistent with the form with the other Wiki pages (e.g. the project name is Calibre-Web with 2 capital letters and a dash in between).

 ### **Reporting a bug**

@@ -28,12 +28,12 @@ Do not open up a GitHub issue if the bug is a **security vulnerability** in Cali

 Ensure the **bug was not already reported** by searching on GitHub under [Issues](https://github.com/janeczku/calibre-web/issues). Please also check if a solution for your problem can be found in the [wiki](https://github.com/janeczku/calibre-web/wiki).

-If you're unable to find an **open issue** addressing the problem, open a [new one](https://github.com/janeczku/calibre-web/issues/new/choose). Be sure to include a **title** and **clear description**, as much relevant information as possible, the **issue form** helps you providing the right information. Deleting the form and just pasting the stack trace doesn't speed up fixing the problem. If your issue could be resolved, consider closing the issue.
+If you're unable to find an **open issue** addressing the problem, open a [new one](https://github.com/janeczku/calibre-web/issues/new/choose). Be sure to include a **title** and **clear description**, as much relevant information as possible, the **issue form** helps you provide the right information. Deleting the form and just pasting the stack trace doesn't speed up fixing the problem. If your issue could be resolved, consider closing the issue.

 ### **Feature Request**

 If there is a feature missing in Calibre-Web and you can't find a feature request in the [Issues](https://github.com/janeczku/calibre-web/issues) section, you could create a [feature request](https://github.com/janeczku/calibre-web/issues/new?assignees=&labels=&template=feature_request.md&title=).
-We will not extend Calibre-Web with any more login abilities or add further files storages, or file syncing ability. Furthermore Calibre-Web is made for home usage for company in-house usage, so requests regarding any sorts of social interaction capability, payment routines, search engine or web site analytics integration will not be implemented.
+We will not extend Calibre-Web with any more login abilities or add further files storages, or file syncing ability. Calibre-Web is made for home usage for company in-house usage, so requests regarding any sorts of social interaction capability, payment routines, search engine or website analytics integration will not be implemented.

 ### **Contributing code to Calibre-Web**

@@ -42,5 +42,5 @@ Open a new GitHub pull request with the patch. Ensure the PR description clearly

 In case your code enhances features of Calibre-Web: Create your pull request for the development branch if your enhancement consists of more than some lines of code in a local section of Calibre-Webs code. This makes it easier to test it and check all implication before it's made public.

 Please check if your code runs with python 3, python 2 is no longer supported. If possible and the feature is related to operating system functions, try to check it on Windows and Linux.
-Calibre-Web is automatically tested on Linux in combination with python 3.8. The code for testing is in a [separate repo](https://github.com/OzzieIsaacs/calibre-web-test) on Github. It uses unit tests and performs real system tests with selenium; it would be great if you could consider also writing some tests.
+Calibre-Web is automatically tested on Linux in combination with python 3.8. The code for testing is in a [separate repo](https://github.com/OzzieIsaacs/calibre-web-test) on GitHub. It uses unit tests and performs real system tests with selenium; it would be great if you could consider also writing some tests.
 A static code analysis is done by Codacy, but it's partly broken and doesn't run automatically. You could check your code with ESLint before contributing, a configuration file can be found in the projects root folder.
MANIFEST.in

@@ -1 +1,3 @@
 graft src/calibreweb
+global-exclude __pycache__
+global-exclude *.pyc
146	README.md

@@ -1,10 +1,3 @@
-# Short Notice from the maintainer
-
-After 6 years of more or less intensive programming on Calibre-Web, I need a break.
-The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
-I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
-I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.
-
 # Calibre-Web

 Calibre-Web is a web app that offers a clean and intuitive interface for browsing, reading, and downloading eBooks using a valid [Calibre](https://calibre-ebook.com) database.
@@ -26,13 +19,13 @@ Calibre-Web is a web app that offers a clean and intuitive interface for browsin
   - [Quick start](#quick-start)
   - [Requirements](#requirements)
 4. [Docker Images](#docker-images)
-5. [Contributor Recognition](#contributor-recognition)
-6. [Contact](#contact)
-7. [Contributing to Calibre-Web](#contributing-to-calibre-web)
+5. [Troubleshooting](#troubleshooting)
+6. [Contributor Recognition](#contributor-recognition)
+7. [Contact](#contact)
+8. [Contributing to Calibre-Web](#contributing-to-calibre-web)

 </details>

 *This software is a fork of [library](https://github.com/mutschler/calibreserver) and licensed under the GPL v3 License.*

 
@@ -64,52 +57,102 @@ Calibre-Web is a web app that offers a clean and intuitive interface for browsin

 ## Installation

-#### Installation via pip (recommended)
-1. Create a virtual environment for Calibre-Web to avoid conflicts with existing Python dependencies
-2. Install Calibre-Web via pip: `pip install calibreweb` (or `pip3` depending on your OS/distro)
-3. Install optional features via pip as needed, see [this page](https://github.com/janeczku/calibre-web/wiki/Dependencies-in-Calibre-Web-Linux-and-Windows) for details
-4. Start Calibre-Web by typing `cps`
-
-*Note: Raspberry Pi OS users may encounter issues during installation. If so, please update pip (`./venv/bin/python3 -m pip install --upgrade pip`) and/or install cargo (`sudo apt install cargo`) before retrying the installation.*
-
-Refer to the Wiki for additional installation examples: [manual installation](https://github.com/janeczku/calibre-web/wiki/Manual-installation), [Linux Mint](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-in-Linux-Mint-19-or-20), [Cloud Provider](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-on-a-Cloud-Provider).
+### Installation via pip (recommended)
+
+1. **Create a virtual environment**: It’s essential to isolate your Calibre-Web installation to avoid dependency conflicts. You can create a virtual environment by running:
+   ```
+   python3 -m venv calibre-web-env
+   ```
+2. **Activate the virtual environment**:
+   ```
+   source calibre-web-env/bin/activate
+   ```
+3. **Install Calibre-Web**: Use pip to install the application:
+   ```
+   pip install calibreweb
+   ```
+4. **Install optional features**: For additional functionality, you may need to install optional features. Refer to [this page](https://github.com/janeczku/calibre-web/wiki/Dependencies-in-Calibre-Web-Linux-and-Windows) for details on what can be installed.
+5. **Start Calibre-Web**: After installation, you can start the application with:
+   ```
+   cps
+   ```
+
+*Note: Users of Raspberry Pi OS may encounter installation issues. If you do, try upgrading pip and/or installing cargo as follows:*
+```
+./venv/bin/python3 -m pip install --upgrade pip
+sudo apt install cargo
+```
+
+### Important Links
+- For additional installation examples, check the following:
+  - [Manual installation](https://github.com/janeczku/calibre-web/wiki/Manual-installation)
+  - [Linux Mint installation](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-in-Linux-Mint-19-or-20)
+  - [Cloud Provider setup](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-on-a-Cloud-Provider)
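The pip installation steps above can be sketched end to end with the Python standard library. The snippet below is a side-effect-free illustration only: it creates a throwaway environment inside a temporary directory, and it passes `--without-pip` so it runs fast and offline. A real install would omit that flag and then run `pip install calibreweb` inside the environment.

```python
import os
import subprocess
import sys
import tempfile

# Step 1 of the README, `python3 -m venv calibre-web-env`, done in a temp dir
# so this sketch leaves no files behind in the working directory.
env_dir = os.path.join(tempfile.mkdtemp(), "calibre-web-env")
subprocess.run([sys.executable, "-m", "venv", "--without-pip", env_dir], check=True)

# A virtual environment is marked by its pyvenv.cfg file.
print(os.path.exists(os.path.join(env_dir, "pyvenv.cfg")))  # True

# The real installation (not executed here) would continue with:
# subprocess.run([os.path.join(env_dir, "bin", "pip"), "install", "calibreweb"], check=True)
```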
 ## Quick Start

-1. Open your browser and navigate to `http://localhost:8083` or `http://localhost:8083/opds` for the OPDS catalog
-2. Log in with the default admin credentials
-3. If you don't have a Calibre database, you can use [this database](https://github.com/janeczku/calibre-web/raw/master/library/metadata.db) (move it out of the Calibre-Web folder to prevent overwriting during updates)
-4. Set `Location of Calibre database` to the path of the folder containing your Calibre library (metadata.db) and click "Save"
-5. Optionally, use Google Drive to host your Calibre library by following the [Google Drive integration guide](https://github.com/janeczku/calibre-web/wiki/G-Drive-Setup#using-google-drive-integration)
-6. Configure your Calibre-Web instance via the admin page, referring to the [Basic Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#basic-configuration) and [UI Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#ui-configuration) guides
-
-#### Default Admin Login:
-- **Username:** admin
-- **Password:** admin123
+1. **Access Calibre-Web**: Open your browser and navigate to:
+   ```
+   http://localhost:8083
+   ```
+   or for the OPDS catalog:
+   ```
+   http://localhost:8083/opds
+   ```
+2. **Log in**: Use the default admin credentials:
+   - **Username:** admin
+   - **Password:** admin123
+3. **Database Setup**: If you do not have a Calibre database, download a sample from:
+   ```
+   https://github.com/janeczku/calibre-web/raw/master/library/metadata.db
+   ```
+   Move it out of the Calibre-Web folder to avoid overwriting during updates.
+4. **Configure Calibre Database**: In the admin interface, set the `Location of Calibre database` to the path of the folder containing your Calibre library (where `metadata.db` is located) and click "Save".
+5. **Google Drive Integration**: For hosting your Calibre library on Google Drive, refer to the [Google Drive integration guide](https://github.com/janeczku/calibre-web/wiki/G-Drive-Setup#using-google-drive-integration).
+6. **Admin Configuration**: Configure your instance via the admin page, referring to the [Basic Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#basic-configuration) and [UI Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#ui-configuration) guides.
 ## Requirements

-- Python 3.7+
-- [Imagemagick](https://imagemagick.org/script/download.php) for cover extraction from EPUBs (Windows users may need to install [Ghostscript](https://ghostscript.com/releases/gsdnld.html) for PDF cover extraction)
-- Windows users need to install [libmagic for 32bit python](https://gnuwin32.sourceforge.net/downlinks/file.php) or [libmagic for 64bit python](https://github.com/nscaife/file-windows/releases/tag/20170108), depending on the python version; The files need to be installed in path (e.g. script folder of your Calibre-Web venv, or in the root folder of Calibre-Web
-- Optional: [Calibre desktop program](https://calibre-ebook.com/download) for on-the-fly conversion and metadata editing (set "calibre's converter tool" path on the setup page)
-- Optional: [Kepubify tool](https://github.com/pgaskin/kepubify/releases/latest) for Kobo device support (place the binary in `/opt/kepubify` on Linux or `C:\Program Files\kepubify` on Windows)
+- **Python Version**: Ensure you have Python 3.7 or newer.
+- **Imagemagick**: Required for cover extraction from EPUBs. Windows users may also need to install [Ghostscript](https://ghostscript.com/releases/gsdnld.html) for PDF cover extraction.
+- **Optional Tools**:
+  - **Calibre desktop program**: Recommended for on-the-fly conversion and metadata editing. Set the path to Calibre’s converter tool on the setup page.
+  - **Kepubify tool**: Needed for Kobo device support. Download the tool and place the binary in `/opt/kepubify` on Linux or `C:\Program Files\kepubify` on Windows.
 ## Docker Images

-Pre-built Docker images are available in the following Docker Hub repositories (maintained by the LinuxServer team):
+Pre-built Docker images are available:

-#### **LinuxServer - x64, aarch64**
-- [Docker Hub](https://hub.docker.com/r/linuxserver/calibre-web)
-- [GitHub](https://github.com/linuxserver/docker-calibre-web)
-- [GitHub - Optional Calibre layer](https://github.com/linuxserver/docker-mods/tree/universal-calibre)
+### **LinuxServer - x64, aarch64**
+- **Docker Hub**: [linuxserver/calibre-web](https://hub.docker.com/r/linuxserver/calibre-web)
+- **GitHub**: [linuxserver/docker-calibre-web](https://github.com/linuxserver/docker-calibre-web)
+- **Optional Calibre layer**: [linuxserver/docker-mods](https://github.com/linuxserver/docker-mods/tree/universal-calibre)

-Include the environment variable `DOCKER_MODS=linuxserver/mods:universal-calibre` in your Docker run/compose file to add the Calibre `ebook-convert` binary (x64 only). Omit this variable for a lightweight image.
+To include the Calibre `ebook-convert` binary (x64 only), add the environment variable:
+```
+DOCKER_MODS=linuxserver/mods:universal-calibre
+```
+in your Docker run/compose file. Omit this variable for a lightweight image.

 Both the Calibre-Web and Calibre-Mod images are automatically rebuilt on new releases and updates.
-
-- Set "Path to Calibre Binaries" to `/usr/bin`
-- Set "Path to Unrar" to `/usr/bin/unrar`
+- **Paths Configuration**:
+  - Set **Path to Calibre Binaries** to `/usr/bin`.
+  - Set **Path to Unrar** to `/usr/bin/unrar`.
+## Troubleshooting
+
+- **Common Issues**:
+  - If you experience issues starting the application, check the log files located in the `logs` directory for error messages.
+  - If eBooks fail to load, verify that the `Location of Calibre database` is correctly set and that the database file is accessible.
+
+- **Configuration Errors**: Ensure that your Calibre database is compatible and properly formatted. Refer to the Calibre documentation for guidance on maintaining the database.
+
+- **Performance Problems**:
+  - If the application is slow, consider increasing the allocated resources (CPU/RAM) to your server or optimizing the Calibre database by removing duplicates and unnecessary entries.
+  - Regularly clear the cache in your web browser to improve loading times.
+
+- **User Management Issues**: If users are unable to log in or register, check the user permission settings in the admin interface. Ensure that registration is enabled and that users are being assigned appropriate roles.
+
+- **Support Resources**: For additional help, consider visiting the [FAQ section](https://github.com/janeczku/calibre-web/wiki/FAQ) of the wiki or posting your questions in the [Discord community](https://discord.gg/h2VsJ2NEfB).

 ## Contributor Recognition
@@ -123,4 +166,21 @@ For more information, How To's, and FAQs, please visit the [Wiki](https://github

 ## Contributing to Calibre-Web

-Check out our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)
+To contribute, please check our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md). We welcome issues, feature requests, and pull requests from the community.
+
+### Reporting Bugs
+
+If you encounter bugs or issues, please report them in the [issues section](https://github.com/janeczku/calibre-web/issues) of the repository. Be sure to include detailed information about your setup and the problem encountered.
+
+### Feature Requests
+
+We welcome suggestions for new features. Please create a new issue in the repository to discuss your ideas.
+
+## Additional Resources
+
+- **Documentation**: Comprehensive documentation is available on the [Calibre-Web wiki](https://github.com/janeczku/calibre-web/wiki).
+- **Community Contributions**: Explore the [community contributions](https://github.com/janeczku/calibre-web/pulls) to see ongoing work and how you can get involved.
+
+---
+
+Thank you for using Calibre-Web! We hope you enjoy managing your eBook library with our tool.
75	SECURITY.md

@@ -10,41 +10,46 @@ To receive fixes for security vulnerabilities it is required to always upgrade t

 ## History

 | Fixed in | Description | CVE number |
 |---------------|---|---|
 | 3rd July 2018 | Guest access acts as a backdoor | |
 | V 0.6.7 | Hardcoded secret key for sessions | CVE-2020-12627 |
 | V 0.6.13 | Calibre-Web Metadata cross site scripting | CVE-2021-25964 |
 | V 0.6.13 | Name of Shelves are only visible to users who can access the corresponding shelf Thanks to @ibarrionuevo | |
 | V 0.6.13 | JavaScript could get executed in the description field. Thanks to @ranjit-git and Hagai Wechsler (WhiteSource) | |
 | V 0.6.13 | JavaScript could get executed in a custom column of type "comment" field | |
 | V 0.6.13 | JavaScript could get executed after converting a book to another format with a title containing javascript code | |
 | V 0.6.13 | JavaScript could get executed after converting a book to another format with a username containing javascript code | |
 | V 0.6.13 | JavaScript could get executed in the description series, categories or publishers title | |
 | V 0.6.13 | JavaScript could get executed in the shelf title | |
 | V 0.6.13 | Login with the old session cookie after logout. Thanks to @ibarrionuevo | |
 | V 0.6.14 | CSRF was possible. Thanks to @mik317 and Hagai Wechsler (WhiteSource) | CVE-2021-25965 |
 | V 0.6.14 | Migrated some routes to POST-requests (CSRF protection). Thanks to @scara31 | CVE-2021-4164 |
 | V 0.6.15 | Fix for "javascript:" script links in identifier. Thanks to @scara31 | CVE-2021-4170 |
 | V 0.6.15 | Cross-Site Scripting vulnerability on uploaded cover file names. Thanks to @ibarrionuevo | |
 | V 0.6.15 | Creating public shelfs is now denied if user is missing the edit public shelf right. Thanks to @ibarrionuevo | |
 | V 0.6.15 | Changed error message in case of trying to delete a shelf unauthorized. Thanks to @ibarrionuevo | |
 | V 0.6.16 | JavaScript could get executed on authors page. Thanks to @alicaz | CVE-2022-0352 |
 | V 0.6.16 | Localhost can no longer be used to upload covers. Thanks to @scara31 | CVE-2022-0339 |
 | V 0.6.16 | Another case where public shelfs could be created without permission is prevented. Thanks to @nhiephon | CVE-2022-0273 |
 | V 0.6.16 | It's prevented to get the name of a private shelfs. Thanks to @nhiephon | CVE-2022-0405 |
 | V 0.6.17 | The SSRF Protection can no longer be bypassed via an HTTP redirect. Thanks to @416e6e61 | CVE-2022-0767 |
 | V 0.6.17 | The SSRF Protection can no longer be bypassed via 0.0.0.0 and it's ipv6 equivalent. Thanks to @r0hanSH | CVE-2022-0766 |
 | V 0.6.18 | Possible SQL Injection is prevented in user table Thanks to Iman Sharafaldin (Forward Security) | CVE-2022-30765 |
 | V 0.6.18 | The SSRF protection no longer can be bypassed by IPV6/IPV4 embedding. Thanks to @416e6e61 | CVE-2022-0939 |
 | V 0.6.18 | The SSRF protection no longer can be bypassed to connect to other servers in the local network. Thanks to @michaellrowley | CVE-2022-0990 |
 | V 0.6.20 | Credentials for emails are now stored encrypted | |
 | V 0.6.20 | Login is rate limited | |
 | V 0.6.20 | Passwordstrength can be forced | |
 | V 0.6.21 | SMTP server credentials are no longer returned to client | |
 | V 0.6.21 | Cross-site scripting (XSS) stored in href bypasses filter using data wrapper no longer possible | |
 | V 0.6.21 | Cross-site scripting (XSS) is no longer possible via pathchooser | |
 | V 0.6.21 | Error Handling at non existent rating, language, and user downloaded books was fixed | |
+| V 0.6.22 | Upload mimetype is checked to prevent malicious file content in the books library | |
+| V 0.6.22 | Cross-site scripting (XSS) stored in comments section is prevented better (switching from lxml to bleach for sanitizing strings) | |
+| V 0.6.23 | Cookies are no longer stored for opds basic authentication and proxy authentication | |

 ## Statement regarding Log4j (CVE-2021-44228 and related)
cps/__init__.py

@@ -35,7 +35,6 @@ from .reverseproxy import ReverseProxied
 from .server import WebServer
 from .dep_check import dependency_check
 from .updater import Updater
-from .babel import babel, get_locale
 from . import config_sql
 from . import cache_buster
 from . import ub, db
@@ -56,35 +55,41 @@ mimetypes.init()
 mimetypes.add_type('application/xhtml+xml', '.xhtml')
 mimetypes.add_type('application/epub+zip', '.epub')
 mimetypes.add_type('application/epub+zip', '.kepub')
-mimetypes.add_type('text/xml', '.fb2')
-mimetypes.add_type('application/octet-stream', '.mobi')
+mimetypes.add_type('application/fb2+zip', '.fb2')
+mimetypes.add_type('application/x-mobipocket-ebook', '.mobi')
+mimetypes.add_type('application/octet-stream', '.prc')
-mimetypes.add_type('application/vnd.amazon.ebook', '.azw')
-mimetypes.add_type('application/x-mobi8-ebook', '.azw3')
-mimetypes.add_type('application/x-rar', '.cbr')
-mimetypes.add_type('application/zip', '.cbz')
+mimetypes.add_type('application/x-mobipocket-ebook', '.azw')
+mimetypes.add_type('application/x-mobipocket-ebook', '.azw3')
+mimetypes.add_type('application/x-cbr', '.cbr')
+mimetypes.add_type('application/x-cbz', '.cbz')
+mimetypes.add_type('application/x-tar', '.cbt')
+mimetypes.add_type('application/x-7z-compressed', '.cb7')
-mimetypes.add_type('image/vnd.djv', '.djv')
-mimetypes.add_type('image/vnd.djv', '.djvu')
+mimetypes.add_type('image/vnd.djvu', '.djv')
+mimetypes.add_type('image/vnd.djvu', '.djvu')
+mimetypes.add_type('application/mpeg', '.mpeg')
 mimetypes.add_type('audio/mpeg', '.mp3')
 mimetypes.add_type('audio/x-m4a', '.m4a')
 mimetypes.add_type('audio/x-m4a', '.m4b')
 mimetypes.add_type('audio/x-hx-aac-adts', '.aac')
 mimetypes.add_type('audio/vnd.dolby.dd-raw', '.ac3')
 mimetypes.add_type('video/x-ms-asf', '.asf')
 mimetypes.add_type('audio/ogg', '.ogg')
 mimetypes.add_type('application/ogg', '.oga')
 mimetypes.add_type('text/css', '.css')
 mimetypes.add_type('application/x-ms-reader', '.lit')
-mimetypes.add_type('text/javascript; charset=UTF-8', '.js')
+mimetypes.add_type('text/javascript', '.js')
 mimetypes.add_type('text/rtf', '.rtf')
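The effect of registrations like the ones in this hunk can be checked with the standard library alone. The sketch below re-registers three of the types added above and queries them back with `mimetypes.guess_type`:

```python
import mimetypes

mimetypes.init()
# Three of the registrations from the new cps/__init__.py
mimetypes.add_type('application/epub+zip', '.kepub')
mimetypes.add_type('application/x-cbz', '.cbz')
mimetypes.add_type('application/x-mobipocket-ebook', '.azw3')

# guess_type() now resolves the ebook/comic extensions
print(mimetypes.guess_type('book.kepub')[0])   # application/epub+zip
print(mimetypes.guess_type('comic.cbz')[0])    # application/x-cbz
print(mimetypes.guess_type('novel.azw3')[0])   # application/x-mobipocket-ebook
```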

 log = logger.create()

 app = Flask(__name__)
 app.config.update(
     SESSION_COOKIE_HTTPONLY=True,
-    SESSION_COOKIE_SAMESITE='Strict',
-    REMEMBER_COOKIE_SAMESITE='Strict',  # will be available in flask-login 0.5.1 earliest
-    WTF_CSRF_SSL_STRICT=False
+    SESSION_COOKIE_SAMESITE='Lax',
+    REMEMBER_COOKIE_SAMESITE='Strict',
+    WTF_CSRF_SSL_STRICT=False,
+    SESSION_COOKIE_NAME=os.environ.get('COOKIE_PREFIX', "") + "session",
+    REMEMBER_COOKIE_NAME=os.environ.get('COOKIE_PREFIX', "") + "remember_token"
 )

 lm = MyLoginManager()
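The two new `*_COOKIE_NAME` settings derive cookie names from an optional `COOKIE_PREFIX` environment variable, so several Calibre-Web instances behind one host can keep their cookies apart. A minimal sketch of that computation (the prefix value `cw1_` is made up for illustration):

```python
import os

# Hypothetical prefix; unset COOKIE_PREFIX falls back to "" and plain names.
os.environ['COOKIE_PREFIX'] = "cw1_"

session_cookie = os.environ.get('COOKIE_PREFIX', "") + "session"
remember_cookie = os.environ.get('COOKIE_PREFIX', "") + "remember_token"

print(session_cookie)    # cw1_session
print(remember_cookie)   # cw1_remember_token
```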
@@ -98,7 +103,7 @@ if wtf_present:
 else:
     csrf = None

-calibre_db = db.CalibreDB()
+calibre_db = db.CalibreDB(app)

 web_server = WebServer()
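The change from `db.CalibreDB()` to `db.CalibreDB(app)` hands the application object to the database wrapper at construction time rather than wiring it up in a later init step. The sketch below uses hypothetical stand-in classes (not the real `cps` classes) purely to illustrate that shape:

```python
# Stand-ins for illustration only; the real classes live in cps/db.py.
class FakeApp:
    def __init__(self):
        self.extensions = {}

class FakeCalibreDB:
    def __init__(self, app=None):
        self.app = app
        if app is not None:
            # Bound and registered immediately, no separate init call needed.
            app.extensions['calibre_db'] = self

app = FakeApp()
calibre_db = FakeCalibreDB(app)
print(app.extensions['calibre_db'] is calibre_db)  # True
```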
@@ -142,9 +147,7 @@ def create_app():
     lm.anonymous_user = ub.Anonymous
     lm.session_protection = 'strong' if config.config_session == 1 else "basic"

-    db.CalibreDB.update_config(config)
-    db.CalibreDB.setup_db(config.config_calibre_dir, cli_param.settings_path)
-    calibre_db.init_db()
+    db.CalibreDB.update_config(config, config.config_calibre_dir, cli_param.settings_path)

     updater_thread.init_updater(config, web_server)
     # Perform dry run of updater and exit afterward
@@ -177,6 +180,7 @@ def create_app():
|
||||
app.secret_key = os.getenv('SECRET_KEY', config_sql.get_flask_session_key(ub.session))
|
||||
|
||||
web_server.init_app(app, config)
|
||||
from .cw_babel import babel, get_locale
|
||||
if hasattr(babel, "localeselector"):
|
||||
babel.init_app(app)
|
||||
babel.localeselector(get_locale)
|
||||
|
11
cps/about.py
@@ -23,10 +23,10 @@
import sys
import platform
import sqlite3
from importlib.metadata import metadata
from collections import OrderedDict

import flask
import jinja2
from flask_babel import gettext as _

from . import db, calibre_db, converter, uploader, constants, dep_check
@@ -41,17 +41,18 @@ req = dep_check.load_dependencies(False)
opt = dep_check.load_dependencies(True)
for i in (req + opt):
    modules[i[1]] = i[0]
modules['Jinja2'] = jinja2.__version__
modules['pySqlite'] = sqlite3.version
modules['Jinja2'] = metadata("jinja2")["Version"]
if sys.version_info < (3, 12):
    modules['pySqlite'] = sqlite3.version
modules['SQLite'] = sqlite3.sqlite_version
sorted_modules = OrderedDict((sorted(modules.items(), key=lambda x: x[0].casefold())))


def collect_stats():
    if constants.NIGHTLY_VERSION[0] == "$Format:%H$":
        calibre_web_version = constants.STABLE_VERSION['version'].replace("b", " Beta")
        calibre_web_version = constants.STABLE_VERSION.replace("b", " Beta")
    else:
        calibre_web_version = (constants.STABLE_VERSION['version'].replace("b", " Beta") + ' - '
        calibre_web_version = (constants.STABLE_VERSION.replace("b", " Beta") + ' - '
                               + constants.NIGHTLY_VERSION[0].replace('%', '%%') + ' - '
                               + constants.NIGHTLY_VERSION[1].replace('%', '%%'))
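The `casefold` key used for `sorted_modules` above matters because plain `sorted` orders all uppercase names before lowercase ones in ASCII. A quick stdlib illustration (the package names are arbitrary examples):

```python
names = ['Werkzeug', 'babel', 'Flask', 'jinja2']

# Default string order groups by case first: uppercase sorts before lowercase.
print(sorted(names))                    # ['Flask', 'Werkzeug', 'babel', 'jinja2']

# casefold gives the case-insensitive ordering the About page displays.
print(sorted(names, key=str.casefold))  # ['babel', 'Flask', 'jinja2', 'Werkzeug']
```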
111
cps/admin.py
@@ -32,7 +32,8 @@ from datetime import time as datetime_time
from functools import wraps
from urllib.parse import urlparse

from flask import Blueprint, flash, redirect, url_for, abort, request, make_response, send_from_directory, g, Response
from flask import Blueprint, flash, redirect, url_for, abort, request, make_response, \
    send_from_directory, g, jsonify
from markupsafe import Markup
from .cw_login import current_user
from flask_babel import gettext as _
@@ -52,8 +53,9 @@ from .gdriveutils import is_gdrive_ready, gdrive_support
from .render_template import render_title_template, get_sidebar_config
from .services.worker import WorkerThread
from .usermanagement import user_login_required
from .babel import get_available_translations, get_available_locale, get_user_locale_language
from .cw_babel import get_available_translations, get_available_locale, get_user_locale_language
from . import debug_info
from .string_helper import strip_whitespaces

log = logger.create()

@@ -117,7 +119,7 @@ def before_request():
    g.allow_upload = config.config_uploading
    g.current_theme = config.config_theme
    g.config_authors_max = config.config_authors_max
    if '/static/' not in request.path and not config.db_configured and \
    if ('/static/' not in request.path and not config.db_configured and
            request.endpoint not in ('admin.ajax_db_config',
                                     'admin.simulatedbchange',
                                     'admin.db_configuration',
@@ -125,7 +127,7 @@ def before_request():
                                     'web.login_post',
                                     'web.logout',
                                     'admin.load_dialogtexts',
                                     'admin.ajax_pathchooser'):
                                     'admin.ajax_pathchooser')):
        return redirect(url_for('admin.db_configuration'))


@@ -143,7 +145,6 @@ def shutdown():
    show_text = {}
    if task in (0, 1):  # valid commandos received
        # close all database connections
        calibre_db.dispose()
        ub.dispose()

        if task == 0:
@@ -305,7 +306,15 @@ def edit_user_table():
        .group_by(text('books_tags_link.tag')) \
        .order_by(db.Tags.name).all()
    if config.config_restricted_column:
        custom_values = calibre_db.session.query(db.cc_classes[config.config_restricted_column]).all()
        try:
            custom_values = calibre_db.session.query(db.cc_classes[config.config_restricted_column]).all()
        except (KeyError, AttributeError, IndexError):
            custom_values = []
            log.error("Custom Column No.{} does not exist in calibre database".format(
                config.config_restricted_column))
            flash(_("Custom Column No.%(column)d does not exist in calibre database",
                    column=config.config_restricted_column),
                  category="error")
    else:
        custom_values = []
    if not config.config_anonbrowse:
@@ -370,10 +379,7 @@ def list_users():
        user.default = get_user_locale_language(user.default_language)

    table_entries = {'totalNotFiltered': total_count, 'total': filtered_count, "rows": users}
    js_list = json.dumps(table_entries, cls=db.AlchemyEncoder)
    response = make_response(js_list)
    response.headers["Content-Type"] = "application/json; charset=utf-8"
    return response
    return make_response(json.dumps(table_entries, cls=db.AlchemyEncoder))


@admi.route("/ajax/deleteuser", methods=['POST'])
@@ -392,7 +398,7 @@ def delete_user():
    success = list()
    if not users:
        log.error("User not found")
        return Response(json.dumps({'type': "danger", 'message': _("User not found")}), mimetype='application/json')
        return make_response(jsonify(type="danger", message=_("User not found")))
    for user in users:
        try:
            message = _delete_user(user)
@@ -408,7 +414,7 @@ def delete_user():
    log.info("Users {} deleted".format(user_ids))
    success = [{'type': "success", 'message': _("{} users deleted successfully").format(count)}]
    success.extend(errors)
    return Response(json.dumps(success), mimetype='application/json')
    return make_response(jsonify(success))


@admi.route("/ajax/getlocale")
@@ -420,7 +426,7 @@ def table_get_locale():
    current_locale = get_locale()
    for loc in locale:
        ret.append({'value': str(loc), 'text': loc.get_language_name(current_locale)})
    return json.dumps(ret)
    return json.dumps(sorted(ret, key=lambda x: x['text']))


@admi.route("/ajax/getdefaultlanguage")
@@ -432,7 +438,7 @@ def table_get_default_lang():
    ret.append({'value': 'all', 'text': _('Show All')})
    for lang in languages:
        ret.append({'value': lang.lang_code, 'text': lang.name})
    return json.dumps(ret)
    return json.dumps(sorted(ret, key=lambda x: x['text']))
@admi.route("/ajax/editlistusers/<param>", methods=['POST'])
@@ -463,9 +469,9 @@ def edit_list_user(param):
            if 'value[]' in vals:
                setattr(user, param, prepare_tags(user, vals['action'][0], param, vals['value[]']))
            else:
                setattr(user, param, vals['value'].strip())
                setattr(user, param, strip_whitespaces(vals['value']))
        else:
            vals['value'] = vals['value'].strip()
            vals['value'] = strip_whitespaces(vals['value'])
            if param == 'name':
                if user.name == "Guest":
                    raise Exception(_("Guest Name can't be changed"))
@@ -490,10 +496,10 @@ def edit_list_user(param):
                    if not ub.session.query(ub.User). \
                            filter(ub.User.role.op('&')(constants.ROLE_ADMIN) == constants.ROLE_ADMIN,
                                   ub.User.id != user.id).count():
                        return Response(
                            json.dumps([{'type': "danger",
                        return make_response(
                            jsonify([{'type': "danger",
                                         'message': _("No admin user remaining, can't remove admin role",
                                                      nick=user.name)}]), mimetype='application/json')
                                                      nick=user.name)}]))
                    user.role &= ~value
                else:
                    raise Exception(_("Value has to be true or false"))
@@ -566,7 +572,7 @@ def update_view_configuration():
    _config_string(to_save, "config_calibre_web_title")
    _config_string(to_save, "config_columns_to_ignore")
    if _config_string(to_save, "config_title_regex"):
        calibre_db.update_title_sort(config)
        calibre_db.create_functions(config)

    if not check_valid_read_column(to_save.get("config_read_column", "0")):
        flash(_("Invalid Read Column"), category="error")
@@ -690,7 +696,7 @@ def delete_domain():
def list_domain(allow):
    answer = ub.session.query(ub.Registration).filter(ub.Registration.allow == allow).all()
    json_dumps = json.dumps([{"domain": r.domain.replace('%', '*').replace('_', '?'), "id": r.id} for r in answer])
    js = json.dumps(json_dumps.replace('"', "'")).lstrip('"').strip('"')
    js = json.dumps(json_dumps.replace('"', "'")).strip('"')
    response = make_response(js.replace("'", '"'))
    response.headers["Content-Type"] = "application/json; charset=utf-8"
    return response
@@ -939,7 +945,7 @@ def do_full_kobo_sync(userid):
    count = ub.session.query(ub.KoboSyncedBooks).filter(userid == ub.KoboSyncedBooks.user_id).delete()
    message = _("{} sync entries deleted").format(count)
    ub.session_commit(message)
    return Response(json.dumps([{"type": "success", "message": message}]), mimetype='application/json')
    return make_response(jsonify(type="success", message=message))
def check_valid_read_column(column):
@@ -981,8 +987,14 @@ def prepare_tags(user, action, tags_name, id_list):
            raise Exception(_("Tag not found"))
        new_tags_list = [x.name for x in tags]
    else:
        tags = calibre_db.session.query(db.cc_classes[config.config_restricted_column]) \
            .filter(db.cc_classes[config.config_restricted_column].id.in_(id_list)).all()
        try:
            tags = calibre_db.session.query(db.cc_classes[config.config_restricted_column]) \
                .filter(db.cc_classes[config.config_restricted_column].id.in_(id_list)).all()
        except (KeyError, AttributeError, IndexError):
            log.error("Custom Column No.{} does not exist in calibre database".format(
                config.config_restricted_column))
            raise Exception(_("Custom Column No.%(column)d does not exist in calibre database",
                              column=config.config_restricted_column))
        new_tags_list = [x.value for x in tags]
    saved_tags_list = user.__dict__[tags_name].split(",") if len(user.__dict__[tags_name]) else []
    if action == "remove":
@@ -1100,7 +1112,7 @@ def _config_checkbox_int(to_save, x):


def _config_string(to_save, x):
    return config.set_from_dictionary(to_save, x, lambda y: y.strip().strip(u'\u200B\u200C\u200D\ufeff') if y else y)
    return config.set_from_dictionary(to_save, x, lambda y: strip_whitespaces(y) if y else y)


def _configuration_gdrive_helper(to_save):
@@ -1250,7 +1262,7 @@ def _configuration_ldap_helper(to_save):
@admin_required
def simulatedbchange():
    db_change, db_valid = _db_simulate_change()
    return Response(json.dumps({"change": db_change, "valid": db_valid}), mimetype='application/json')
    return make_response(jsonify(change=db_change, valid=db_valid))


@admi.route("/admin/user/new", methods=["GET", "POST"])
@@ -1311,9 +1323,9 @@ def update_mailsettings():
    if to_save.get("mail_password_e", ""):
        _config_string(to_save, "mail_password_e")
    _config_int(to_save, "mail_size", lambda y: int(y) * 1024 * 1024)
    config.mail_server = to_save.get('mail_server', "").strip()
    config.mail_from = to_save.get('mail_from', "").strip()
    config.mail_login = to_save.get('mail_login', "").strip()
    config.mail_server = strip_whitespaces(to_save.get('mail_server', ""))
    config.mail_from = strip_whitespaces(to_save.get('mail_from', ""))
    config.mail_login = strip_whitespaces(to_save.get('mail_login', ""))
    try:
        config.save()
    except (OperationalError, InvalidRequestError) as e:
@@ -1678,10 +1690,10 @@ def cancel_task():
def _db_simulate_change():
    param = request.form.to_dict()
    to_save = dict()
    to_save['config_calibre_dir'] = re.sub(r'[\\/]metadata\.db$',
    to_save['config_calibre_dir'] = strip_whitespaces(re.sub(r'[\\/]metadata\.db$',
                                           '',
                                           param['config_calibre_dir'],
                                           flags=re.IGNORECASE).strip()
                                           flags=re.IGNORECASE))
    db_valid, db_change = calibre_db.check_valid_db(to_save["config_calibre_dir"],
                                                    ub.app_DB_path,
                                                    config.config_calibre_uuid)
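The `strip_whitespaces` helper from `cps.string_helper` is not shown in this diff; a plausible sketch of the combined path normalization in `_db_simulate_change`, assuming the helper removes ordinary whitespace plus the zero-width characters the old inline lambda stripped:

```python
import re

ZERO_WIDTH = '\u200B\u200C\u200D\ufeff'

def strip_whitespaces(value):
    # Assumed behaviour of cps.string_helper.strip_whitespaces: trim regular
    # whitespace, then trim zero-width/BOM characters pasted in from browsers.
    return value.strip().strip(ZERO_WIDTH)

def normalize_calibre_dir(path):
    # Drop a trailing "metadata.db" (either slash style, any case), then trim.
    return strip_whitespaces(re.sub(r'[\\/]metadata\.db$', '', path, flags=re.IGNORECASE))

print(normalize_calibre_dir('/srv/books/metadata.db'))  # /srv/books
print(normalize_calibre_dir('C:\\Books\\Metadata.DB'))  # C:\Books
```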
@@ -1715,14 +1727,19 @@ def _db_configuration_update_helper():
            db_change = True
        except Exception as ex:
            return _db_configuration_result('{}'.format(ex), gdrive_error)

    if db_change or not db_valid or not config.db_configured \
            or config.config_calibre_dir != to_save["config_calibre_dir"]:
    config.config_calibre_split = to_save.get('config_calibre_split', 0) == "on"
    if config.config_calibre_split:
        split_dir = to_save.get("config_calibre_split_dir")
        if not os.path.exists(split_dir):
            return _db_configuration_result(_("Books path not valid"), gdrive_error)
        else:
            _config_string(to_save, "config_calibre_split_dir")
    if (db_change or not db_valid or not config.db_configured
            or config.config_calibre_dir != to_save["config_calibre_dir"]):
        if not os.path.exists(metadata_db) or not to_save['config_calibre_dir']:
            return _db_configuration_result(_('DB Location is not Valid, Please Enter Correct Path'), gdrive_error)
        else:
            calibre_db.setup_db(to_save['config_calibre_dir'], ub.app_DB_path)
            config.store_calibre_uuid(calibre_db, db.Library_Id)
        # if db changed -> delete shelfs, delete download books, delete read books, kobo sync...
        if db_change:
            log.info("Calibre Database changed, all Calibre-Web info related to old Database gets deleted")
@@ -1736,13 +1753,20 @@ def _db_configuration_update_helper():
            ub.session.query(ub.KoboSyncedBooks).delete()
            helper.delete_thumbnail_cache()
            ub.session_commit()
            # deleted visibilities based on custom column and tags
            config.config_restricted_column = 0
            config.config_denied_tags = ""
            config.config_allowed_tags = ""
            config.config_columns_to_ignore = ""
            config.config_denied_column_value = ""
            config.config_allowed_column_value = ""
            config.config_read_column = 0
        _config_string(to_save, "config_calibre_dir")
        calibre_db.update_config(config)
        calibre_db.update_config(config, config.config_calibre_dir, ub.app_DB_path)
        config.store_calibre_uuid(calibre_db, db.Library_Id)
        if not os.access(os.path.join(config.config_calibre_dir, "metadata.db"), os.W_OK):
            flash(_("DB is not Writeable"), category="warning")
    _config_string(to_save, "config_calibre_split_dir")
    config.config_calibre_split = to_save.get('config_calibre_split', 0) == "on"
    calibre_db.update_config(config)
    calibre_db.update_config(config, config.config_calibre_dir, ub.app_DB_path)
    config.save()
    return _db_configuration_result(None, gdrive_error)

@@ -1775,9 +1799,8 @@ def _configuration_update_helper():

    if "config_upload_formats" in to_save:
        to_save["config_upload_formats"] = ','.join(
            helper.uniq([x.lstrip().rstrip().lower() for x in to_save["config_upload_formats"].split(',')]))
            helper.uniq([x.strip().lower() for x in to_save["config_upload_formats"].split(',')]))
        _config_string(to_save, "config_upload_formats")
        # constants.EXTENSIONS_UPLOAD = config.config_upload_formats.split(',')

    _config_string(to_save, "config_calibre")
    _config_string(to_save, "config_binariesdir")
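The upload-format normalization above relies on `helper.uniq`, which this diff does not show; a sketch assuming it is an order-preserving de-duplication:

```python
def uniq(items):
    # Assumed behaviour of cps.helper.uniq: drop duplicates, keep first-seen order.
    # dict.fromkeys preserves insertion order in Python 3.7+.
    return list(dict.fromkeys(items))

raw = "EPUB, pdf ,epub,Mobi"
normalized = ','.join(uniq([x.strip().lower() for x in raw.split(',')]))
print(normalized)  # epub,pdf,mobi
```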
@@ -1871,7 +1894,7 @@ def _configuration_result(error_flash=None, reboot=False):
        resp['result'] = [{'type': "success", 'message': _("Calibre-Web configuration updated")}]
    resp['reboot'] = reboot
    resp['config_upload'] = config.config_upload_formats
    return Response(json.dumps(resp), mimetype='application/json')
    return make_response(jsonify(resp))


def _db_configuration_result(error_flash=None, gdrive_error=None):
@@ -2076,7 +2099,7 @@ def _handle_edit_user(to_save, content, languages, translations, kobo_support):


def extract_user_data_from_field(user, field):
    match = re.search(field + r"=([@\.\d\s\w-]+)", user, re.IGNORECASE | re.UNICODE)
    match = re.search(field + r"=(.*?)($|(?<!\\),)", user, re.IGNORECASE | re.UNICODE)
    if match:
        return match.group(1)
    else:
@@ -2084,7 +2107,7 @@ def extract_user_data_from_field(user, field):


def extract_dynamic_field_from_filter(user, filtr):
    match = re.search("([a-zA-Z0-9-]+)=%s", filtr, re.IGNORECASE | re.UNICODE)
    match = re.search(r"([a-zA-Z0-9-]+)=%s", filtr, re.IGNORECASE | re.UNICODE)
    if match:
        return match.group(1)
    else:
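The old `extract_user_data_from_field` pattern `=([@\.\d\s\w-]+)` stopped at any character outside its whitelist, so values containing e.g. `+` were truncated; the new lazy pattern takes everything up to the end of the string or the first comma not escaped by a backslash. A self-contained reproduction (the LDAP-style input string is a made-up example):

```python
import re

def extract_user_data_from_field(user, field):
    # Same pattern as the new code: lazy match up to end-of-string or an
    # unescaped comma (the (?<!\\) lookbehind skips "\,").
    match = re.search(field + r"=(.*?)($|(?<!\\),)", user, re.IGNORECASE | re.UNICODE)
    return match.group(1) if match else None

ldap_entry = "mail=jane.doe@example.org,cn=Doe\\, Jane,uid=jdoe"
print(extract_user_data_from_field(ldap_entry, "mail"))  # jane.doe@example.org
print(extract_user_data_from_field(ldap_entry, "cn"))    # Doe\, Jane
```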
147
cps/audio.py
Normal file
@@ -0,0 +1,147 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2024 Ozzieisaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import mutagen
import base64
from . import cover, logger

from cps.constants import BookMeta

log = logger.create()


def get_audio_file_info(tmp_file_path, original_file_extension, original_file_name, no_cover_processing):
    tmp_cover_name = None
    audio_file = mutagen.File(tmp_file_path)
    comments = None
    if original_file_extension in [".mp3", ".wav", ".aiff"]:
        cover_data = list()
        for key, val in audio_file.tags.items():
            if key.startswith("APIC:"):
                cover_data.append(val)
            if key.startswith("COMM:"):
                comments = val.text[0]
        title = audio_file.tags.get('TIT2').text[0] if "TIT2" in audio_file.tags else None
        author = audio_file.tags.get('TPE1').text[0] if "TPE1" in audio_file.tags else None
        if author is None:
            author = audio_file.tags.get('TPE2').text[0] if "TPE2" in audio_file.tags else None
        tags = audio_file.tags.get('TCON').text[0] if "TCON" in audio_file.tags else None  # Genre
        series = audio_file.tags.get('TALB').text[0] if "TALB" in audio_file.tags else None  # Album
        series_id = audio_file.tags.get('TRCK').text[0] if "TRCK" in audio_file.tags else None  # track no.
        publisher = audio_file.tags.get('TPUB').text[0] if "TPUB" in audio_file.tags else None
        pubdate = str(audio_file.tags.get('TDRL').text[0]) if "TDRL" in audio_file.tags else None
        if not pubdate:
            pubdate = str(audio_file.tags.get('TDRC').text[0]) if "TDRC" in audio_file.tags else None
        if not pubdate:
            pubdate = str(audio_file.tags.get('TDOR').text[0]) if "TDOR" in audio_file.tags else None
        if cover_data and not no_cover_processing:
            cover_info = cover_data[0]
            for dat in cover_data:
                if dat.type == mutagen.id3.PictureType.COVER_FRONT:
                    cover_info = dat
                    break
            tmp_cover_name = cover.cover_processing(tmp_file_path, cover_info.data, "." + cover_info.mime[-3:])
    elif original_file_extension in [".ogg", ".flac", ".opus", ".ogv"]:
        title = audio_file.tags.get('TITLE')[0] if "TITLE" in audio_file else None
        author = audio_file.tags.get('ARTIST')[0] if "ARTIST" in audio_file else None
        comments = audio_file.tags.get('COMMENTS')[0] if "COMMENTS" in audio_file else None
        tags = audio_file.tags.get('GENRE')[0] if "GENRE" in audio_file else None  # Genre
        series = audio_file.tags.get('ALBUM')[0] if "ALBUM" in audio_file else None
        series_id = audio_file.tags.get('TRACKNUMBER')[0] if "TRACKNUMBER" in audio_file else None
        publisher = audio_file.tags.get('LABEL')[0] if "LABEL" in audio_file else None
        pubdate = audio_file.tags.get('DATE')[0] if "DATE" in audio_file else None
        cover_data = audio_file.tags.get('METADATA_BLOCK_PICTURE')
        if not no_cover_processing:
            if cover_data:
                cover_info = mutagen.flac.Picture(base64.b64decode(cover_data[0]))
                tmp_cover_name = cover.cover_processing(tmp_file_path, cover_info.data, "." + cover_info.mime[-3:])
            if hasattr(audio_file, "pictures"):
                cover_info = audio_file.pictures[0]
                for dat in audio_file.pictures:
                    if dat.type == mutagen.id3.PictureType.COVER_FRONT:
                        cover_info = dat
                        break
                tmp_cover_name = cover.cover_processing(tmp_file_path, cover_info.data, "." + cover_info.mime[-3:])
    elif original_file_extension in [".aac"]:
        title = audio_file.tags.get('Title').value if "Title" in audio_file else None
        author = audio_file.tags.get('Artist').value if "Artist" in audio_file else None
        comments = audio_file.tags.get('Comment').value if "Comment" in audio_file else None
        tags = audio_file.tags.get('Genre').value if "Genre" in audio_file else None
        series = audio_file.tags.get('Album').value if "Album" in audio_file else None
        series_id = audio_file.tags.get('Track').value if "Track" in audio_file else None
        publisher = audio_file.tags.get('Label').value if "Label" in audio_file else None
        pubdate = audio_file.tags.get('Year').value if "Year" in audio_file else None
        cover_data = audio_file.tags['Cover Art (Front)']
        if cover_data and not no_cover_processing:
            tmp_cover_name = tmp_file_path + '.jpg'
            with open(tmp_cover_name, "wb") as cover_file:
                cover_file.write(cover_data.value.split(b"\x00", 1)[1])
    elif original_file_extension in [".asf"]:
        title = audio_file.tags.get('Title')[0].value if "Title" in audio_file else None
        author = audio_file.tags.get('Artist')[0].value if "Artist" in audio_file else None
        comments = audio_file.tags.get('Comments')[0].value if "Comments" in audio_file else None
        tags = audio_file.tags.get('Genre')[0].value if "Genre" in audio_file else None
        series = audio_file.tags.get('Album')[0].value if "Album" in audio_file else None
        series_id = audio_file.tags.get('Track')[0].value if "Track" in audio_file else None
        publisher = audio_file.tags.get('Label')[0].value if "Label" in audio_file else None
        pubdate = audio_file.tags.get('Year')[0].value if "Year" in audio_file else None
        cover_data = audio_file.tags.get('WM/Picture', None)
        if cover_data and not no_cover_processing:
            tmp_cover_name = tmp_file_path + '.jpg'
            with open(tmp_cover_name, "wb") as cover_file:
                cover_file.write(cover_data[0].value)
    elif original_file_extension in [".mp4", ".m4a", ".m4b"]:
        title = audio_file.tags.get('©nam')[0] if "©nam" in audio_file.tags else None
        author = audio_file.tags.get('©ART')[0] if "©ART" in audio_file.tags else None
        comments = audio_file.tags.get('©cmt')[0] if "©cmt" in audio_file.tags else None
        tags = audio_file.tags.get('©gen')[0] if "©gen" in audio_file.tags else None
        series = audio_file.tags.get('©alb')[0] if "©alb" in audio_file.tags else None
        series_id = str(audio_file.tags.get('trkn')[0][0]) if "trkn" in audio_file.tags else None
        publisher = ""
        pubdate = audio_file.tags.get('©day')[0] if "©day" in audio_file.tags else None
        cover_data = audio_file.tags.get('covr', None)
        if cover_data and not no_cover_processing:
            cover_type = None
            for c in cover_data:
                if c.imageformat == mutagen.mp4.AtomDataType.JPEG:
                    cover_type = ".jpg"
                    cover_bin = c
                    break
                elif c.imageformat == mutagen.mp4.AtomDataType.PNG:
                    cover_type = ".png"
                    cover_bin = c
                    break
            if cover_type:
                tmp_cover_name = cover.cover_processing(tmp_file_path, cover_bin, cover_type)
            else:
                log.error("Unknown covertype in file {} ".format(original_file_name))

    return BookMeta(
        file_path=tmp_file_path,
        extension=original_file_extension,
        title=title or original_file_name,
        author="Unknown" if author is None else author,
        cover=tmp_cover_name,
        description="" if comments is None else comments,
        tags="" if tags is None else tags,
        series="" if series is None else series,
        series_id="1" if series_id is None else series_id.split("/")[0],
        languages="",
        publisher="" if publisher is None else publisher,
        pubdate="" if pubdate is None else pubdate,
        identifiers=[],
    )
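The `series_id.split("/")[0]` fallback in the `BookMeta` construction above can be exercised in isolation; a small sketch with hypothetical tag values (the helper name is ours, not the module's):

```python
def normalize_track_number(series_id):
    # ID3 TRCK / Vorbis TRACKNUMBER tags often look like "3/12"
    # (track 3 of 12); only the track part is kept, and a missing
    # tag falls back to "1", mirroring the BookMeta defaults above.
    return "1" if series_id is None else series_id.split("/")[0]

print(normalize_track_number("3/12"))  # 3
print(normalize_track_number(None))    # 1
```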
90
cps/basic.py
Normal file
@@ -0,0 +1,90 @@
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 OzzieIsaacs, cervinko, jkrehm, bodybybuddha, ok11,
#                         andy29485, idalin, Kyosfonica, wuqi, Kennyl, lemmsh,
#                         falgh1, grunjol, csitko, ytils, xybydy, trasba, vrabe,
#                         ruben-herold, marblepebble, JackED42, SiphonSquirrel,
#                         apetresc, nanu-c, mutschler, carderne
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.


from cps.pagination import Pagination
from flask import Blueprint
from flask_babel import gettext as _
from flask_babel import get_locale
from flask import request, redirect, url_for

from . import logger, isoLanguages
from . import db, config
from . import calibre_db
from .usermanagement import login_required_if_no_ano
from .render_template import render_title_template
from .web import get_sort_function

try:
    from natsort import natsorted as sort
except ImportError:
    sort = sorted  # Just use regular sort then, may cause issues with badly named pages in cbz/cbr files

basic = Blueprint('basic', __name__)

log = logger.create()


@basic.route("/basic", methods=["GET"])
@login_required_if_no_ano
def index():
    term = request.args.get("query", "")  # default to showing all books
    limit = 15
    page = int(request.args.get("page") or 1)
    off = (page - 1) * limit
    order = get_sort_function("stored", "search")
    join = db.books_series_link, db.Books.id == db.books_series_link.c.book, db.Series
    entries, result_count, pagination = calibre_db.get_search_results(term,
                                                                      config,
                                                                      off,
                                                                      order,
                                                                      limit,
                                                                      *join)
    return render_title_template('basic_index.html',
                                 searchterm=term,
                                 pagination=pagination,
                                 query=term,
                                 adv_searchterm=term,
                                 entries=entries,
                                 result_count=result_count,
                                 title=_("Search"),
                                 page="search",
                                 order=order[1])


@basic.route("/basic_book/<int:book_id>")
@login_required_if_no_ano
def show_book(book_id):
    entries = calibre_db.get_book_read_archived(book_id, config.config_read_column, allow_show_archived=True)
    if entries:
        entry = entries[0]
        for lang_index in range(0, len(entry.languages)):
            entry.languages[lang_index].language_name = isoLanguages.get_language_name(get_locale(), entry.languages[
                lang_index].lang_code)
        entry.ordered_authors = calibre_db.order_authors([entry])

        return render_title_template('basic_detail.html',
                                     entry=entry,
                                     is_xhr=request.headers.get('X-Requested-With') == 'XMLHttpRequest',
                                     title=entry.title,
                                     page="book")
    else:
        log.debug("Selected book is unavailable. File does not exist or is not accessible")
        return redirect(url_for("basic.index"))
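The offset arithmetic in `index()` above is the standard page-to-offset conversion for a 15-per-page listing; isolated for clarity (the helper name is ours, not the module's):

```python
def page_offset(page_arg, limit=15):
    # Mirrors the view: a missing or empty "page" query argument falls back to 1,
    # and page N starts at row (N - 1) * limit.
    page = int(page_arg or 1)
    return (page - 1) * limit

print(page_offset(None))  # 0
print(page_offset("3"))   # 30
```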
@@ -29,8 +29,8 @@ from .constants import DEFAULT_SETTINGS_FILE, DEFAULT_GDRIVE_FILE

def version_info():
    if _NIGHTLY_VERSION[1].startswith('$Format'):
        return "Calibre-Web version: %s - unknown git-clone" % _STABLE_VERSION['version'].replace("b", " Beta")
    return "Calibre-Web version: %s -%s" % (_STABLE_VERSION['version'].replace("b", " Beta"), _NIGHTLY_VERSION[1])
        return "Calibre-Web version: %s - unknown git-clone" % _STABLE_VERSION.replace("b", " Beta")
    return "Calibre-Web version: %s -%s" % (_STABLE_VERSION.replace("b", " Beta"), _NIGHTLY_VERSION[1])


class CliParameter(object):
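The hunk above changes `_STABLE_VERSION` from a dict (`['version']`) to a plain string; the `"b" → " Beta"` substitution and the `$Format` guard can be sketched standalone (both constant values below are made up for illustration):

```python
_STABLE_VERSION = "0.6.24b"              # hypothetical version string
_NIGHTLY_VERSION = {1: "$Format:%H$"}    # unexpanded git export attribute

def version_info():
    # An unexpanded "$Format..." marker means the source was not a git checkout.
    if _NIGHTLY_VERSION[1].startswith('$Format'):
        return "Calibre-Web version: %s - unknown git-clone" % _STABLE_VERSION.replace("b", " Beta")
    return "Calibre-Web version: %s -%s" % (_STABLE_VERSION.replace("b", " Beta"), _NIGHTLY_VERSION[1])

print(version_info())  # Calibre-Web version: 0.6.24 Beta - unknown git-clone
```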
27
cps/comic.py
@@ -90,7 +90,7 @@ def _extract_cover_from_archive(original_file_extension, tmp_file_name, rar_exec
                if len(ext) > 1:
                    extension = ext[1].lower()
                    if extension in cover.COVER_EXTENSIONS:
                        cover_data = cf.read([name])
                        cover_data = cf.read(name)
                        break
        except Exception as ex:
            log.error('Rarfile failed with error: {}'.format(ex))
@@ -109,13 +109,13 @@ def _extract_cover_from_archive(original_file_extension, tmp_file_name, rar_exec
    return cover_data, extension


def _extract_cover(tmp_file_name, original_file_extension, rar_executable):
def _extract_cover(tmp_file_path, original_file_extension, rar_executable):
    cover_data = extension = None
    if use_comic_meta:
        try:
            archive = ComicArchive(tmp_file_name, rar_exe_path=rar_executable)
            archive = ComicArchive(tmp_file_path, rar_exe_path=rar_executable)
        except TypeError:
            archive = ComicArchive(tmp_file_name)
            archive = ComicArchive(tmp_file_path)
        name_list = archive.getPageNameList if hasattr(archive, "getPageNameList") else archive.get_page_name_list
        for index, name in enumerate(name_list()):
            ext = os.path.splitext(name)
@@ -126,11 +126,11 @@ def _extract_cover(tmp_file_name, original_file_extension, rar_executable):
                cover_data = get_page(index)
                break
    else:
        cover_data, extension = _extract_cover_from_archive(original_file_extension, tmp_file_name, rar_executable)
    return cover.cover_processing(tmp_file_name, cover_data, extension)
        cover_data, extension = _extract_cover_from_archive(original_file_extension, tmp_file_path, rar_executable)
    return cover.cover_processing(tmp_file_path, cover_data, extension)


def get_comic_info(tmp_file_path, original_file_name, original_file_extension, rar_executable):
def get_comic_info(tmp_file_path, original_file_name, original_file_extension, rar_executable, no_cover_processing):
    if use_comic_meta:
        try:
            archive = ComicArchive(tmp_file_path, rar_exe_path=rar_executable)
@@ -155,14 +155,17 @@ def get_comic_info(tmp_file_path, original_file_name, original_file_extension, r

        lang = loaded_metadata.language or ""
        loaded_metadata.language = isoLanguages.get_lang3(lang)

        if not no_cover_processing:
            cover_file = _extract_cover(tmp_file_path, original_file_extension, rar_executable)
        else:
            cover_file = None
        return BookMeta(
            file_path=tmp_file_path,
            extension=original_file_extension,
            title=loaded_metadata.title or original_file_name,
            author=" & ".join([credit["person"]
                               for credit in loaded_metadata.credits if credit["role"] == "Writer"]) or 'Unknown',
            cover=_extract_cover(tmp_file_path, original_file_extension, rar_executable),
            cover=cover_file,
            description=loaded_metadata.comments or "",
            tags="",
            series=loaded_metadata.series or "",
@@ -171,13 +174,17 @@ def get_comic_info(tmp_file_path, original_file_name, original_file_extension, r
            publisher="",
            pubdate="",
            identifiers=[])
    if not no_cover_processing:
        cover_file = _extract_cover(tmp_file_path, original_file_extension, rar_executable)
    else:
        cover_file = None

    return BookMeta(
        file_path=tmp_file_path,
        extension=original_file_extension,
        title=original_file_name,
        author='Unknown',
        cover=_extract_cover(tmp_file_path, original_file_extension, rar_executable),
|
||||
cover=cover_file,
|
||||
description="",
|
||||
tags="",
|
||||
series="",
|
||||
|
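The page loop in `_extract_cover` above decides whether an archive entry can serve as the cover by splitting its name and lower-casing the extension. A self-contained sketch of that check, with `COVER_EXTENSIONS` copied from `cps/cover.py` later in this diff (`first_cover_candidate` is a hypothetical helper name, not calibre-web's):

```python
import os

COVER_EXTENSIONS = ['.png', '.webp', '.bmp', '.jpg', '.jpeg']  # from cps/cover.py

def first_cover_candidate(names):
    """Return the first archive entry whose extension looks like an image."""
    for name in names:
        extension = os.path.splitext(name)[1].lower()
        if extension in COVER_EXTENSIONS:
            return name, extension
    return None, None

print(first_cover_candidate(["ComicInfo.xml", "Page_001.JPG", "page_002.jpg"]))
# -> ('Page_001.JPG', '.jpg')
```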
@@ -35,7 +35,7 @@ except ImportError:

 from . import constants, logger
 from .subproc_wrapper import process_wait
+from .string_helper import strip_whitespaces

 log = logger.create()
 _Base = declarative_base()
@@ -182,26 +182,6 @@ class _Settings(_Base):
 class ConfigSQL(object):
     # pylint: disable=no-member
     def __init__(self):
-        '''self.config_calibre_uuid = None
-        self.config_calibre_split_dir = None
-        self.dirty = None
-        self.config_logfile = None
-        self.config_upload_formats = None
-        self.mail_gmail_token = None
-        self.mail_server_type = None
-        self.mail_server = None
-        self.config_log_level = None
-        self.config_allowed_column_value = None
-        self.config_denied_column_value = None
-        self.config_allowed_tags = None
-        self.config_denied_tags = None
-        self.config_default_show = None
-        self.config_default_role = None
-        self.config_keyfile = None
-        self.config_certfile = None
-        self.config_rarfile_location = None
-        self.config_kepubifypath = None
-        self.config_binariesdir = None'''
         self.__dict__["dirty"] = list()

     def init_config(self, session, secret_key, cli):
@@ -288,19 +268,19 @@ class ConfigSQL(object):

     def list_denied_tags(self):
         mct = self.config_denied_tags or ""
-        return [t.strip() for t in mct.split(",")]
+        return [strip_whitespaces(t) for t in mct.split(",")]

     def list_allowed_tags(self):
         mct = self.config_allowed_tags or ""
-        return [t.strip() for t in mct.split(",")]
+        return [strip_whitespaces(t) for t in mct.split(",")]

     def list_denied_column_values(self):
         mct = self.config_denied_column_value or ""
-        return [t.strip() for t in mct.split(",")]
+        return [strip_whitespaces(t) for t in mct.split(",")]

     def list_allowed_column_values(self):
         mct = self.config_allowed_column_value or ""
-        return [t.strip() for t in mct.split(",")]
+        return [strip_whitespaces(t) for t in mct.split(",")]

     def get_log_level(self):
         return logger.get_level_name(self.config_log_level)
@@ -372,7 +352,7 @@ class ConfigSQL(object):
         db_file = os.path.join(self.config_calibre_dir, 'metadata.db')
         have_metadata_db = os.path.isfile(db_file)
         self.db_configured = have_metadata_db
         # constants.EXTENSIONS_UPLOAD = [x.lstrip().rstrip().lower() for x in self.config_upload_formats.split(',')]

         from . import cli_param
         if os.environ.get('FLASK_DEBUG'):
             logfile = logger.setup(logger.LOG_TO_STDOUT, logger.logging.DEBUG)
@@ -425,11 +405,13 @@ class ConfigSQL(object):
         return self.config_calibre_split_dir if self.config_calibre_split_dir else self.config_calibre_dir

     def store_calibre_uuid(self, calibre_db, Library_table):
+        from . import app
         try:
-            calibre_uuid = calibre_db.session.query(Library_table).one_or_none()
-            if self.config_calibre_uuid != calibre_uuid.uuid:
-                self.config_calibre_uuid = calibre_uuid.uuid
-                self.save()
+            with app.app_context():
+                calibre_uuid = calibre_db.session.query(Library_table).one_or_none()
+                if self.config_calibre_uuid != calibre_uuid.uuid:
+                    self.config_calibre_uuid = calibre_uuid.uuid
+                    self.save()
         except AttributeError:
             pass
@@ -503,6 +485,8 @@ def autodetect_calibre_binaries():
                         "C:\\program files(x86)\\calibre\\",
                         "C:\\program files(x86)\\calibre2\\",
                         "C:\\program files\\calibre2\\"]
+    elif sys.platform.startswith("freebsd"):
+        calibre_path = ["/usr/local/bin/"]
     else:
         calibre_path = ["/opt/calibre/"]
     for element in calibre_path:
@@ -533,6 +517,8 @@ def autodetect_unrar_binary():
     if sys.platform == "win32":
         calibre_path = ["C:\\program files\\WinRar\\unRAR.exe",
                        "C:\\program files(x86)\\WinRar\\unRAR.exe"]
+    elif sys.platform.startswith("freebsd"):
+        calibre_path = ["/usr/local/bin/unrar"]
     else:
         calibre_path = ["/usr/bin/unrar"]
     for element in calibre_path:
@@ -545,6 +531,8 @@ def autodetect_kepubify_binary():
     if sys.platform == "win32":
         calibre_path = ["C:\\program files\\kepubify\\kepubify-windows-64Bit.exe",
                        "C:\\program files(x86)\\kepubify\\kepubify-windows-64Bit.exe"]
+    elif sys.platform.startswith("freebsd"):
+        calibre_path = ["/usr/local/bin/kepubify"]
     else:
         calibre_path = ["/opt/kepubify/kepubify-linux-64bit", "/opt/kepubify/kepubify-linux-32bit"]
     for element in calibre_path:
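The four `list_*` helpers in `ConfigSQL` above all parse a comma-separated setting string; the diff only swaps `str.strip()` for the project's `strip_whitespaces` helper (from `cps/string_helper.py`, presumably to also cover exotic Unicode whitespace). A plain-Python sketch of the shared pattern — the stand-in `strip_whitespaces` below is an assumption, not the real helper:

```python
def strip_whitespaces(value):
    # Stand-in for cps/string_helper.strip_whitespaces (assumption: it strips
    # surrounding whitespace; str.strip() already handles most Unicode spaces).
    return value.strip()

def list_denied_tags(config_denied_tags):
    """Mirror ConfigSQL.list_denied_tags: split a comma-separated setting."""
    mct = config_denied_tags or ""
    return [strip_whitespaces(t) for t in mct.split(",")]

print(list_denied_tags(" scifi , horror,"))  # -> ['scifi', 'horror', '']
```

Note the trailing empty entry produced by a trailing comma — that behavior is unchanged by the diff, since only the per-item stripping function was replaced.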
@@ -19,9 +19,6 @@
 import sys
 import os
 from collections import namedtuple
-from sqlalchemy import __version__ as sql_version
-
-sqlalchemy_version2 = ([int(x) for x in sql_version.split('.')] >= [2, 0, 0])

 # APP_MODE - production, development, or test
 APP_MODE = os.environ.get('APP_MODE', 'production')
@@ -175,7 +172,7 @@ BookMeta = namedtuple('BookMeta', 'file_path, extension, title, author, cover, d
                       'series_id, languages, publisher, pubdate, identifiers')

 # python build process likes to have x.y.zbw -> b for beta and w a counting number
-STABLE_VERSION = {'version': '0.6.23b'}
+STABLE_VERSION = '0.6.25b'

 NIGHTLY_VERSION = dict()
 NIGHTLY_VERSION[0] = '$Format:%H$'
@@ -193,7 +190,7 @@ THUMBNAIL_TYPE_AUTHOR = 3
 COVER_THUMBNAIL_ORIGINAL = 0
 COVER_THUMBNAIL_SMALL = 1
 COVER_THUMBNAIL_MEDIUM = 2
-COVER_THUMBNAIL_LARGE = 3
+COVER_THUMBNAIL_LARGE = 4

 # clean-up the module namespace
 del sys, os, namedtuple
@@ -29,13 +29,14 @@ NO_JPEG_EXTENSIONS = ['.png', '.webp', '.bmp']
 COVER_EXTENSIONS = ['.png', '.webp', '.bmp', '.jpg', '.jpeg']


-def cover_processing(tmp_file_name, img, extension):
-    tmp_cover_name = os.path.join(os.path.dirname(tmp_file_name), 'cover.jpg')
+def cover_processing(tmp_file_path, img, extension):
+    # tmp_cover_name = os.path.join(os.path.dirname(tmp_file_name), 'cover.jpg')
+    tmp_cover_name = tmp_file_path + '.jpg'
     if extension in NO_JPEG_EXTENSIONS:
         if use_IM:
             with Image(blob=img) as imgc:
                 imgc.format = 'jpeg'
-                imgc.transform_colorspace('rgb')
+                imgc.transform_colorspace('srgb')
                 imgc.save(filename=tmp_cover_name)
                 return tmp_cover_name
         else:
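The `cover_processing` change above stops writing every extracted cover to a shared `cover.jpg` next to the upload and instead derives the name from the temp file itself — presumably so concurrent uploads in the same temp directory no longer collide on one filename. A sketch of the before/after paths (the path below is illustrative, not from calibre-web):

```python
import posixpath

# Illustrative temp path; calibre-web passes the uploaded file's temp name.
tmp_file_path = "/tmp/calibre_upload/abc123.cbz"

old_name = posixpath.join(posixpath.dirname(tmp_file_path), 'cover.jpg')
new_name = tmp_file_path + '.jpg'

print(old_name)  # -> /tmp/calibre_upload/cover.jpg      (one shared name per directory)
print(new_name)  # -> /tmp/calibre_upload/abc123.cbz.jpg (unique per uploaded file)
```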
22	cps/cw_advocate/__init__.py	Normal file
@@ -0,0 +1,22 @@
#
# Copyright 2015 Jordan Milne
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Source: https://github.com/JordanMilne/Advocate


from .adapters import ValidatingHTTPAdapter
from .api import *
from .addrvalidator import AddrValidator
from .exceptions import UnacceptableAddressException
48	cps/cw_advocate/adapters.py	Normal file
@@ -0,0 +1,48 @@
#
# Copyright 2015 Jordan Milne
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Source: https://github.com/JordanMilne/Advocate

from requests.adapters import HTTPAdapter, DEFAULT_POOLBLOCK

from .addrvalidator import AddrValidator
from .exceptions import ProxyDisabledException
from .poolmanager import ValidatingPoolManager


class ValidatingHTTPAdapter(HTTPAdapter):
    __attrs__ = HTTPAdapter.__attrs__ + ['_validator']

    def __init__(self, *args, **kwargs):
        self._validator = kwargs.pop('validator', None)
        if not self._validator:
            self._validator = AddrValidator()
        super().__init__(*args, **kwargs)

    def init_poolmanager(self, connections, maxsize, block=DEFAULT_POOLBLOCK,
                         **pool_kwargs):
        self._pool_connections = connections
        self._pool_maxsize = maxsize
        self._pool_block = block
        self.poolmanager = ValidatingPoolManager(
            num_pools=connections,
            maxsize=maxsize,
            block=block,
            validator=self._validator,
            **pool_kwargs
        )

    def proxy_manager_for(self, proxy, **proxy_kwargs):
        raise ProxyDisabledException("Proxies cannot be used with Advocate")
281	cps/cw_advocate/addrvalidator.py	Normal file
@@ -0,0 +1,281 @@
#
# Copyright 2015 Jordan Milne
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Source: https://github.com/JordanMilne/Advocate

import functools
import fnmatch
import ipaddress
import re

try:
    import netifaces
    HAVE_NETIFACES = True
except ImportError:
    netifaces = None
    HAVE_NETIFACES = False

from .exceptions import NameserverException, ConfigException


def canonicalize_hostname(hostname):
    """Lowercase and punycodify a hostname"""
    # We do the lowercasing after IDNA encoding because we only want to
    # lowercase the *ASCII* chars.
    # TODO: The differences between IDNA2003 and IDNA2008 might be relevant
    # to us, but both specs are damn confusing.
    return str(hostname.encode("idna").lower(), 'utf-8')
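`canonicalize_hostname` above leans entirely on the stdlib `idna` codec; a quick demonstration of the same call, copied verbatim from the function:

```python
def canonicalize_hostname(hostname):
    """Copy of the helper above: punycode-encode, then lowercase the ASCII."""
    return str(hostname.encode("idna").lower(), 'utf-8')

print(canonicalize_hostname("BÜCHER.example"))  # -> xn--bcher-kva.example
print(canonicalize_hostname("Example.COM"))     # -> example.com
```

The IDNA step matters for the hostname-blacklist matching later in this file: a glob like `*.bücher.example` and a looked-up hostname only compare equal once both are in canonical punycode form.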


def determine_local_addresses():
    """Get all IPs that refer to this machine according to netifaces"""
    if not HAVE_NETIFACES:
        raise ConfigException("Tried to determine local addresses, "
                              "but netifaces module was not importable")
    ips = []
    for interface in netifaces.interfaces():
        if_families = netifaces.ifaddresses(interface)
        for family_kind in {netifaces.AF_INET, netifaces.AF_INET6}:
            addrs = if_families.get(family_kind, [])
            for addr in (x.get("addr", "") for x in addrs):
                if family_kind == netifaces.AF_INET6:
                    # We can't do anything sensible with the scope here
                    addr = addr.split("%")[0]
                ips.append(ipaddress.ip_network(addr))
    return ips


def add_local_address_arg(func):
    """Add the "_local_addresses" kwarg if it's missing

    IMO this information shouldn't be cached between calls (what if one of the
    adapters got a new IP at runtime?) and we don't want each function to
    recalculate it. Just recalculate it if the caller didn't provide it for us.
    """
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        if "_local_addresses" not in kwargs:
            if self.autodetect_local_addresses:
                kwargs["_local_addresses"] = determine_local_addresses()
            else:
                kwargs["_local_addresses"] = []
        return func(self, *args, **kwargs)
    return wrapper


class AddrValidator:
    _6TO4_RELAY_NET = ipaddress.ip_network("192.88.99.0/24")
    # Just the well known prefix, DNS64 servers can set their own
    # prefix, but in practice most probably don't.
    _DNS64_WK_PREFIX = ipaddress.ip_network("64:ff9b::/96")
    DEFAULT_PORT_WHITELIST = {80, 8080, 443, 8443, 8000}

    def __init__(
            self,
            ip_blacklist=None,
            ip_whitelist=None,
            port_whitelist=None,
            port_blacklist=None,
            hostname_blacklist=None,
            allow_ipv6=False,
            allow_teredo=False,
            allow_6to4=False,
            allow_dns64=False,
            # Must be explicitly set to "False" if you don't want to try
            # detecting local interface addresses with netifaces.
            autodetect_local_addresses=True,
    ):
        if not port_blacklist and not port_whitelist:
            # An assortment of common HTTPS? ports.
            port_whitelist = self.DEFAULT_PORT_WHITELIST.copy()
        self.ip_blacklist = ip_blacklist or set()
        self.ip_whitelist = ip_whitelist or set()
        self.port_blacklist = port_blacklist or set()
        self.port_whitelist = port_whitelist or set()
        # TODO: ATM this can contain either regexes or globs that are converted
        # to regexes upon every check. Create a collection that automagically
        # converts them to regexes on insert?
        self.hostname_blacklist = hostname_blacklist or set()
        self.allow_ipv6 = allow_ipv6
        self.allow_teredo = allow_teredo
        self.allow_6to4 = allow_6to4
        self.allow_dns64 = allow_dns64
        self.autodetect_local_addresses = autodetect_local_addresses

    @add_local_address_arg
    def is_ip_allowed(self, addr_ip, _local_addresses=None):
        if not isinstance(addr_ip,
                          (ipaddress.IPv4Address, ipaddress.IPv6Address)):
            addr_ip = ipaddress.ip_address(addr_ip)

        # The whitelist should take precedence over the blacklist so we can
        # punch holes in blacklisted ranges
        if any(addr_ip in net for net in self.ip_whitelist):
            return True

        if any(addr_ip in net for net in self.ip_blacklist):
            return False

        if any(addr_ip in net for net in _local_addresses):
            return False

        if addr_ip.version == 4:
            if not addr_ip.is_private:
                # IPs for carrier-grade NAT. Seems weird that it doesn't set
                # `is_private`, but we need to check `not is_global`
                if not ipaddress.ip_network(addr_ip).is_global:
                    return False
        elif addr_ip.version == 6:
            # You'd better have a good reason for enabling IPv6
            # because Advocate's techniques don't work well without NAT.
            if not self.allow_ipv6:
                return False

            # v6 addresses can also map to IPv4 addresses! Tricky!
            v4_nested = []
            if addr_ip.ipv4_mapped:
                v4_nested.append(addr_ip.ipv4_mapped)
            # WTF IPv6? Why you gotta have a billion tunneling mechanisms?
            # XXX: Do we even really care about these? If we're tunneling
            # through public servers we shouldn't be able to access
            # addresses on our private network, right?
            if addr_ip.sixtofour:
                if not self.allow_6to4:
                    return False
                v4_nested.append(addr_ip.sixtofour)
            if addr_ip.teredo:
                if not self.allow_teredo:
                    return False
                # Check both the client *and* server IPs
                v4_nested.extend(addr_ip.teredo)
            if addr_ip in self._DNS64_WK_PREFIX:
                if not self.allow_dns64:
                    return False
                # When using the well-known prefix the last 4 bytes
                # are the IPv4 addr
                v4_nested.append(ipaddress.ip_address(addr_ip.packed[-4:]))

            if not all(self.is_ip_allowed(addr_v4) for addr_v4 in v4_nested):
                return False

            # fec0::*, apparently deprecated?
            if addr_ip.is_site_local:
                return False
        else:
            raise ValueError("Unsupported IP version(?): %r" % addr_ip)

        # 169.254.XXX.XXX, AWS uses these for autoconfiguration
        if addr_ip.is_link_local:
            return False
        # 127.0.0.1, ::1, etc.
        if addr_ip.is_loopback:
            return False
        if addr_ip.is_multicast:
            return False
        # 192.168.XXX.XXX, 10.XXX.XXX.XXX
        if addr_ip.is_private:
            return False
        # 255.255.255.255, ::ffff:XXXX:XXXX (v6->v4) mapping
        if addr_ip.is_reserved:
            return False
        # There's no reason to connect directly to a 6to4 relay
        if addr_ip in self._6TO4_RELAY_NET:
            return False
        # 0.0.0.0
        if addr_ip.is_unspecified:
            return False

        # It doesn't look bad, so... it must be ok!
        return True

    def _hostname_matches_pattern(self, hostname, pattern):
        # If they specified a string, just assume they only want basic globbing.
        # This stops people from not realizing they're dealing in REs and
        # not escaping their periods unless they specifically pass in an RE.
        # This has the added benefit of letting us sanely handle globbed
        # IDNs by default.
        if isinstance(pattern, str):
            # convert the glob to a punycode glob, then a regex
            pattern = fnmatch.translate(canonicalize_hostname(pattern))

        hostname = canonicalize_hostname(hostname)
        # Down the line the hostname may get treated as a null-terminated string
        # (as with `socket.getaddrinfo`.) Try to account for that.
        #
        #     >>> socket.getaddrinfo("example.com\x00aaaa", 80)
        #     [(2, 1, 6, '', ('93.184.216.34', 80)), [...]
        no_null_hostname = hostname.split("\x00")[0]

        return any(re.match(pattern, x.strip(".")) for x
                   in (no_null_hostname, hostname))

    def is_hostname_allowed(self, hostname):
        # Sometimes (like with "external" services that your IP has privileged
        # access to) you might not always know the IP range to blacklist access
        # to, or the `A` record might change without you noticing.
        # For e.x.: `foocorp.external.org`.
        #
        # Another option is doing something like:
        #
        #     for addrinfo in socket.getaddrinfo("foocorp.external.org", 80):
        #         global_validator.ip_blacklist.add(ip_address(addrinfo[4][0]))
        #
        # but that's not always a good idea if they're behind a third-party lb.
        for pattern in self.hostname_blacklist:
            if self._hostname_matches_pattern(hostname, pattern):
                return False
        return True

    @add_local_address_arg
    def is_addrinfo_allowed(self, addrinfo, _local_addresses=None):
        assert(len(addrinfo) == 5)
        # XXX: Do we care about any of the other elements? Guessing not.
        family, socktype, proto, canonname, sockaddr = addrinfo

        # The 4th element in addrinfo may be a tuple of either two or four
        # items, depending on whether we're dealing with IPv4 or v6
        if len(sockaddr) == 2:
            # v4
            ip, port = sockaddr
        elif len(sockaddr) == 4:
            # v6
            # XXX: what *are* `flow_info` and `scope_id`? Anything useful?
            # Seems like we can figure out all we need about the scope from
            # the `is_<x>` properties.
            ip, port, flow_info, scope_id = sockaddr
        else:
            raise ValueError("Unexpected addrinfo format %r" % sockaddr)

        # Probably won't help protect against SSRF, but might prevent our being
        # used to attack others' non-HTTP services. See
        # http://www.remote.org/jochen/sec/hfpa/
        if self.port_whitelist and port not in self.port_whitelist:
            return False
        if port in self.port_blacklist:
            return False

        if self.hostname_blacklist:
            if not canonname:
                raise NameserverException(
                    "addrinfo must contain the canon name to do blacklisting "
                    "based on hostname. Make sure you use the "
                    "`socket.AI_CANONNAME` flag, and that each record contains "
                    "the canon name. Your DNS server might also be garbage."
                )

            if not self.is_hostname_allowed(canonname):
                return False

        return self.is_ip_allowed(ip, _local_addresses=_local_addresses)
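The heart of `is_ip_allowed` above is a battery of `ipaddress` attribute checks. A condensed, stdlib-only sketch of the IPv4 path under the validator's default configuration (empty whitelists/blacklists, IPv6 disallowed) — `is_public_ipv4` is a hypothetical name for this illustration:

```python
import ipaddress

def is_public_ipv4(addr):
    """Condensed IPv4 path of AddrValidator.is_ip_allowed with defaults."""
    ip = ipaddress.ip_address(addr)
    if ip.version != 4:
        return False  # the real validator rejects IPv6 unless allow_ipv6=True
    # Carrier-grade NAT (100.64.0.0/10) is not `is_private`; check `is_global`
    if not ipaddress.ip_network(addr).is_global:
        return False
    if ip.is_link_local or ip.is_loopback or ip.is_multicast:
        return False
    if ip.is_private or ip.is_reserved or ip.is_unspecified:
        return False
    if ip in ipaddress.ip_network("192.88.99.0/24"):  # 6to4 relay net
        return False
    return True

assert not is_public_ipv4("127.0.0.1")    # loopback
assert not is_public_ipv4("10.1.2.3")     # RFC 1918 private
assert not is_public_ipv4("169.254.1.1")  # link-local (cloud metadata ranges)
assert not is_public_ipv4("100.64.0.1")   # carrier-grade NAT
assert is_public_ipv4("93.184.216.34")    # an ordinary public address
```

This is what makes the sessions built on `ValidatingHTTPAdapter` resistant to SSRF: every resolved address is vetted before a socket is connected.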
202	cps/cw_advocate/api.py	Normal file
@@ -0,0 +1,202 @@
#
# Copyright 2015 Jordan Milne
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Source: https://github.com/JordanMilne/Advocate

"""
advocate.api
~~~~~~~~~~~~

This module implements the Requests API, largely a copy/paste from `requests`
itself.

:copyright: (c) 2015 by Jordan Milne.
:license: Apache2, see LICENSE for more details.

"""
from collections import OrderedDict
import hashlib
import pickle

from requests import Session as RequestsSession

# import cw_advocate
from .adapters import ValidatingHTTPAdapter
from .exceptions import MountDisabledException


class Session(RequestsSession):
    """Convenience wrapper around `requests.Session` set up for `advocate`ing"""

    __attrs__ = RequestsSession.__attrs__ + ["validator"]
    DEFAULT_VALIDATOR = None
    """
    User-replaceable default validator to use for all Advocate sessions,
    includes sessions created by advocate.get()
    """

    def __init__(self, *args, **kwargs):
        self.validator = kwargs.pop("validator", None) or self.DEFAULT_VALIDATOR
        adapter_kwargs = kwargs.pop("_adapter_kwargs", {})

        # `Session.__init__()` calls `mount()` internally, so we need to allow
        # it temporarily
        self.__mount_allowed = True
        RequestsSession.__init__(self, *args, **kwargs)

        # Drop any existing adapters
        self.adapters = OrderedDict()

        self.mount("http://", ValidatingHTTPAdapter(validator=self.validator, **adapter_kwargs))
        self.mount("https://", ValidatingHTTPAdapter(validator=self.validator, **adapter_kwargs))
        self.__mount_allowed = False

    def mount(self, *args, **kwargs):
        """Wrapper around `mount()` to prevent a protection bypass"""
        if self.__mount_allowed:
            super().mount(*args, **kwargs)
        else:
            raise MountDisabledException(
                "mount() is disabled to prevent protection bypasses"
            )


def session(*args, **kwargs):
    return Session(*args, **kwargs)


def request(method, url, **kwargs):
    """Constructs and sends a :class:`Request <Request>`.

    :param method: method for the new :class:`Request` object.
    :param url: URL for the new :class:`Request` object.
    :param params: (optional) Dictionary or bytes to be sent in the query string for the :class:`Request`.
    :param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
    :param json: (optional) json data to send in the body of the :class:`Request`.
    :param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`.
    :param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`.
    :param files: (optional) Dictionary of ``'name': file-like-objects`` (or ``{'name': ('filename', fileobj)}``) for multipart encoding upload.
    :param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
    :param timeout: (optional) How long to wait for the server to send data
        before giving up, as a float, or a (`connect timeout, read timeout
        <user/advanced.html#timeouts>`_) tuple.
    :type timeout: float or tuple
    :param allow_redirects: (optional) Boolean. Set to True if POST/PUT/DELETE redirect following is allowed.
    :type allow_redirects: bool
    :param proxies: (optional) Dictionary mapping protocol to the URL of the proxy.
    :param verify: (optional) if ``True``, the SSL cert will be verified. A CA_BUNDLE path can also be provided.
    :param stream: (optional) if ``False``, the response content will be immediately downloaded.
    :param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response
    """

    validator = kwargs.pop("validator", None)
    with Session(validator=validator) as sess:
        response = sess.request(method=method, url=url, **kwargs)
    return response


def get(url, **kwargs):
    """Sends a GET request.

    :param url: URL for the new :class:`Request` object.
    :param **kwargs: Optional arguments that ``request`` takes.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response
    """

    kwargs.setdefault('allow_redirects', True)
    return request('get', url, **kwargs)


class RequestsAPIWrapper:
    """Provides a `requests.api`-like interface with a specific validator"""

    # Due to how the classes are dynamically constructed pickling may not work
    # correctly unless loaded within the same interpreter instance.
    # Enable at your peril.
    SUPPORT_WRAPPER_PICKLING = False

    def __init__(self, validator):
        # Do this here to avoid circular import issues
        try:
            from .futures import FuturesSession
            have_requests_futures = True
        except ImportError as e:
            have_requests_futures = False

        self.validator = validator
        outer_self = self

        class _WrappedSession(Session):
            """An `advocate.Session` that uses the wrapper's blacklist

            the wrapper is meant to be a transparent replacement for `requests`,
            so people should be able to subclass `wrapper.Session` and still
            get the desired validation behaviour
            """
            DEFAULT_VALIDATOR = outer_self.validator

        self._make_wrapper_cls_global(_WrappedSession)

        if have_requests_futures:

            class _WrappedFuturesSession(FuturesSession):
                """Like _WrappedSession, but for `FuturesSession`s"""
                DEFAULT_VALIDATOR = outer_self.validator
            self._make_wrapper_cls_global(_WrappedFuturesSession)

            self.FuturesSession = _WrappedFuturesSession

        self.request = self._default_arg_wrapper(request)
        self.get = self._default_arg_wrapper(get)
        self.Session = _WrappedSession

    def __getattr__(self, item):
        # This class is meant to mimic the requests base module, so if we don't
        # have this attribute, it might be on the base module (like the Request
        # class, etc.)
        try:
            return object.__getattribute__(self, item)
        except AttributeError:
            from . import cw_advocate
            return getattr(cw_advocate, item)

    def _default_arg_wrapper(self, fun):
        def wrapped_func(*args, **kwargs):
            kwargs.setdefault("validator", self.validator)
            return fun(*args, **kwargs)
        return wrapped_func

    def _make_wrapper_cls_global(self, cls):
        if not self.SUPPORT_WRAPPER_PICKLING:
            return
        # Gnarly, but necessary to give pickle a consistent module-level
        # reference for each wrapper.
        wrapper_hash = hashlib.sha256(pickle.dumps(self)).hexdigest()
        cls.__name__ = "_".join((cls.__name__, wrapper_hash))
        cls.__qualname__ = ".".join((__name__, cls.__name__))
        if not globals().get(cls.__name__):
            globals()[cls.__name__] = cls


__all__ = (
    "get",
    "request",
    "session",
    "Session",
    "RequestsAPIWrapper",
)
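`RequestsAPIWrapper._default_arg_wrapper` above is a small closure that pins a default `validator` keyword onto the module-level `request`/`get` functions. The pattern in isolation, with a string standing in for a real validator object (names below are illustrative, not Advocate's):

```python
def _default_arg_wrapper(default_validator):
    """Return a decorator that injects validator=<default> when the caller
    omits it, mirroring RequestsAPIWrapper._default_arg_wrapper above."""
    def deco(fun):
        def wrapped_func(*args, **kwargs):
            kwargs.setdefault("validator", default_validator)
            return fun(*args, **kwargs)
        return wrapped_func
    return deco

@_default_arg_wrapper("strict-validator")  # placeholder validator object
def request(method, url, validator=None):
    return (method, url, validator)

print(request("get", "http://example.com"))
# -> ('get', 'http://example.com', 'strict-validator')
print(request("get", "http://example.com", validator="custom"))
# -> ('get', 'http://example.com', 'custom')
```

`kwargs.setdefault` is what makes an explicitly passed validator win over the wrapper's default.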
201	cps/cw_advocate/connection.py	Normal file
@@ -0,0 +1,201 @@
#
# Copyright 2015 Jordan Milne
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Source: https://github.com/JordanMilne/Advocate

import ipaddress
import socket
from socket import timeout as SocketTimeout

from urllib3.connection import HTTPSConnection, HTTPConnection
from urllib3.exceptions import ConnectTimeoutError
from urllib3.util.connection import _set_socket_options
from urllib3.util.connection import create_connection as old_create_connection

from . import addrvalidator
from .exceptions import UnacceptableAddressException


def advocate_getaddrinfo(host, port, get_canonname=False):
    addrinfo = socket.getaddrinfo(
        host,
        port,
        0,
        socket.SOCK_STREAM,
        0,
        # We need what the DNS client sees the hostname as, correctly handles
        # IDNs and tricky things like `private.foocorp.org\x00.google.com`.
        # All IDNs will be converted to punycode.
        socket.AI_CANONNAME if get_canonname else 0,
    )
    return fix_addrinfo(addrinfo)


def fix_addrinfo(records):
    """
    Propagate the canonname across records and parse IPs

    I'm not sure if this is just the behaviour of `getaddrinfo` on Linux, but
    it seems like only the first record in the set has the canonname field
    populated.
    """
    def fix_record(record, canonname):
        sa = record[4]
        sa = (ipaddress.ip_address(sa[0]),) + sa[1:]
        return record[0], record[1], record[2], canonname, sa

    canonname = None
    if records:
        # Apparently the canonical name is only included in the first record?
        # Add it to all of them.
        assert(len(records[0]) == 5)
        canonname = records[0][3]
    return tuple(fix_record(x, canonname) for x in records)
|
||||
|
||||
# Lifted from requests' urllib3, which in turn lifted it from `socket.py`. Oy!
|
||||
def validating_create_connection(address,
|
||||
timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
|
||||
source_address=None, socket_options=None,
|
||||
validator=None):
|
||||
"""Connect to *address* and return the socket object.
|
||||
|
||||
Convenience function. Connect to *address* (a 2-tuple ``(host,
|
||||
port)``) and return the socket object. Passing the optional
|
||||
*timeout* parameter will set the timeout on the socket instance
|
||||
before attempting to connect. If no *timeout* is supplied, the
|
||||
global default timeout setting returned by :func:`getdefaulttimeout`
|
||||
is used. If *source_address* is set it must be a tuple of (host, port)
|
||||
for the socket to bind as a source address before making the connection.
|
||||
An host of '' or port 0 tells the OS to use the default.
|
||||
"""
|
||||
|
||||
host, port = address
|
||||
# We can skip asking for the canon name if we're not doing hostname-based
|
||||
# blacklisting.
|
||||
need_canonname = False
|
||||
if validator.hostname_blacklist:
|
||||
need_canonname = True
|
||||
# We check both the non-canonical and canonical hostnames so we can
|
||||
# catch both of these:
|
||||
# CNAME from nonblacklisted.com -> blacklisted.com
|
||||
# CNAME from blacklisted.com -> nonblacklisted.com
|
||||
if not validator.is_hostname_allowed(host):
|
||||
raise UnacceptableAddressException(host)
|
||||
|
||||
err = None
|
||||
addrinfo = advocate_getaddrinfo(host, port, get_canonname=need_canonname)
|
||||
if addrinfo:
|
||||
if validator.autodetect_local_addresses:
|
||||
local_addresses = addrvalidator.determine_local_addresses()
|
||||
else:
|
||||
local_addresses = []
|
||||
for res in addrinfo:
|
||||
# Are we allowed to connect with this result?
|
||||
if not validator.is_addrinfo_allowed(
|
||||
res,
|
||||
_local_addresses=local_addresses,
|
||||
):
|
||||
continue
|
||||
af, socktype, proto, canonname, sa = res
|
||||
# Unparse the validated IP
|
||||
sa = (sa[0].exploded,) + sa[1:]
|
||||
sock = None
|
||||
try:
|
||||
sock = socket.socket(af, socktype, proto)
|
||||
|
||||
# If provided, set socket level options before connecting.
|
||||
# This is the only addition urllib3 makes to this function.
|
||||
_set_socket_options(sock, socket_options)
|
||||
|
||||
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
|
||||
sock.settimeout(timeout)
|
||||
if source_address:
|
||||
sock.bind(source_address)
|
||||
sock.connect(sa)
|
||||
return sock
|
||||
|
||||
except socket.error as _:
|
||||
err = _
|
||||
if sock is not None:
|
||||
sock.close()
|
||||
sock = None
|
||||
|
||||
if err is None:
|
||||
# If we got here, none of the results were acceptable
|
||||
err = UnacceptableAddressException(address)
|
||||
if err is not None:
|
||||
raise err
|
||||
else:
|
||||
raise socket.error("getaddrinfo returns an empty list")
|
||||
|
||||
|
||||
# TODO: Is there a better way to add this to multiple classes with different
|
||||
# base classes? I tried a mixin, but it used the base method instead.
|
||||
def _validating_new_conn(self):
|
||||
""" Establish a socket connection and set nodelay settings on it.
|
||||
|
||||
:return: New socket connection.
|
||||
"""
|
||||
extra_kw = {}
|
||||
if self.source_address:
|
||||
extra_kw['source_address'] = self.source_address
|
||||
|
||||
if self.socket_options:
|
||||
extra_kw['socket_options'] = self.socket_options
|
||||
|
||||
try:
|
||||
# Hack around HTTPretty's patched sockets
|
||||
# TODO: some better method of hacking around it that checks if we
|
||||
# _would have_ connected to a private addr?
|
||||
conn_func = validating_create_connection
|
||||
if socket.getaddrinfo.__module__.startswith("httpretty"):
|
||||
conn_func = old_create_connection
|
||||
else:
|
||||
extra_kw["validator"] = self._validator
|
||||
|
||||
conn = conn_func(
|
||||
(self.host, self.port),
|
||||
self.timeout,
|
||||
**extra_kw
|
||||
)
|
||||
|
||||
except SocketTimeout:
|
||||
raise ConnectTimeoutError(
|
||||
self, "Connection to %s timed out. (connect timeout=%s)" %
|
||||
(self.host, self.timeout))
|
||||
|
||||
return conn
|
||||
|
||||
|
||||
# Don't silently break if the private API changes across urllib3 versions
|
||||
assert(hasattr(HTTPConnection, '_new_conn'))
|
||||
assert(hasattr(HTTPSConnection, '_new_conn'))
|
||||
|
||||
|
||||
class ValidatingHTTPConnection(HTTPConnection):
|
||||
_new_conn = _validating_new_conn
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
self._validator = kwargs.pop("validator")
|
||||
HTTPConnection.__init__(self, *args, **kwargs)
|
||||
|
||||
|
||||
class ValidatingHTTPSConnection(HTTPSConnection):
|
||||
_new_conn = _validating_new_conn
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
self._validator = kwargs.pop("validator")
|
||||
HTTPSConnection.__init__(self, *args, **kwargs)
|
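As an aside on `fix_addrinfo` above: the canonname propagation can be exercised in isolation. The sketch below copies the function's logic from the diff and feeds it a hand-built addrinfo tuple (the addresses are documentation examples, not real DNS lookups):

```python
import ipaddress
import socket

def fix_addrinfo(records):
    # Copy the canonname from the first record onto every record, and
    # parse each sockaddr's IP string into an ipaddress object.
    def fix_record(record, canonname):
        sa = record[4]
        sa = (ipaddress.ip_address(sa[0]),) + sa[1:]
        return record[0], record[1], record[2], canonname, sa

    canonname = None
    if records:
        canonname = records[0][3]
    return tuple(fix_record(x, canonname) for x in records)

# Hand-built records: only the first carries the canonical name,
# mimicking what getaddrinfo returns on Linux.
records = [
    (socket.AF_INET, socket.SOCK_STREAM, 6, "example.com", ("93.184.216.34", 80)),
    (socket.AF_INET, socket.SOCK_STREAM, 6, "", ("93.184.216.35", 80)),
]
fixed = fix_addrinfo(records)
```

After the call, every record carries the canonical name and a parsed IP, which is what lets the validator do hostname- and address-based checks uniformly.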
39
cps/cw_advocate/connectionpool.py
Normal file
@@ -0,0 +1,39 @@
#
# Copyright 2015 Jordan Milne
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Source: https://github.com/JordanMilne/Advocate

from urllib3 import HTTPConnectionPool, HTTPSConnectionPool

from .connection import (
    ValidatingHTTPConnection,
    ValidatingHTTPSConnection,
)

# Don't silently break if the private API changes across urllib3 versions
assert(hasattr(HTTPConnectionPool, 'ConnectionCls'))
assert(hasattr(HTTPSConnectionPool, 'ConnectionCls'))
assert(hasattr(HTTPConnectionPool, 'scheme'))
assert(hasattr(HTTPSConnectionPool, 'scheme'))


class ValidatingHTTPConnectionPool(HTTPConnectionPool):
    scheme = 'http'
    ConnectionCls = ValidatingHTTPConnection


class ValidatingHTTPSConnectionPool(HTTPSConnectionPool):
    scheme = 'https'
    ConnectionCls = ValidatingHTTPSConnection
39
cps/cw_advocate/exceptions.py
Normal file
@@ -0,0 +1,39 @@
#
# Copyright 2015 Jordan Milne
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Source: https://github.com/JordanMilne/Advocate

class AdvocateException(Exception):
    pass


class UnacceptableAddressException(AdvocateException):
    pass


class NameserverException(AdvocateException):
    pass


class MountDisabledException(AdvocateException):
    pass


class ProxyDisabledException(NotImplementedError, AdvocateException):
    pass


class ConfigException(AdvocateException):
    pass
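For readers of the hierarchy above: a catch-all `except AdvocateException` covers every Advocate-specific error, while `ProxyDisabledException` additionally satisfies `NotImplementedError` handlers. A self-contained sketch (classes redefined locally so it runs without the package):

```python
class AdvocateException(Exception):
    pass

class UnacceptableAddressException(AdvocateException):
    pass

class ProxyDisabledException(NotImplementedError, AdvocateException):
    pass

def classify(exc):
    # A handler written against the base classes catches the specific errors.
    try:
        raise exc
    except NotImplementedError:
        return "not-implemented"
    except AdvocateException:
        return "advocate"

result_addr = classify(UnacceptableAddressException("10.0.0.1"))
result_proxy = classify(ProxyDisabledException())
```

Because `ProxyDisabledException` inherits `NotImplementedError` first, existing code that treats proxies as unimplemented keeps working while Advocate-aware code can still catch the common base.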
61
cps/cw_advocate/poolmanager.py
Normal file
@@ -0,0 +1,61 @@
#
# Copyright 2015 Jordan Milne
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Source: https://github.com/JordanMilne/Advocate

import collections
import functools

from urllib3 import PoolManager
from urllib3.poolmanager import _default_key_normalizer, PoolKey

from .connectionpool import (
    ValidatingHTTPSConnectionPool,
    ValidatingHTTPConnectionPool,
)

pool_classes_by_scheme = {
    "http": ValidatingHTTPConnectionPool,
    "https": ValidatingHTTPSConnectionPool,
}

AdvocatePoolKey = collections.namedtuple('AdvocatePoolKey',
                                         PoolKey._fields + ('key_validator',))


def key_normalizer(key_class, request_context):
    request_context = request_context.copy()
    # TODO: add ability to serialize validator rules to dict,
    # allowing pool to be shared between sessions with the same
    # rules.
    request_context["validator"] = id(request_context["validator"])
    return _default_key_normalizer(key_class, request_context)


key_fn_by_scheme = {
    'http': functools.partial(key_normalizer, AdvocatePoolKey),
    'https': functools.partial(key_normalizer, AdvocatePoolKey),
}


class ValidatingPoolManager(PoolManager):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

        # Make sure the API hasn't changed
        assert (hasattr(self, 'pool_classes_by_scheme'))

        self.pool_classes_by_scheme = pool_classes_by_scheme
        self.key_fn_by_scheme = key_fn_by_scheme.copy()
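The pool-key trick above (a `PoolKey` extended with a validator field, normalized to `id(validator)` so pools are shared only within one validator object) can be sketched without urllib3. The `BasePoolKey`/`DemoPoolKey` names below are illustrative, not part of the real module:

```python
import collections

# Stand-in for urllib3's PoolKey, plus the extra validator field.
BasePoolKey = collections.namedtuple("BasePoolKey",
                                     ("key_scheme", "key_host", "key_port"))
DemoPoolKey = collections.namedtuple("DemoPoolKey",
                                     BasePoolKey._fields + ("key_validator",))

def normalize(key_class, ctx):
    ctx = ctx.copy()
    # Validator objects aren't hashable/serializable, so key on identity.
    ctx["key_validator"] = id(ctx["key_validator"])
    return key_class(**ctx)

validator = object()
k1 = normalize(DemoPoolKey, {"key_scheme": "https", "key_host": "example.com",
                             "key_port": 443, "key_validator": validator})
k2 = normalize(DemoPoolKey, {"key_scheme": "https", "key_host": "example.com",
                             "key_port": 443, "key_validator": validator})
```

Two requests that share the same validator object therefore map to the same (hashable) pool key, which is exactly what `key_normalizer` achieves; the TODO in the file notes that serializing validator rules would allow sharing across sessions too.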
@@ -34,7 +34,7 @@ def get_user_locale_language(user_language):


def get_available_locale():
    return [Locale('en')] + babel.list_translations()
    return sorted(babel.list_translations(), key=lambda x: x.display_name.lower())


def get_available_translations():
@@ -1,4 +1,5 @@
from datetime import datetime
from datetime import timezone
from datetime import timedelta
import hashlib

@@ -496,7 +497,7 @@ class LoginManager:
            duration = timedelta(seconds=duration)

        try:
            expires = datetime.utcnow() + duration
            expires = datetime.now(timezone.utc) + duration
        except TypeError as e:
            raise Exception(
                "REMEMBER_COOKIE_DURATION must be a datetime.timedelta,"
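The `datetime.utcnow()` to `datetime.now(timezone.utc)` substitution in this hunk swaps a naive timestamp for a timezone-aware one (`utcnow()` is deprecated as of Python 3.12). A minimal standalone comparison:

```python
from datetime import datetime, timezone

naive = datetime.utcnow()           # no tzinfo attached; deprecated in 3.12+
aware = datetime.now(timezone.utc)  # carries UTC tzinfo explicitly
```

Naive and aware values cannot be compared or subtracted from each other, which is why the replacement has to be made consistently wherever the timestamps meet.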
184
cps/db.py
@@ -20,10 +20,11 @@
import os
import re
import json
from datetime import datetime
from datetime import datetime, timezone
from urllib.parse import quote
import unidecode
from weakref import WeakSet
# from weakref import WeakSet
from uuid import uuid4

from sqlite3 import OperationalError as sqliteOperationalError
from sqlalchemy import create_engine
@@ -44,11 +45,11 @@ from sqlalchemy.ext.associationproxy import association_proxy
from .cw_login import current_user
from flask_babel import gettext as _
from flask_babel import get_locale
from flask import flash
from flask import flash, g, Flask

from . import logger, ub, isoLanguages
from .pagination import Pagination

from .string_helper import strip_whitespaces

log = logger.create()

@@ -101,6 +102,16 @@ class Identifiers(Base):
    type = Column(String(collation='NOCASE'), nullable=False, default="isbn")
    val = Column(String(collation='NOCASE'), nullable=False)
    book = Column(Integer, ForeignKey('books.id'), nullable=False)
    amazon = {
        "jp": "co.jp",
        "uk": "co.uk",
        "us": "com",
        "au": "com.au",
        "be": "com.be",
        "br": "com.br",
        "tr": "com.tr",
        "mx": "com.mx",
    }

    def __init__(self, val, id_type, book):
        super().__init__()
@@ -113,7 +124,11 @@ class Identifiers(Base):
        if format_type == 'amazon':
            return "Amazon"
        elif format_type.startswith("amazon_"):
            return "Amazon.{0}".format(format_type[7:])
            label_amazon = "Amazon.{0}"
            country_code = format_type[7:].lower()
            if country_code not in self.amazon:
                return label_amazon.format(country_code)
            return label_amazon.format(self.amazon[country_code])
        elif format_type == "isbn":
            return "ISBN"
        elif format_type == "doi":
@@ -136,6 +151,8 @@ class Identifiers(Base):
            return "ISSN"
        elif format_type == "isfdb":
            return "ISFDB"
        elif format_type == "storygraph":
            return "StoryGraph"
        if format_type == "lubimyczytac":
            return "Lubimyczytac"
        if format_type == "databazeknih":
@@ -148,7 +165,11 @@ class Identifiers(Base):
        if format_type == "amazon" or format_type == "asin":
            return "https://amazon.com/dp/{0}".format(self.val)
        elif format_type.startswith('amazon_'):
            return "https://amazon.{0}/dp/{1}".format(format_type[7:], self.val)
            link_amazon = "https://amazon.{0}/dp/{1}"
            country_code = format_type[7:].lower()
            if country_code not in self.amazon:
                return link_amazon.format(country_code, self.val)
            return link_amazon.format(self.amazon[country_code], self.val)
        elif format_type == "isbn":
            return "https://www.worldcat.org/isbn/{0}".format(self.val)
        elif format_type == "doi":
@@ -172,9 +193,11 @@ class Identifiers(Base):
        elif format_type == "issn":
            return "https://portal.issn.org/resource/ISSN/{0}".format(self.val)
        elif format_type == "isfdb":
            return "http://www.isfdb.org/cgi-bin/pl.cgi?{0}".format(self.val)
            return "https://www.isfdb.org/cgi-bin/pl.cgi?{0}".format(self.val)
        elif format_type == "databazeknih":
            return "https://www.databazeknih.cz/knihy/{0}".format(self.val)
        elif format_type == "storygraph":
            return "https://app.thestorygraph.com/books/{0}".format(self.val)
        elif self.val.lower().startswith("javascript:"):
            return quote(self.val)
        elif self.val.lower().startswith("data:"):
@@ -378,10 +401,10 @@ class Books(Base):
    title = Column(String(collation='NOCASE'), nullable=False, default='Unknown')
    sort = Column(String(collation='NOCASE'))
    author_sort = Column(String(collation='NOCASE'))
    timestamp = Column(TIMESTAMP, default=datetime.utcnow)
    timestamp = Column(TIMESTAMP, default=lambda: datetime.now(timezone.utc))
    pubdate = Column(TIMESTAMP, default=DEFAULT_PUBDATE)
    series_index = Column(String, nullable=False, default="1.0")
    last_modified = Column(TIMESTAMP, default=datetime.utcnow)
    last_modified = Column(TIMESTAMP, default=lambda: datetime.now(timezone.utc))
    path = Column(String, default="", nullable=False)
    has_cover = Column(Integer, default=0)
    uuid = Column(String)
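The Amazon identifier handling in the hunks above maps two-letter store codes onto top-level domains, with unknown codes falling through unchanged. A standalone sketch of the label branch (mirroring, not importing, the class logic):

```python
# Mirror of the mapping added to Identifiers above.
amazon = {
    "jp": "co.jp",
    "uk": "co.uk",
    "us": "com",
    "au": "com.au",
    "be": "com.be",
    "br": "com.br",
    "tr": "com.tr",
    "mx": "com.mx",
}

def amazon_label(format_type):
    # "amazon_uk" -> "Amazon.co.uk"; codes not in the map pass through as-is.
    label_amazon = "Amazon.{0}"
    country_code = format_type[7:].lower()
    if country_code not in amazon:
        return label_amazon.format(country_code)
    return label_amazon.format(amazon[country_code])

label_uk = amazon_label("amazon_uk")
label_de = amazon_label("amazon_de")  # "de" is not in the map, so it passes through
```

The same lookup drives the `https://amazon.{tld}/dp/{asin}` link branch, so both the display label and the URL resolve to the correct regional storefront.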
@@ -509,34 +532,25 @@ class AlchemyEncoder(json.JSONEncoder):


class CalibreDB:
    _init = False
    engine = None
    config = None
    session_factory = None
    # This is a WeakSet so that references here don't keep other CalibreDB
    # instances alive once they reach the end of their respective scopes
    instances = WeakSet()
    config_calibre_dir = None
    app_db_path = None

    def __init__(self, expire_on_commit=True, init=False):
    def __init__(self, _app: Flask=None):  # , expire_on_commit=True, init=False):
        """ Initialize a new CalibreDB session
        """
        self.session = None
        if init:
            self.init_db(expire_on_commit)
        self.Session = None
        # if init:
        #     self.init_db(expire_on_commit)
        if _app is not None and not _app._got_first_request:
            self.init_app(_app)

    def init_db(self, expire_on_commit=True):
        if self._init:
            self.init_session(expire_on_commit)

        self.instances.add(self)

    def init_session(self, expire_on_commit=True):
        self.session = self.session_factory()
        self.session.expire_on_commit = expire_on_commit
        self.update_title_sort(self.config)
    def init_app(self, _app):
        _app.teardown_appcontext(self.teardown)

    @classmethod
    def setup_db_cc_classes(cls, cc):
        global cc_classes
        cc_ids = []
        books_custom_column_links = {}
        for row in cc:
@@ -604,8 +618,6 @@ class CalibreDB:
                                     secondary=books_custom_column_links[cc_id[0]],
                                     backref='books'))

        return cc_classes

    @classmethod
    def check_valid_db(cls, config_calibre_dir, app_db_path, config_calibre_uuid):
        if not config_calibre_dir:
@@ -625,7 +637,6 @@ class CalibreDB:
            local_session = scoped_session(sessionmaker())
            local_session.configure(bind=connection)
            database_uuid = local_session().query(Library_Id).one_or_none()
            # local_session.dispose()

        check_engine.connect()
        db_change = config_calibre_uuid != database_uuid.uuid
@@ -633,13 +644,30 @@ class CalibreDB:
            return False, False
        return True, db_change

    def teardown(self, exception):
        ctx = g.get("lib_sql")
        if ctx:
            ctx.close()

    @property
    def session(self):
        # connect or get active connection
        if not g.get("lib_sql"):
            g.lib_sql = self.connect()
        return g.lib_sql

    @classmethod
    def update_config(cls, config):
    def update_config(cls, config, config_calibre_dir, app_db_path):
        cls.config = config
        cls.config_calibre_dir = config_calibre_dir
        cls.app_db_path = app_db_path

    def connect(self):
        return self.setup_db(self.config_calibre_dir, self.app_db_path)

    @classmethod
    def setup_db(cls, config_calibre_dir, app_db_path):
        cls.dispose()

        if not config_calibre_dir:
            cls.config.invalidate()
@@ -651,16 +679,17 @@ class CalibreDB:
            return None

        try:
            cls.engine = create_engine('sqlite://',
            engine = create_engine('sqlite://',
                                   echo=False,
                                   isolation_level="SERIALIZABLE",
                                   connect_args={'check_same_thread': False},
                                   poolclass=StaticPool)
            with cls.engine.begin() as connection:
            with engine.begin() as connection:
                connection.execute(text('PRAGMA cache_size = 10000;'))
                connection.execute(text("attach database '{}' as calibre;".format(dbpath)))
                connection.execute(text("attach database '{}' as app_settings;".format(app_db_path)))

            conn = cls.engine.connect()
            conn = engine.connect()
            # conn.text_factory = lambda b: b.decode(errors = 'ignore') possible fix for #1302
        except Exception as ex:
            cls.config.invalidate(ex)
@@ -676,13 +705,10 @@ class CalibreDB:
            log.error_or_exception(e)
            return None

        cls.session_factory = scoped_session(sessionmaker(autocommit=False,
                                                          autoflush=True,
                                                          bind=cls.engine, future=True))
        for inst in cls.instances:
            inst.init_session()
        return scoped_session(sessionmaker(autocommit=False,
                                           autoflush=False,
                                           bind=engine, future=True))

        cls._init = True

    def get_book(self, book_id):
        return self.session.query(Books).filter(Books.id == book_id).first()
@@ -875,10 +901,12 @@ class CalibreDB:
        authors_ordered = list()
        # error = False
        for auth in sort_authors:
            results = self.session.query(Authors).filter(Authors.sort == auth.lstrip().strip()).all()
            auth = strip_whitespaces(auth)
            results = self.session.query(Authors).filter(Authors.sort == auth).all()
            # ToDo: How to handle not found author name
            if not len(results):
                log.error("Author {} not found to display name in right order".format(auth.strip()))
                book_id = entry.id if isinstance(entry, Books) else entry[0].id
                log.error("Author '{}' of book {} not found to display name in right order".format(auth, book_id))
                # error = True
                break
            for r in results:
@@ -900,7 +928,8 @@ class CalibreDB:

    def get_typeahead(self, database, query, replace=('', ''), tag_filter=true()):
        query = query or ''
        self.session.connection().connection.connection.create_function("lower", 1, lcase)
        self.create_functions()
        # self.session.connection().connection.connection.create_function("lower", 1, lcase)
        entries = self.session.query(database).filter(tag_filter). \
            filter(func.lower(database.name).ilike("%" + query + "%")).all()
        # json_dumps = json.dumps([dict(name=escape(r.name.replace(*replace))) for r in entries])
@@ -908,7 +937,8 @@ class CalibreDB:
        return json_dumps

    def check_exists_book(self, authr, title):
        self.session.connection().connection.connection.create_function("lower", 1, lcase)
        self.create_functions()
        # self.session.connection().connection.connection.create_function("lower", 1, lcase)
        q = list()
        author_terms = re.split(r'\s*&\s*', authr)
        for author_term in author_terms:
@@ -918,8 +948,9 @@ class CalibreDB:
            .filter(and_(Books.authors.any(and_(*q)), func.lower(Books.title).ilike("%" + title + "%"))).first()

    def search_query(self, term, config, *join):
        term.strip().lower()
        self.session.connection().connection.connection.create_function("lower", 1, lcase)
        strip_whitespaces(term).lower()
        self.create_functions()
        # self.session.connection().connection.connection.create_function("lower", 1, lcase)
        q = list()
        author_terms = re.split("[, ]+", term)
        for author_term in author_terms:
@@ -1017,7 +1048,7 @@ class CalibreDB:
            lang.name = isoLanguages.get_language_name(get_locale(), lang.lang_code)
        return sorted(languages, key=lambda x: x.name, reverse=reverse_order)

    def update_title_sort(self, config, conn=None):
    def create_functions(self, config=None):
        # user defined sort function for calibre databases (Series, etc.)
        def _title_sort(title):
            # calibre sort stuff
@@ -1026,56 +1057,27 @@ class CalibreDB:
            if match:
                prep = match.group(1)
                title = title[len(prep):] + ', ' + prep
            return title.strip()
            return strip_whitespaces(title)

        try:
            # sqlalchemy <1.4.24
            conn = conn or self.session.connection().connection.driver_connection
            # sqlalchemy <1.4.24 and sqlalchemy 2.0
            conn = self.session.connection().connection.driver_connection
        except AttributeError:
            # sqlalchemy >1.4.24 and sqlalchemy 2.0
            conn = conn or self.session.connection().connection.connection
            # sqlalchemy >1.4.24
            conn = self.session.connection().connection.connection
        try:
            conn.create_function("title_sort", 1, _title_sort)
            if config:
                conn.create_function("title_sort", 1, _title_sort)
            conn.create_function('uuid4', 0, lambda: str(uuid4()))
            conn.create_function("lower", 1, lcase)
        except sqliteOperationalError:
            pass

    @classmethod
    def dispose(cls):
        # global session

        for inst in cls.instances:
            old_session = inst.session
            inst.session = None
            if old_session:
                try:
                    old_session.close()
                except Exception:
                    pass
                if old_session.bind:
                    try:
                        old_session.bind.dispose()
                    except Exception:
                        pass

        for attr in list(Books.__dict__.keys()):
            if attr.startswith("custom_column_"):
                setattr(Books, attr, None)

        for db_class in cc_classes.values():
            Base.metadata.remove(db_class.__table__)
        cc_classes.clear()

        for table in reversed(Base.metadata.sorted_tables):
            name = table.key
            if name.startswith("custom_column_") or name.startswith("books_custom_column_"):
                if table is not None:
                    Base.metadata.remove(table)

    def reconnect_db(self, config, app_db_path):
        self.dispose()
        self.engine.dispose()
        # self.dispose()
        # self.engine.dispose()
        self.setup_db(config.config_calibre_dir, app_db_path)
        self.update_config(config)
        self.update_config(config, config.config_calibre_dir, app_db_path)


def lcase(s):
@@ -23,10 +23,10 @@ import zipfile
import json
from io import BytesIO
from flask_babel.speaklater import LazyString

from importlib.metadata import metadata
import os

from flask import send_file, __version__
from flask import send_file

from . import logger, config
from .about import collect_stats
@@ -49,7 +49,8 @@ def assemble_logfiles(file_name):
            with open(f, 'rb') as fd:
                shutil.copyfileobj(fd, wfd)
    wfd.seek(0)
    if int(__version__.split('.')[0]) < 2:
    version = metadata("flask")["Version"]
    if int(version.split('.')[0]) < 2:
        return send_file(wfd,
                         as_attachment=True,
                         attachment_filename=os.path.basename(file_name))
@@ -72,7 +73,8 @@ def send_debug():
    for fp in file_list:
        zf.write(fp, os.path.basename(fp))
    memory_zip.seek(0)
    if int(__version__.split('.')[0]) < 2:
    version = metadata("flask")["Version"]
    if int(version.split('.')[0]) < 2:
        return send_file(memory_zip,
                         as_attachment=True,
                         attachment_filename="Calibre-Web-debug-pack.zip")
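The switch from `flask.__version__` (removed in newer Flask releases) to `importlib.metadata` generalizes to any installed distribution; `parse_major`/`major_version` below are illustrative helper names, not Calibre-Web functions:

```python
from importlib.metadata import metadata, PackageNotFoundError

def parse_major(version):
    # "2.3.1" -> 2; mirrors the int(version.split('.')[0]) check above.
    return int(version.split(".")[0])

def major_version(dist_name):
    """Major version of an installed distribution, or None if absent."""
    try:
        return parse_major(metadata(dist_name)["Version"])
    except PackageNotFoundError:
        return None

major_flask = parse_major("2.3.1")
```

For example, `major_version("flask") < 2` selects the old `attachment_filename` keyword of `send_file`, while 2.x+ uses `download_name`.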
@@ -39,8 +39,24 @@ def load_dependencies(optional=False):
    with open(req_path, 'r') as f:
        for line in f:
            if not line.startswith('#') and not line == '\n' and not line.startswith('git'):
                res = re.match(r'(.*?)([<=>\s]+)([\d\.]+),?\s?([<=>\s]+)?([\d\.]+)?', line.strip())
                res = re.match(r'(.*?)([<=>\s]+)([\d\.]+),?\s?([<=>\s]+)?([\d\.]+)?(?:\s?;\s?'
                               r'(?:(python_version)\s?([<=>]+)\s?\'([\d\.]+)\'|'
                               r'(sys_platform)\s?([\!=]+)\s?\'([\w]+)\'))?', line.strip())
                try:
                    if res.group(7) and res.group(8):
                        val = res.group(8).split(".")
                        if not eval(str(sys.version_info[0]) + "." + "{:02d}".format(sys.version_info[1]) +
                                    res.group(7) + val[0] + "." + "{:02d}".format(int(val[1]))):
                            continue
                    elif res.group(10) and res.group(11):
                        # only installed if platform is equal, don't check if platform is not equal
                        if res.group(10) == "==":
                            if sys.platform != res.group(11):
                                continue
                        # installed if platform is not equal, don't check if platform is equal
                        elif res.group(10) == "!=":
                            if sys.platform == res.group(11):
                                continue
                if getattr(sys, 'frozen', False):
                    dep_version = exe_deps[res.group(1).lower().replace('_', '-')]
                else:
@@ -63,7 +79,7 @@ def dependency_check(optional=False):
    deps = load_dependencies(optional)
    for dep in deps:
        try:
            dep_version_int = [int(x) if x.isnumeric() else 0 for x in dep[0].split('.')]
            dep_version_int = [int(x) if x.isnumeric() else 0 for x in dep[0].split('.')[:3]]
            low_check = [int(x) for x in dep[3].split('.')]
            high_check = [int(x) for x in dep[5].split('.')]
        except AttributeError:
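The extended requirements regex above also captures PEP 508-style environment markers: groups 7/8 hold the `python_version` operator and bound, groups 10/11 the `sys_platform` operator and value. A quick check with the same pattern (the sample requirement lines are made up):

```python
import re

pattern = (r'(.*?)([<=>\s]+)([\d\.]+),?\s?([<=>\s]+)?([\d\.]+)?(?:\s?;\s?'
           r'(?:(python_version)\s?([<=>]+)\s?\'([\d\.]+)\'|'
           r'(sys_platform)\s?([\!=]+)\s?\'([\w]+)\'))?')

# A version range plus a python_version marker.
res = re.match(pattern, "idna>=2.0,<4.0; python_version>='3.4'")
# A platform-only marker.
res2 = re.match(pattern, "pywin32>=220; sys_platform=='win32'")
```

Lines whose marker excludes the running interpreter or platform are skipped entirely by the `continue` branches in `load_dependencies`.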
700
cps/editbooks.py
File diff suppressed because it is too large
@@ -35,9 +35,7 @@ def do_calibre_export(book_id, book_format):
        my_env = os.environ.copy()
        if config.config_calibre_split:
            my_env['CALIBRE_OVERRIDE_DATABASE_PATH'] = os.path.join(config.config_calibre_dir, "metadata.db")
            library_path = config.config_calibre_split_dir
        else:
            library_path = config.config_calibre_dir
        library_path = config.get_book_path()
        opf_command = [calibredb_binarypath, 'export', '--dont-write-opf', '--with-library', library_path,
                       '--to-dir', tmp_dir, '--formats', book_format, "--template", "{}".format(temp_file_name),
                       str(book_id)]
12
cps/epub.py
@@ -25,6 +25,7 @@ from . import config, logger
|
||||
from .helper import split_authors
|
||||
from .epub_helper import get_content_opf, default_ns
|
||||
from .constants import BookMeta
|
||||
from .string_helper import strip_whitespaces
|
||||
|
||||
log = logger.create()
|
||||
|
||||
@@ -55,7 +56,7 @@ def get_epub_layout(book, book_data):
|
||||
p = tree.xpath('/pkg:package/pkg:metadata', namespaces=default_ns)[0]
|
||||
|
||||
layout = p.xpath('pkg:meta[@property="rendition:layout"]/text()', namespaces=default_ns)
|
||||
except (etree.XMLSyntaxError, KeyError, IndexError, OSError) as e:
|
||||
except (etree.XMLSyntaxError, KeyError, IndexError, OSError, UnicodeDecodeError) as e:
|
||||
log.error("Could not parse epub metadata of book {} during kobo sync: {}".format(book.id, e))
|
||||
layout = []
|
||||
|
||||
@@ -65,7 +66,7 @@ def get_epub_layout(book, book_data):
|
||||
return layout[0]
|
||||
|
||||
|
||||
def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
|
||||
def get_epub_info(tmp_file_path, original_file_name, original_file_extension, no_cover_processing):
|
||||
ns = {
|
||||
'n': 'urn:oasis:names:tc:opendocument:xmlns:container',
|
||||
'pkg': 'http://www.idpf.org/2007/opf',
|
||||
@@ -90,7 +91,7 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
|
||||
elif s == 'date':
|
||||
epub_metadata[s] = tmp[0][:10]
|
||||
else:
|
||||
epub_metadata[s] = tmp[0].strip()
|
||||
epub_metadata[s] = strip_whitespaces(tmp[0])
|
||||
else:
|
||||
epub_metadata[s] = 'Unknown'
|
||||
|
||||
@@ -116,7 +117,10 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
|
||||
epub_metadata = parse_epub_series(ns, tree, epub_metadata)
|
||||
|
||||
epub_zip = zipfile.ZipFile(tmp_file_path)
|
||||
cover_file = parse_epub_cover(ns, tree, epub_zip, cover_path, tmp_file_path)
|
||||
if not no_cover_processing:
|
||||
cover_file = parse_epub_cover(ns, tree, epub_zip, cover_path, tmp_file_path)
|
||||
else:
|
||||
cover_file = None
|
||||
|
||||
identifiers = []
|
||||
for node in p.xpath('dc:identifier', namespaces=ns):
|
||||
@@ -37,17 +37,31 @@ def error_http(error):
error_code="Error {0}".format(error.code),
error_name=error.name,
issue=False,
goto_admin=False,
unconfigured=not config.db_configured,
instance=config.config_calibre_web_title
), error.code


def internal_error(error):
    if (isinstance(error.original_exception, AttributeError) and
            error.original_exception.args[0] == "'NoneType' object has no attribute 'query'"
            and error.original_exception.name == "query"):
        return render_template('http_error.html',
                               error_code="Database Error",
                               error_name='The library used is invalid or has permission errors',
                               issue=False,
                               goto_admin=True,
                               unconfigured=False,
                               error_stack="",
                               instance=config.config_calibre_web_title
                               ), 500
    return render_template('http_error.html',
                           error_code="500 Internal Server Error",
                           error_name='The server encountered an internal error and was unable to complete your '
                                      'request. There is an error in the application.',
                           issue=True,
                           goto_admin=False,
                           unconfigured=False,
                           error_stack=traceback.format_exc().split("\n"),
                           instance=config.config_calibre_web_title
@@ -34,6 +34,15 @@ except ImportError as e:
 error = "Cannot import python-magic, checking uploaded file metadata will not work: {}".format(e)


+def get_mimetype(ext):
+    # overwrite some mimetypes for proper file detection
+    mimes = {".fb2": "text/xml",
+             ".cbz": "application/zip",
+             ".cbr": "application/x-rar"
+             }
+    return mimes.get(ext, mimetypes.types_map[ext])


 def get_temp_dir():
 tmp_dir = os.path.join(gettempdir(), 'calibre_web')
 if not os.path.isdir(tmp_dir):

@@ -54,7 +63,7 @@ def validate_mime_type(file_buffer, allowed_extensions):
 allowed_mimetypes = list()
 for x in allowed_extensions:
 try:
-    allowed_mimetypes.append(mimetypes.types_map["." + x])
+    allowed_mimetypes.append(get_mimetype("." + x))
 except KeyError:
     log.error("Unknown mimetype for extension: {}".format(x))
 tmp_mime_type = mime.from_buffer(file_buffer.read())
cps/helper.py (131 changes)

@@ -25,12 +25,12 @@ import re
 import regex
 import shutil
 import socket
-from datetime import datetime, timedelta
+from datetime import datetime, timedelta, timezone
 import requests
 import unidecode
 from uuid import uuid4

-from flask import send_from_directory, make_response, abort, url_for, Response
+from flask import send_from_directory, make_response, abort, url_for, Response, request
 from flask_babel import gettext as _
 from flask_babel import lazy_gettext as N_
 from flask_babel import get_locale

@@ -43,15 +43,16 @@ from markupsafe import escape
 from urllib.parse import quote

 try:
-    import advocate
-    from advocate.exceptions import UnacceptableAddressException
+    from . import cw_advocate
+    from .cw_advocate.exceptions import UnacceptableAddressException
     use_advocate = True
-except ImportError:
+except ImportError as e:
     use_advocate = False
     advocate = requests
     UnacceptableAddressException = MissingSchema = BaseException

 from . import calibre_db, cli_param
+from .string_helper import strip_whitespaces
 from .tasks.convert import TaskConvert
 from . import logger, config, db, ub, fs
 from . import gdriveutils as gd
@@ -118,7 +119,7 @@ def convert_book_format(book_id, calibre_path, old_book_format, new_book_format,
 # Texts are not lazy translated as they are supposed to get sent out as-is
 def send_test_mail(ereader_mail, user_name):
 for email in ereader_mail.split(','):
-    email = email.strip()
+    email = strip_whitespaces(email)
     WorkerThread.add(user_name, TaskEmail(_('Calibre-Web Test Email'), None, None,
                                           config.get_mail_settings(), email, N_("Test Email"),
                                           _('This Email has been sent via Calibre-Web.')))
@@ -198,7 +199,7 @@ def check_send_to_ereader(entry):
 # Check if a reader is existing for any of the book formats, if not, return empty list, otherwise return
 # list with supported formats
 def check_read_formats(entry):
-    extensions_reader = {'TXT', 'PDF', 'EPUB', 'CBZ', 'CBT', 'CBR', 'DJVU', 'DJV'}
+    extensions_reader = {'TXT', 'PDF', 'EPUB', 'KEPUB', 'CBZ', 'CBT', 'CBR', 'DJVU', 'DJV'}
 book_formats = list()
 if len(entry.data):
     for ele in iter(entry.data):
@@ -228,7 +229,7 @@ def send_mail(book_id, book_format, convert, ereader_mail, calibrepath, user_id)
 link = '<a href="{}">{}</a>'.format(url_for('web.show_book', book_id=book_id), escape(book.title))
 email_text = N_("%(book)s send to eReader", book=link)
 for email in ereader_mail.split(','):
-    email = email.strip()
+    email = strip_whitespaces(email)
     WorkerThread.add(user_id, TaskEmail(_("Send to eReader"), book.path, converted_file_name,
                                         config.get_mail_settings(), email,
                                         email_text, _('This Email has been sent via Calibre-Web.'), book.id))
@@ -236,7 +237,7 @@ def send_mail(book_id, book_format, convert, ereader_mail, calibrepath, user_id)
 return _("The requested file could not be read. Maybe wrong permissions?")


-def get_valid_filename(value, replace_whitespace=True, chars=128):
+def get_valid_filename(value, replace_whitespace=True, chars=128, force_unidecode=False):
 """
 Returns the given string converted to a string that can be used for a clean
 filename. Limits num characters to 128 max.

@@ -244,7 +245,7 @@ def get_valid_filename(value, replace_whitespace=True, chars=128):
 if value[-1:] == '.':
     value = value[:-1]+'_'
 value = value.replace("/", "_").replace(":", "_").strip('\0')
-if config.config_unicode_filename:
+if config.config_unicode_filename or force_unidecode:
     value = (unidecode.unidecode(value))
 if replace_whitespace:
     # *+:\"/<>? are replaced by _
@@ -252,7 +253,7 @@ def get_valid_filename(value, replace_whitespace=True, chars=128):
 # pipe has to be replaced with comma
 value = re.sub(r'[|]+', ',', value, flags=re.U)

-value = value.encode('utf-8')[:chars].decode('utf-8', errors='ignore').strip()
+value = strip_whitespaces(value.encode('utf-8')[:chars].decode('utf-8', errors='ignore'))

 if not value:
     raise ValueError("Filename cannot be empty")
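The truncation in `get_valid_filename` above limits the filename by UTF-8 byte length, not character count. Slicing the byte string can cut through a multi-byte character, which is why it decodes with `errors='ignore'` to drop the dangling partial sequence. A minimal standalone sketch (hypothetical helper name):

```python
def truncate_utf8(value, chars=128):
    # Slice the encoded bytes, then decode ignoring any half-cut trailing
    # character instead of raising UnicodeDecodeError.
    return value.encode('utf-8')[:chars].decode('utf-8', errors='ignore').strip()
```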
@@ -267,11 +268,11 @@ def split_authors(values):
 commas = author.count(',')
 if commas == 1:
     author_split = author.split(',')
-    authors_list.append(author_split[1].strip() + ' ' + author_split[0].strip())
+    authors_list.append(strip_whitespaces(author_split[1]) + ' ' + strip_whitespaces(author_split[0]))
 elif commas > 1:
-    authors_list.extend([x.strip() for x in author.split(',')])
+    authors_list.extend([strip_whitespaces(x) for x in author.split(',')])
 else:
-    authors_list.append(author.strip())
+    authors_list.append(strip_whitespaces(author))
 return authors_list
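The comma-handling rule visible in the `split_authors` hunk above: exactly one comma means "Last, First" and is reordered, while several commas mean a plain list. A hedged sketch of that rule, assuming ampersand-separated input strings as Calibre commonly uses (the `&` split is an assumption, not shown in the hunk):

```python
def split_authors(values):
    authors_list = []
    for value in values:
        for author in value.split('&'):  # assumption: '&' separates co-authors
            commas = author.count(',')
            if commas == 1:
                # "Last, First" -> "First Last"
                last, first = author.split(',')
                authors_list.append(first.strip() + ' ' + last.strip())
            elif commas > 1:
                # treat as an already comma-separated list of names
                authors_list.extend(x.strip() for x in author.split(','))
            else:
                authors_list.append(author.strip())
    return authors_list
```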
@@ -327,7 +328,7 @@ def edit_book_read_status(book_id, read_status=None):
 ub.session_commit("Book {} readbit toggled".format(book_id))
 else:
 try:
-    calibre_db.update_title_sort(config)
+    calibre_db.create_functions(config)
     book = calibre_db.get_filtered_book(book_id)
     book_read_status = getattr(book, 'custom_column_' + str(config.config_read_column))
     if len(book_read_status):
@@ -417,8 +418,6 @@ def rename_author_path(first_author, old_author_dir, renamed_author, calibre_pat
 # Create new_author_dir from parameter or from database
 # Create new title_dir from database and add id
 new_authordir = get_valid_filename(first_author, chars=96)
-# new_author = calibre_db.session.query(db.Authors).filter(db.Authors.name == renamed_author).first()
-# old_author_dir = get_valid_filename(old_author_name, chars=96)
 new_author_rename_dir = get_valid_filename(renamed_author, chars=96)
 if gdrive:
     g_file = gd.getFileFromEbooksFolder(None, old_author_dir)

@@ -467,7 +466,6 @@ def update_dir_structure_file(book_id, calibre_path, original_filepath, new_auth
 db_filename,
 original_filepath,
 path)
-# old_path = os.path.join(calibre_path, author_dir, new_title_dir).replace('\\', '/')
 new_path = os.path.join(calibre_path, new_author_dir, new_title_dir).replace('\\', '/')
 all_new_name = get_valid_filename(local_book.title, chars=42) + ' - ' \
                + get_valid_filename(new_author, chars=42)

@@ -476,8 +474,6 @@ def update_dir_structure_file(book_id, calibre_path, original_filepath, new_auth
 if error:
     return error

-# Rename all files from old names to new names
 return False
@@ -489,7 +485,7 @@ def upload_new_file_gdrive(book_id, first_author, title, title_dir, original_fil
 title_dir + " (" + str(book_id) + ")")
 book.path = gdrive_path.replace("\\", "/")
 gd.uploadFileToEbooksFolder(os.path.join(gdrive_path, file_name).replace("\\", "/"), original_filepath)
-return False  # rename_files_on_change(first_author, renamed_author, local_book=book, gdrive=True)
+return False


 def update_dir_structure_gdrive(book_id, first_author):
@@ -517,24 +513,26 @@ def update_dir_structure_gdrive(book_id, first_author):
 book.path = new_authordir + '/' + book.path.split('/')[1]
 gd.updateDatabaseOnEdit(g_file['id'], book.path)
 else:
-    return _('File %(file)s not found on Google Drive', file=authordir)  # file not found'''
+    return _('File %(file)s not found on Google Drive', file=authordir)  # file not found
 if titledir != new_titledir or authordir != new_authordir:
     all_new_name = get_valid_filename(book.title, chars=42) + ' - ' \
                    + get_valid_filename(new_authordir, chars=42)
     rename_all_files_on_change(book, book.path, book.path, all_new_name, gdrive=True)  # todo: Move filenames on gdrive
 # change location in database to new author/title path
 # book.path = os.path.join(authordir, new_titledir).replace('\\', '/')
 return False


 def move_files_on_change(calibre_path, new_author_dir, new_titledir, localbook, db_filename, original_filepath, path):
 new_path = os.path.join(calibre_path, new_author_dir, new_titledir)
 # new_name = get_valid_filename(localbook.title, chars=96) + ' - ' + new_author_dir
 try:
     if original_filepath:
         if not os.path.isdir(new_path):
             os.makedirs(new_path)
-        shutil.move(original_filepath, os.path.join(new_path, db_filename))
+        try:
+            shutil.move(original_filepath, os.path.join(new_path, db_filename))
+        except OSError:
+            log.error("Rename title from {} to {} failed with error, trying to "
+                      "move without metadata".format(path, new_path))
+            shutil.move(original_filepath, os.path.join(new_path, db_filename), copy_function=shutil.copy)
         log.debug("Moving title: %s to %s", original_filepath, new_path)
     else:
         # Check new path is not valid path
@@ -661,7 +659,7 @@ def check_email(email):


 def check_username(username):
-    username = username.strip()
+    username = strip_whitespaces(username)
 if ub.session.query(ub.User).filter(func.lower(ub.User.name) == username.lower()).scalar():
     log.error("This username is already taken")
     raise Exception(_("This username is already taken"))
@@ -669,16 +667,18 @@ def check_username(username):


 def valid_email(emails):
+    valid_emails = []
 for email in emails.split(','):
-    email = email.strip()
-    # if email is not deleted
-    if email:
-        # Regex according to https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input/email#validation
-        if not re.search(r"^[\w.!#$%&'*+\\/=?^_`{|}~-]+@[\w](?:[\w-]{0,61}[\w])?(?:\.[\w](?:[\w-]{0,61}[\w])?)*$",
-                         email):
-            log.error("Invalid Email address format")
-            raise Exception(_("Invalid Email address format"))
-return email
+    email = strip_whitespaces(email)
+    # if email is not deleted
+    if email:
+        # Regex according to https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input/email#validation
+        if not re.search(r"^[\w.!#$%&'*+\\/=?^_`{|}~-]+@[\w](?:[\w-]{0,61}[\w])?(?:\.[\w](?:[\w-]{0,61}[\w])?)*$",
+                         email):
+            log.error("Invalid Email address format for {}".format(email))
+            raise Exception(_("Invalid Email address format"))
+        valid_emails.append(email)
+return ",".join(valid_emails)


 def valid_password(check_password):
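The reworked `valid_email` above now collects every surviving address and rejoins them, instead of returning only the last entry of the comma-separated list. A standalone sketch of that behaviour (raising `ValueError` in place of the project's translated `Exception`):

```python
import re

# Pattern from the MDN email-input validation page, as referenced in the hunk.
EMAIL_RE = re.compile(
    r"^[\w.!#$%&'*+\\/=?^_`{|}~-]+@[\w](?:[\w-]{0,61}[\w])?(?:\.[\w](?:[\w-]{0,61}[\w])?)*$")

def valid_email(emails):
    valid = []
    for email in emails.split(','):
        email = email.strip()
        if email:  # skip entries the user deleted (empty after trimming)
            if not EMAIL_RE.search(email):
                raise ValueError("Invalid Email address format")
            valid.append(email)
    return ",".join(valid)
```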
@@ -788,24 +788,23 @@ def get_book_cover_internal(book, resolution=None):

 def get_book_cover_thumbnail(book, resolution):
 if book and book.has_cover:
-    return ub.session \
-        .query(ub.Thumbnail) \
-        .filter(ub.Thumbnail.type == THUMBNAIL_TYPE_COVER) \
-        .filter(ub.Thumbnail.entity_id == book.id) \
-        .filter(ub.Thumbnail.resolution == resolution) \
-        .filter(or_(ub.Thumbnail.expiration.is_(None), ub.Thumbnail.expiration > datetime.utcnow())) \
-        .first()
+    return (ub.session
+            .query(ub.Thumbnail)
+            .filter(ub.Thumbnail.type == THUMBNAIL_TYPE_COVER)
+            .filter(ub.Thumbnail.entity_id == book.id)
+            .filter(ub.Thumbnail.resolution == resolution)
+            .filter(or_(ub.Thumbnail.expiration.is_(None), ub.Thumbnail.expiration > datetime.now(timezone.utc)))
+            .first())


 def get_series_thumbnail_on_failure(series_id, resolution):
-    book = calibre_db.session \
-        .query(db.Books) \
-        .join(db.books_series_link) \
-        .join(db.Series) \
-        .filter(db.Series.id == series_id) \
-        .filter(db.Books.has_cover == 1) \
-        .first()
+    book = (calibre_db.session
+            .query(db.Books)
+            .join(db.books_series_link)
+            .join(db.Series)
+            .filter(db.Series.id == series_id)
+            .filter(db.Books.has_cover == 1)
+            .first())
 return get_book_cover_internal(book, resolution=resolution)
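The thumbnail hunks above replace `datetime.utcnow()` with `datetime.now(timezone.utc)`: the former returns a naive timestamp (and is deprecated since Python 3.12), while the latter carries `tzinfo` and can be compared safely against other aware timestamps such as stored expiration values. A quick demonstration:

```python
from datetime import datetime, timezone

aware = datetime.now(timezone.utc)   # tzinfo is timezone.utc
epoch = datetime(2000, 1, 1, tzinfo=timezone.utc)

# Aware-to-aware comparison is well defined; mixing naive and aware
# datetimes raises TypeError, which is what the change avoids.
assert aware > epoch
```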
@@ -827,13 +826,13 @@ def get_series_cover_internal(series_id, resolution=None):


 def get_series_thumbnail(series_id, resolution):
-    return ub.session \
-        .query(ub.Thumbnail) \
-        .filter(ub.Thumbnail.type == THUMBNAIL_TYPE_SERIES) \
-        .filter(ub.Thumbnail.entity_id == series_id) \
-        .filter(ub.Thumbnail.resolution == resolution) \
-        .filter(or_(ub.Thumbnail.expiration.is_(None), ub.Thumbnail.expiration > datetime.utcnow())) \
-        .first()
+    return (ub.session
+            .query(ub.Thumbnail)
+            .filter(ub.Thumbnail.type == THUMBNAIL_TYPE_SERIES)
+            .filter(ub.Thumbnail.entity_id == series_id)
+            .filter(ub.Thumbnail.resolution == resolution)
+            .filter(or_(ub.Thumbnail.expiration.is_(None), ub.Thumbnail.expiration > datetime.now(timezone.utc)))
+            .first())


 # saves book cover from url
@@ -842,7 +841,7 @@ def save_cover_from_url(url, book_path):
 if cli_param.allow_localhost:
     img = requests.get(url, timeout=(10, 200), allow_redirects=False)  # ToDo: Error Handling
 elif use_advocate:
-    img = advocate.get(url, timeout=(10, 200), allow_redirects=False)  # ToDo: Error Handling
+    img = cw_advocate.get(url, timeout=(10, 200), allow_redirects=False)  # ToDo: Error Handling
 else:
     log.error("python module advocate is not installed but is needed")
     return False, _("Python module 'advocate' is not installed but is needed for cover uploads")
@@ -906,7 +905,7 @@ def save_cover(img, book_path):
 else:
     imgc = Image(blob=io.BytesIO(img.content))
     imgc.format = 'jpeg'
-    imgc.transform_colorspace("rgb")
+    imgc.transform_colorspace("srgb")
     img = imgc
 except (BlobError, MissingDelegateError):
     log.error("Invalid cover file content")
@@ -975,7 +974,8 @@ def do_download_file(book, book_format, client, data, headers):
 # ToDo Check headers parameter
 for element in headers:
     response.headers[element[0]] = element[1]
-log.info('Downloading file: {}'.format(os.path.join(filename, book_name + "." + book_format)))
+log.info('Downloading file: \'%s\' by %s - %s', format(os.path.join(filename, book_name + "." + book_format)),
+         current_user.name, request.headers.get('X-Forwarded-For', request.remote_addr))
 return response
@@ -1105,11 +1105,14 @@ def get_download_link(book_id, book_format, client):
 file_name = book.title
 if len(book.authors) > 0:
     file_name = file_name + ' - ' + book.authors[0].name
-file_name = get_valid_filename(file_name, replace_whitespace=False)
+if client == "kindle":
+    file_name = get_valid_filename(file_name, replace_whitespace=False, force_unidecode=True)
+else:
+    file_name = quote(get_valid_filename(file_name, replace_whitespace=False))
 headers = Headers()
 headers["Content-Type"] = mimetypes.types_map.get('.' + book_format, "application/octet-stream")
-headers["Content-Disposition"] = "attachment; filename=%s.%s; filename*=UTF-8''%s.%s" % (
-    quote(file_name), book_format, quote(file_name), book_format)
+headers["Content-Disposition"] = ('attachment; filename="{}.{}"; filename*=UTF-8\'\'{}.{}').format(
+    file_name, book_format, file_name, book_format)
 return do_download_file(book, book_format, client, data1, headers)
 else:
 log.error("Book id {} not found for downloading".format(book_id))
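The download-link hunk above sends the filename twice: once as the plain quoted `filename` and once as the RFC 5987 `filename*` parameter, so both legacy and modern clients can reconstruct non-ASCII names. A hedged sketch of that header construction (hypothetical helper, not the project's function, which percent-encodes earlier in the flow):

```python
from urllib.parse import quote

def content_disposition(file_name, book_format):
    # Percent-encode once, then reuse the encoded name for both the plain
    # filename and the UTF-8 filename* parameter.
    encoded = quote(file_name)
    return 'attachment; filename="{}.{}"; filename*=UTF-8\'\'{}.{}'.format(
        encoded, book_format, encoded, book_format)
```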
@@ -15,24 +15,17 @@
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import sys

from .iso_language_names import LANGUAGE_NAMES as _LANGUAGE_NAMES
from . import logger
from .string_helper import strip_whitespaces

log = logger.create()


try:
    from iso639 import languages, __version__
    get = languages.get
except ImportError:
    from pycountry import languages as pyc_languages
    try:
        import pkg_resources
        __version__ = pkg_resources.get_distribution('pycountry').version + ' (PyCountry)'
        del pkg_resources
    except (ImportError, Exception):
        __version__ = "? (PyCountry)"

    def _copy_fields(l):
        l.part1 = getattr(l, 'alpha_2', None)

@@ -46,6 +39,11 @@ except ImportError:
        return _copy_fields(pyc_languages.get(alpha_2=part1))
    if name is not None:
        return _copy_fields(pyc_languages.get(name=name))
except ImportError as ex:
    if sys.version_info >= (3, 12):
        print("Python 3.12 isn't compatible with iso-639. Please install pycountry.")
    from iso639 import languages
    get = languages.get


def get_language_names(locale):
@@ -69,20 +67,20 @@ def get_language_name(locale, lang_code):
 return name


-def get_language_codes(locale, language_names, remainder=None):
-    language_names = set(x.strip().lower() for x in language_names if x)
+def get_language_code_from_name(locale, language_names, remainder=None):
+    language_names = set(strip_whitespaces(x).lower() for x in language_names if x)
 lang = list()
-for k, v in get_language_names(locale).items():
-    v = v.lower()
-    if v in language_names:
-        lang.append(k)
-        language_names.remove(v)
+for key, val in get_language_names(locale).items():
+    val = val.lower()
+    if val in language_names:
+        lang.append(key)
+        language_names.remove(val)
 if remainder is not None and language_names:
     remainder.extend(language_names)
 return lang


-def get_valid_language_codes(locale, language_names, remainder=None):
+def get_valid_language_codes_from_code(locale, language_names, remainder=None):
 lang = list()
 if "" in language_names:
     language_names.remove("")
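The renamed `get_language_code_from_name` above inverts a locale-specific code-to-display-name mapping: names match case-insensitively, each match is consumed, and anything left over lands in `remainder`. A minimal sketch of that lookup, where `names_by_code` is a hypothetical stand-in for `get_language_names(locale)`:

```python
def language_codes_from_names(names_by_code, language_names, remainder=None):
    # Normalise the requested names once, then consume matches so duplicate
    # display names cannot map to two codes.
    wanted = set(x.strip().lower() for x in language_names if x)
    codes = []
    for code, name in names_by_code.items():
        name = name.lower()
        if name in wanted:
            codes.append(code)
            wanted.remove(name)
    if remainder is not None and wanted:
        remainder.extend(wanted)  # names nobody matched
    return codes
```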
@@ -103,6 +101,6 @@ def get_lang3(lang):
 ret_value = lang
 else:
     ret_value = ""
-except KeyError:
+except (KeyError, AttributeError):
 ret_value = lang
 return ret_value
@@ -8138,6 +8138,384 @@ LANGUAGE_NAMES = {
    "zul": "Zulu",
    "zun": "Zuni"
},
"sl": {
    "abk": "abhazijski",
    "ace": "achinese",
    "ach": "Acoli",
    "ada": "Adangme",
    "ady": "Adyghe",
    "aar": "afarski",
    "afh": "Afrihili",
    "afr": "afrikanski",
    "ain": "Ainu (Japan)",
    "aka": "Akan",
    "akk": "akadski",
    "sqi": "albanščina",
    "ale": "aleutski",
    "amh": "amharski",
    "anp": "Angika",
    "ara": "arabski",
    "arg": "aragonski",
    "arp": "Arapaho",
    "arw": "araukanski",
    "hye": "armenščina",
    "asm": "asamski",
    "ast": "Asturian",
    "ava": "avarski",
    "ave": "avestijski jeziki",
    "awa": "Awadhi",
    "aym": "Aymara",
    "aze": "azerbajdžanski",
    "ban": "balijščina",
    "bal": "belučijski",
    "bam": "bambarski",
    "bas": "Basa (Cameroon)",
    "bak": "baškirski",
    "eus": "baskovščina",
    "bej": "Beja",
    "bel": "beloruščina",
    "bem": "Bemba (Zambia)",
    "ben": "bengalščina",
    "bit": "Berinomo",
    "bho": "Bhojpuri",
    "bik": "bikolščina",
    "byn": "Bilin",
    "bin": "Bini",
    "bis": "bislama",
    "zbl": "Blissymbols",
    "bos": "bošnjaščina",
    "bra": "Braj",
    "bre": "bretonščina",
    "bug": "buginščina",
    "bul": "bolgarščina",
    "bua": "burjatščina",
    "mya": "burmanščina",
    "cad": "kadajščina?",
    "cat": "katalonščina",
    "ceb": "cebuanščina",
    "chg": "Chagatai",
    "cha": "čamorščina",
    "che": "čečenščina",
    "chr": "čerokeščina",
    "chy": "čejenščina",
    "chb": "čibčevščina",
    "zho": "kitajščina",
    "chn": "Chinook jargon",
    "chp": "čipevščina",
    "cho": "Choctaw",
    "cht": "Cholón",
    "chk": "Chuukese",
    "chv": "čuvaščina",
    "cop": "koptščina",
    "cor": "kornijščina",
    "cos": "korzijščina",
    "cre": "krijščina",
    "mus": "Creek",
    "hrv": "hrvaščina",
    "ces": "češčina",
    "dak": "Dakota",
    "dan": "danski",
    "dar": "Dargwa",
    "del": "Delaware",
    "div": "Dhivehi",
    "din": "Dinka",
    "doi": "Dogri (macrolanguage)",
    "dgr": "Dogrib",
    "dua": "Duala",
    "nld": "nizozemščina",
    "dse": "Dutch Sign Language",
    "dyu": "Dyula",
    "dzo": "dzongkha",
    "efi": "Efik",
    "egy": "egipčanski",
    "eka": "Ekajuk",
    "elx": "elamščina",
    "eng": "angleščina",
    "enu": "Enu",
    "myv": "Erzya",
    "epo": "esperanto",
    "est": "estonščina",
    "ewe": "evenščina",
    "ewo": "Ewondo",
    "fan": "Fang (Equatorial Guinea)",
    "fat": "Fanti",
    "fao": "ferščina",
    "fij": "fidžijščina",
    "fil": "Filipino",
    "fin": "finščina",
    "fon": "Fon",
    "fra": "francoščina",
    "fur": "furlanščina",
    "ful": "fulščina",
    "gaa": "Ga",
    "glg": "Galician",
    "lug": "Ganda",
    "gay": "gajščina?",
    "gba": "Gbaya (Central African Republic)",
    "hmj": "Ge",
    "gez": "etiopščina?",
    "kat": "gruzinščina",
    "deu": "nemški",
    "gil": "gilbertščina",
    "gon": "Gondi",
    "gor": "Gorontalo",
    "got": "gotščina",
    "grb": "Grebo",
    "grn": "gvaranijščina",
    "guj": "gudžaratščina",
    "gwi": "Gwichʼin",
    "hai": "haidščina",
    "hau": "havščina",
    "haw": "havajščina",
    "heb": "hebrejščina",
    "her": "Herero",
    "hil": "hilingajnonščina",
    "hin": "hindijščina",
    "hmo": "hiri motu",
    "hit": "hetitščina",
    "hmn": "hmonščina; miaojščina",
    "hun": "madžarščina",
    "hup": "hupščina",
    "iba": "ibanščina",
    "isl": "islandščina",
    "ido": "Ido",
    "ibo": "Igbo",
    "ilo": "Iloko",
    "ind": "indonezijščina",
    "inh": "inguščina",
    "ina": "interlingva",
    "ile": "Interlingue",
    "iku": "inuktituščina",
    "ipk": "Inupiaq",
    "gle": "irščina",
    "ita": "italijanščina",
    "jpn": "japonščina",
    "jav": "javanščina",
    "jrb": "Judeo-Arabic",
    "jpr": "Judeo-Persian",
    "kbd": "kabardinščina",
    "kab": "Kabyle",
    "kac": "Kachin",
    "kal": "Kalaallisut",
    "xal": "Kalmyk",
    "kam": "Kamba (Kenya)",
    "kan": "kanareščina",
    "kau": "Kanuri",
    "kaa": "Kara-Kalpak",
    "krc": "Karachay-Balkar",
    "krl": "Karelian",
    "kas": "kašmirščina",
    "csb": "Kashubian",
    "kaw": "kavi",
    "kaz": "kazaščina",
    "kha": "Khasi",
    "kho": "Khotanese",
    "kik": "kikujščina",
    "kmb": "Kimbundu",
    "kin": "Kinyarwanda",
    "kir": "kirgiščina",
    "tlh": "Klingon",
    "kom": "komijščina",
    "kon": "Kongo",
    "kok": "Konkani (macrolanguage)",
    "kor": "korejščina",
    "kos": "Kosraean",
    "kpe": "Kpelle",
    "kua": "Kuanyama",
    "kum": "kumiščina",
    "kur": "kurdščina",
    "kru": "Kurukh",
    "kut": "kutenajščina",
    "lad": "ladinščina",
    "lah": "Lahnda",
    "lam": "Lamba",
    "lao": "laoščina",
    "lat": "latinščina",
    "lav": "latvijščina",
    "lez": "lezginščina",
    "lim": "Limburgan",
    "lin": "lingala",
    "lit": "litvanščina",
    "jbo": "Lojban",
    "loz": "Lozi",
    "lub": "Luba-Katanga",
    "lua": "lubalulujščina",
    "lui": "Luiseno",
    "smj": "Lule Sami",
    "lun": "Lunda",
    "luo": "Luo (Kenya and Tanzania)",
    "lus": "Lushai",
    "ltz": "Luxembourgish",
    "mkd": "makedonščina",
    "mad": "madurščina",
    "mag": "Magahi",
    "mai": "Maithili",
    "mak": "makasarščina",
    "mlg": "malgaščina",
    "msa": "Malay (macrolanguage)",
    "mal": "malajalščina",
    "mlt": "malteščina",
    "mnc": "Manchu",
    "mdr": "Mandar",
    "man": "Mandingo",
    "mni": "manipurščina",
    "glv": "manska gelščina",
    "mri": "maorščina",
    "arn": "Mapudungun",
    "mar": "maratščina",
    "chm": "Mari (Russia)",
    "mah": "Marshallese",
    "mwr": "Marwari",
    "mas": "masajščina",
    "men": "Mende (Sierra Leone)",
    "mic": "Mi'kmaq",
    "min": "Minangkabau",
    "mwl": "Mirandese",
    "moh": "mohoščina",
    "mdf": "Moksha",
    "lol": "Mongo",
    "mon": "mongolščina",
    "mos": "mosanščina",
    "mul": "Več jezikov",
    "nqo": "N'Ko",
    "nau": "Nauru",
    "nav": "navaščina",
    "ndo": "Ndonga",
    "nap": "napolitanščina",
    "nia": "niaščina",
    "niu": "niuejščina",
    "zxx": "No linguistic content",
    "nog": "Nogai",
    "nor": "norveščina",
    "nob": "Norwegian Bokmål",
    "nno": "norveščina; nynorsk",
    "nym": "Nyamwezi",
    "nya": "Nyanja",
    "nyn": "Nyankole",
    "nyo": "Nyoro",
    "nzi": "Nzima",
    "oci": "Occitan (post 1500)",
    "oji": "Ojibwa",
    "orm": "Oromo",
    "osa": "Osage",
    "oss": "Ossetian",
    "pal": "Pahlavi",
    "pau": "palavanščina",
    "pli": "Pali",
    "pam": "Pampanga",
    "pag": "pangasinanščina",
    "pan": "Panjabi",
    "pap": "papiamentu",
    "fas": "perzijščina",
    "phn": "feničanščina",
    "pon": "Pohnpeian",
    "pol": "poljščina",
    "por": "portugalđščina",
    "pus": "paštu",
    "que": "Quechua",
    "raj": "radžastanščina",
    "rap": "rapanujščina",
    "ron": "romunščina",
    "roh": "Romansh",
    "rom": "romščina",
    "run": "rundščina",
    "rus": "ruščina",
    "smo": "samoanščina",
    "sad": "Sandawe",
    "sag": "Sango",
    "san": "sanskrt",
    "sat": "santalščina",
    "srd": "sardinščina",
    "sas": "Sasak",
    "sco": "škotščina",
    "sel": "selkupščina",
    "srp": "srbščina",
    "srr": "Serer",
    "shn": "šanščina",
    "sna": "šonščina",
    "scn": "sicilijanščina",
    "sid": "Sidamo",
    "bla": "Siksika",
    "snd": "sindščina",
    "sin": "Sinhala",
    "den": "Slave (Athapascan)",
    "slk": "slovaščina",
    "slv": "slovenščina",
    "sog": "Sogdian",
    "som": "Somali",
    "snk": "Soninke",
    "spa": "španščina",
    "srn": "Sranan Tongo",
    "suk": "Sukuma",
    "sux": "sumerščina",
    "sun": "sundščina",
    "sus": "susuamijščina?",
    "swa": "Swahili (macrolanguage)",
    "ssw": "svazijščina?",
    "swe": "švedščina",
    "syr": "sirščina",
    "tgl": "tagaloščina",
    "tah": "tahitijščina",
    "tgk": "tadžiščina",
    "tmh": "Tamashek",
    "tam": "tamilščina",
    "tat": "tatarščina",
    "tel": "Telugu",
    "ter": "Tereno",
    "tet": "Tetum",
    "tha": "tajščina",
    "bod": "tibetanščina",
    "tig": "Tigre",
    "tir": "Tigrinya",
    "tem": "Timne",
    "tiv": "Tiv",
    "tli": "Tlingit",
    "tpi": "tok pisin",
    "tkl": "Tokelau",
    "tog": "Tonga (Nyasa)",
    "ton": "tonganščina",
    "tsi": "tsimšijščina",
    "tso": "Tsonga",
    "tsn": "Tswana",
    "tum": "Tumbuka",
    "tur": "turščina",
    "tuk": "turkmenščina",
    "tvl": "tuvalujščina",
    "tyv": "Tuvinian",
    "twi": "Twi",
    "udm": "Udmurt",
    "uga": "ugaritščina",
    "uig": "ujgurščina",
    "ukr": "ukrajinščina",
    "umb": "Umbundu",
    "mis": "Uncoded languages",
    "und": "nedoločen",
    "urd": "urdujščina",
    "uzb": "uzbeščina",
    "vai": "vajščina",
    "ven": "Venda",
    "vie": "vietnamščina",
    "vol": "Volapük",
    "vot": "votjaščina",
    "wln": "valonščina",
    "war": "Waray (Philippines)",
    "was": "Washo",
    "cym": "valižanščina",
    "wal": "Wolaytta",
    "wol": "Wolof",
    "xho": "koščina",
    "sah": "jakutščina",
    "yao": "jaojščina",
    "yap": "Yapese",
    "yid": "jidiš",
    "yor": "jorubščina",
    "zap": "Zapotec",
    "zza": "Zaza",
    "zen": "Zenaga",
    "zha": "Zhuang",
    "zul": "zulujščina",
    "zun": "Zuni"
},
"sv": {
    "aar": "Afar",
    "abk": "Abchaziska",
|
@@ -27,7 +27,7 @@ import datetime
|
||||
import mimetypes
|
||||
from uuid import uuid4
|
||||
|
||||
from flask import Blueprint, request, url_for
|
||||
from flask import Blueprint, request, url_for, g
|
||||
from flask_babel import format_date
|
||||
from .cw_login import current_user
|
||||
|
||||
@@ -43,6 +43,8 @@ def url_for_other_page(page):
|
||||
args = request.view_args.copy()
|
||||
args['page'] = page
|
||||
for get, val in request.args.items():
|
||||
if get == "page":
|
||||
continue
|
||||
args[get] = val
|
||||
return url_for(request.endpoint, **args)
|
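The `url_for_other_page` hunk above rebuilds the current URL with a new `page` value while keeping every other GET parameter. A minimal sketch of that merge outside Flask, with `page_args` as a hypothetical stand-in for the request-bound helper:

```python
def page_args(view_args, get_args, page):
    """Merge view args and GET args into one dict, letting the explicit page win."""
    args = dict(view_args)
    args['page'] = page
    for key, val in get_args.items():
        if key == 'page':
            continue  # the requested page overrides any stale 'page' GET value
        args[key] = val
    return args

print(page_args({'shelf_id': 3}, {'sort': 'new', 'page': '1'}, 2))
# {'shelf_id': 3, 'page': 2, 'sort': 'new'}
```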
||||
|
||||
@@ -111,21 +113,12 @@ def yesno(value, yes, no):
|
||||
|
||||
@jinjia.app_template_filter('formatfloat')
|
||||
def formatfloat(value, decimals=1):
|
||||
value = 0 if not value else value
|
||||
return ('{0:.' + str(decimals) + 'f}').format(value).rstrip('0').rstrip('.')
|
||||
|
||||
|
||||
@jinjia.app_template_filter('formatseriesindex')
|
||||
def formatseriesindex_filter(series_index):
|
||||
if series_index:
|
||||
try:
|
||||
if int(series_index) - series_index == 0:
|
||||
return int(series_index)
|
||||
else:
|
||||
return series_index
|
||||
except (ValueError, TypeError):
|
||||
return series_index
|
||||
return 0
|
||||
    if not value or (isinstance(value, str) and not value.isnumeric()):
|
||||
return value
|
||||
formated_value = ('{0:.' + str(decimals) + 'f}').format(value)
|
||||
if formated_value.endswith('.' + "0" * decimals):
|
||||
formated_value = formated_value.rstrip('0').rstrip('.')
|
||||
return formated_value
|
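The rewritten `formatfloat` filter only strips the fractional part when it is entirely zeros, so `2.05` no longer collapses to `2.05` → `2.` style artifacts. A self-contained sketch of that trimming logic (using the stdlib spelling `str.isnumeric()`):

```python
def formatfloat(value, decimals=1):
    """Format a number to 'decimals' places, dropping an all-zero fraction."""
    if not value or (isinstance(value, str) and not value.isnumeric()):
        return value
    formatted = '{0:.{1}f}'.format(float(value), decimals)
    if formatted.endswith('.' + '0' * decimals):
        # only an all-zero fraction is trimmed, e.g. '2.0' -> '2'
        formatted = formatted.rstrip('0').rstrip('.')
    return formatted

print(formatfloat(2.0))        # 2
print(formatfloat(2.5))        # 2.5
print(formatfloat(3.14159, 2)) # 3.14
```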
||||
|
||||
|
||||
@jinjia.app_template_filter('escapedlink')
|
||||
@@ -179,3 +172,12 @@ def get_cover_srcset(series):
|
||||
url = url_for('web.get_series_cover', series_id=series.id, resolution=shortname, c=cache_timestamp())
|
||||
srcset.append(f'{url} {resolution}x')
|
||||
return ', '.join(srcset)
|
||||
|
||||
|
||||
@jinjia.app_template_filter('music')
|
||||
def contains_music(book_formats):
|
||||
result = False
|
||||
for format in book_formats:
|
||||
if format.format.lower() in g.constants.EXTENSIONS_AUDIO:
|
||||
result = True
|
||||
return result
|
||||
|
201
cps/kobo.py
@@ -18,7 +18,7 @@
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
import base64
|
||||
import datetime
|
||||
from datetime import datetime, timezone
|
||||
import os
|
||||
import uuid
|
||||
import zipfile
|
||||
@@ -47,7 +47,7 @@ import requests
|
||||
from . import config, logger, kobo_auth, db, calibre_db, helper, shelf as shelf_lib, ub, csrf, kobo_sync_status
|
||||
from . import isoLanguages
|
||||
from .epub import get_epub_layout
|
||||
from .constants import COVER_THUMBNAIL_SMALL
|
||||
from .constants import COVER_THUMBNAIL_SMALL, COVER_THUMBNAIL_MEDIUM, COVER_THUMBNAIL_LARGE
|
||||
from .helper import get_download_link
|
||||
from .services import SyncToken as SyncToken
|
||||
from .web import download_required
|
||||
@@ -106,24 +106,29 @@ def make_request_to_kobo_store(sync_token=None):
|
||||
return store_response
|
||||
|
||||
|
||||
def redirect_or_proxy_request():
|
||||
def redirect_or_proxy_request(auth=False):
|
||||
if config.config_kobo_proxy:
|
||||
if request.method == "GET":
|
||||
return redirect(get_store_url_for_current_request(), 307)
|
||||
else:
|
||||
# The Kobo device turns other request types into GET requests on redirects,
|
||||
# so we instead proxy to the Kobo store ourselves.
|
||||
store_response = make_request_to_kobo_store()
|
||||
try:
|
||||
if request.method == "GET":
|
||||
alfa = redirect(get_store_url_for_current_request(), 307)
|
||||
return alfa
|
||||
else:
|
||||
# The Kobo device turns other request types into GET requests on redirects,
|
||||
# so we instead proxy to the Kobo store ourselves.
|
||||
store_response = make_request_to_kobo_store()
|
||||
|
||||
response_headers = store_response.headers
|
||||
for header_key in CONNECTION_SPECIFIC_HEADERS:
|
||||
response_headers.pop(header_key, default=None)
|
||||
response_headers = store_response.headers
|
||||
for header_key in CONNECTION_SPECIFIC_HEADERS:
|
||||
response_headers.pop(header_key, default=None)
|
||||
|
||||
return make_response(
|
||||
store_response.content, store_response.status_code, response_headers.items()
|
||||
)
|
||||
else:
|
||||
return make_response(jsonify({}))
|
||||
return make_response(
|
||||
store_response.content, store_response.status_code, response_headers.items()
|
||||
)
|
||||
except Exception as e:
|
||||
log.error("Failed to receive or parse response from Kobo's endpoint: {}".format(e))
|
||||
if auth:
|
||||
return make_calibre_web_auth_response()
|
||||
return make_response(jsonify({}))
|
||||
|
||||
|
||||
def convert_to_kobo_timestamp_string(timestamp):
|
||||
@@ -131,7 +136,7 @@ def convert_to_kobo_timestamp_string(timestamp):
|
||||
return timestamp.strftime("%Y-%m-%dT%H:%M:%SZ")
|
||||
except AttributeError as exc:
|
||||
log.debug("Timestamp not valid: {}".format(exc))
|
||||
return datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
|
||||
return datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
|
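The hunk above replaces the deprecated `datetime.datetime.utcnow()` fallback with timezone-aware `datetime.now(timezone.utc)`. A standalone sketch of the converter showing both the happy path and the fallback:

```python
from datetime import datetime, timezone

def convert_to_kobo_timestamp_string(timestamp):
    """Render a datetime in the 'YYYY-MM-DDTHH:MM:SSZ' form the Kobo API expects;
    anything without strftime() falls back to the current UTC time."""
    try:
        return timestamp.strftime("%Y-%m-%dT%H:%M:%SZ")
    except AttributeError:
        return datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

print(convert_to_kobo_timestamp_string(datetime(2024, 1, 2, 3, 4, 5)))
# 2024-01-02T03:04:05Z
```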
||||
|
||||
|
||||
@kobo.route("/v1/library/sync")
|
||||
@@ -150,15 +155,15 @@ def HandleSyncRequest():
|
||||
|
||||
# if no books synced don't respect sync_token
|
||||
if not ub.session.query(ub.KoboSyncedBooks).filter(ub.KoboSyncedBooks.user_id == current_user.id).count():
|
||||
sync_token.books_last_modified = datetime.datetime.min
|
||||
sync_token.books_last_created = datetime.datetime.min
|
||||
sync_token.reading_state_last_modified = datetime.datetime.min
|
||||
sync_token.books_last_modified = datetime.min
|
||||
sync_token.books_last_created = datetime.min
|
||||
sync_token.reading_state_last_modified = datetime.min
|
||||
|
||||
    new_books_last_modified = sync_token.books_last_modified  # needed for syncing selected shelves only
|
||||
new_books_last_created = sync_token.books_last_created # needed to distinguish between new and changed entitlement
|
||||
new_reading_state_last_modified = sync_token.reading_state_last_modified
|
||||
|
||||
new_archived_last_modified = datetime.datetime.min
|
||||
new_archived_last_modified = datetime.min
|
||||
sync_results = []
|
||||
|
||||
# We reload the book database so that the user gets a fresh view of the library
|
||||
@@ -323,7 +328,7 @@ def generate_sync_response(sync_token, sync_results, set_cont=False):
|
||||
sync_token.to_headers(extra_headers)
|
||||
|
||||
# log.debug("Kobo Sync Content: {}".format(sync_results))
|
||||
# jsonify decodes the unicode string different to what kobo expects
|
||||
    # jsonify decodes the Unicode string differently from what Kobo expects
|
||||
response = make_response(json.dumps(sync_results), extra_headers)
|
||||
response.headers["Content-Type"] = "application/json; charset=utf-8"
|
||||
return response
|
||||
@@ -375,7 +380,7 @@ def create_book_entitlement(book, archived):
|
||||
book_uuid = str(book.uuid)
|
||||
return {
|
||||
"Accessibility": "Full",
|
||||
"ActivePeriod": {"From": convert_to_kobo_timestamp_string(datetime.datetime.utcnow())},
|
||||
"ActivePeriod": {"From": convert_to_kobo_timestamp_string(datetime.now(timezone.utc))},
|
||||
"Created": convert_to_kobo_timestamp_string(book.timestamp),
|
||||
"CrossRevisionId": book_uuid,
|
||||
"Id": book_uuid,
|
||||
@@ -423,7 +428,7 @@ def get_series(book):
|
||||
|
||||
|
||||
def get_seriesindex(book):
|
||||
return book.series_index or 1
|
||||
return book.series_index if isinstance(book.series_index, float) else 1
|
||||
|
||||
|
||||
def get_language(book):
|
||||
@@ -486,14 +491,16 @@ def get_metadata(book):
|
||||
|
||||
if get_series(book):
|
||||
name = get_series(book)
|
||||
metadata["Series"] = {
|
||||
"Name": get_series(book),
|
||||
"Number": get_seriesindex(book), # ToDo Check int() ?
|
||||
"NumberFloat": float(get_seriesindex(book)),
|
||||
# Get a deterministic id based on the series name.
|
||||
"Id": str(uuid.uuid3(uuid.NAMESPACE_DNS, name)),
|
||||
}
|
||||
|
||||
try:
|
||||
metadata["Series"] = {
|
||||
"Name": get_series(book),
|
||||
"Number": get_seriesindex(book), # ToDo Check int() ?
|
||||
"NumberFloat": float(get_seriesindex(book)),
|
||||
# Get a deterministic id based on the series name.
|
||||
"Id": str(uuid.uuid3(uuid.NAMESPACE_DNS, name)),
|
||||
}
|
||||
except Exception as e:
|
||||
            log.error(e)
|
||||
return metadata
|
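The series metadata above derives the `Id` deterministically from the series name via `uuid.uuid3`, so every sync produces the same id for the same series. A small sketch of that property (`series_id` is an illustrative name, not a function in the module):

```python
import uuid

def series_id(name):
    """Deterministic series id: the same name always yields the same UUID."""
    return str(uuid.uuid3(uuid.NAMESPACE_DNS, name))

print(series_id("Discworld") == series_id("Discworld"))  # True: stable across syncs
print(series_id("Discworld") != series_id("Culture"))    # True: distinct per series
```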
||||
|
||||
|
||||
@@ -725,7 +732,7 @@ def sync_shelves(sync_token, sync_results, only_kobo_shelves=False):
|
||||
ub.session_commit()
|
||||
|
||||
|
||||
# Creates a Kobo "Tag" object from a ub.Shelf object
|
||||
# Creates a Kobo "Tag" object from an ub.Shelf object
|
||||
def create_kobo_tag(shelf):
|
||||
tag = {
|
||||
"Created": convert_to_kobo_timestamp_string(shelf.created),
|
||||
@@ -795,7 +802,7 @@ def HandleStateRequest(book_uuid):
|
||||
if new_book_read_status == ub.ReadBook.STATUS_IN_PROGRESS \
|
||||
and new_book_read_status != book_read.read_status:
|
||||
book_read.times_started_reading += 1
|
||||
book_read.last_time_started_reading = datetime.datetime.utcnow()
|
||||
book_read.last_time_started_reading = datetime.now(timezone.utc)
|
||||
book_read.read_status = new_book_read_status
|
||||
update_results_response["StatusInfoResult"] = {"Result": "Success"}
|
||||
except (KeyError, TypeError, ValueError, StatementError):
|
||||
@@ -903,7 +910,12 @@ def get_current_bookmark_response(current_bookmark):
|
||||
@requires_kobo_auth
|
||||
def HandleCoverImageRequest(book_uuid, width, height, Quality, isGreyscale):
|
||||
try:
|
||||
resolution = None if int(height) > 1000 else COVER_THUMBNAIL_SMALL
|
||||
if int(height) > 1000:
|
||||
resolution = COVER_THUMBNAIL_LARGE
|
||||
elif int(height) > 500:
|
||||
resolution = COVER_THUMBNAIL_MEDIUM
|
||||
else:
|
||||
resolution = COVER_THUMBNAIL_SMALL
|
||||
except ValueError:
|
||||
        log.error("Requested height %s of book %s is invalid" % (height, book_uuid))
|
||||
resolution = COVER_THUMBNAIL_SMALL
|
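The cover hunk above picks a thumbnail tier from the requested height, falling back to small on a bad value. A sketch of that threshold logic, assuming the three constants are small integer ids as in `cps/constants.py` (the values below are illustrative):

```python
COVER_THUMBNAIL_SMALL, COVER_THUMBNAIL_MEDIUM, COVER_THUMBNAIL_LARGE = 1, 2, 3

def pick_resolution(height):
    """Map a requested pixel height onto a thumbnail tier."""
    try:
        h = int(height)
    except ValueError:
        return COVER_THUMBNAIL_SMALL  # invalid height falls back to small
    if h > 1000:
        return COVER_THUMBNAIL_LARGE
    elif h > 500:
        return COVER_THUMBNAIL_MEDIUM
    return COVER_THUMBNAIL_SMALL

print(pick_resolution("1200"), pick_resolution("600"), pick_resolution("abc"))
# 3 2 1
```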
||||
@@ -948,7 +960,7 @@ def HandleBookDeletionRequest(book_uuid):
|
||||
|
||||
# TODO: Implement the following routes
|
||||
@csrf.exempt
|
||||
@kobo.route("/v1/library/<dummy>", methods=["DELETE", "GET"])
|
||||
@kobo.route("/v1/library/<dummy>", methods=["DELETE", "GET", "POST"])
|
||||
def HandleUnimplementedRequest(dummy=None):
|
||||
log.debug("Unimplemented Library Request received: %s (request is forwarded to kobo if configured)",
|
||||
request.base_url)
|
||||
@@ -1035,7 +1047,7 @@ def HandleAuthRequest():
|
||||
log.debug('Kobo Auth request')
|
||||
if config.config_kobo_proxy:
|
||||
try:
|
||||
return redirect_or_proxy_request()
|
||||
return redirect_or_proxy_request(auth=True)
|
||||
except Exception:
|
||||
log.error("Failed to receive or parse response from Kobo's auth endpoint. Falling back to un-proxied mode.")
|
||||
return make_calibre_web_auth_response()
|
||||
@@ -1119,25 +1131,37 @@ def download_book(book_id, book_format):
|
||||
|
||||
def NATIVE_KOBO_RESOURCES():
|
||||
return {
|
||||
"account_page": "https://secure.kobobooks.com/profile",
|
||||
"account_page": "https://www.kobo.com/account/settings",
|
||||
"account_page_rakuten": "https://my.rakuten.co.jp/",
|
||||
"add_device": "https://storeapi.kobo.com/v1/user/add-device",
|
||||
"add_entitlement": "https://storeapi.kobo.com/v1/library/{RevisionIds}",
|
||||
"affiliaterequest": "https://storeapi.kobo.com/v1/affiliate",
|
||||
"assets": "https://storeapi.kobo.com/v1/assets",
|
||||
"audiobook": "https://storeapi.kobo.com/v1/products/audiobooks/{ProductId}",
|
||||
"audiobook_detail_page": "https://www.kobo.com/{region}/{language}/audiobook/{slug}",
|
||||
"audiobook_landing_page": "https://www.kobo.com/{region}/{language}/audiobooks",
|
||||
"audiobook_preview": "https://storeapi.kobo.com/v1/products/audiobooks/{Id}/preview",
|
||||
"audiobook_purchase_withcredit": "https://storeapi.kobo.com/v1/store/audiobook/{Id}",
|
||||
"audiobook_subscription_orange_deal_inclusion_url": "https://authorize.kobo.com/inclusion",
|
||||
"authorproduct_recommendations": "https://storeapi.kobo.com/v1/products/books/authors/recommendations",
|
||||
"autocomplete": "https://storeapi.kobo.com/v1/products/autocomplete",
|
||||
"blackstone_header": {"key": "x-amz-request-payer", "value": "requester"},
|
||||
"blackstone_header": {
|
||||
"key": "x-amz-request-payer",
|
||||
"value": "requester"
|
||||
},
|
||||
"book": "https://storeapi.kobo.com/v1/products/books/{ProductId}",
|
||||
"book_detail_page": "https://store.kobobooks.com/{culture}/ebook/{slug}",
|
||||
"book_detail_page_rakuten": "https://books.rakuten.co.jp/rk/{crossrevisionid}",
|
||||
"book_landing_page": "https://store.kobobooks.com/ebooks",
|
||||
"book_detail_page": "https://www.kobo.com/{region}/{language}/ebook/{slug}",
|
||||
"book_detail_page_rakuten": "http://books.rakuten.co.jp/rk/{crossrevisionid}",
|
||||
"book_landing_page": "https://www.kobo.com/ebooks",
|
||||
"book_subscription": "https://storeapi.kobo.com/v1/products/books/subscriptions",
|
||||
"browse_history": "https://storeapi.kobo.com/v1/user/browsehistory",
|
||||
"categories": "https://storeapi.kobo.com/v1/categories",
|
||||
"categories_page": "https://store.kobobooks.com/ebooks/categories",
|
||||
"categories_page": "https://www.kobo.com/ebooks/categories",
|
||||
"category": "https://storeapi.kobo.com/v1/categories/{CategoryId}",
|
||||
"category_featured_lists": "https://storeapi.kobo.com/v1/categories/{CategoryId}/featured",
|
||||
"category_products": "https://storeapi.kobo.com/v1/categories/{CategoryId}/products",
|
||||
"checkout_borrowed_book": "https://storeapi.kobo.com/v1/library/borrow",
|
||||
"client_authd_referral": "https://authorize.kobo.com/api/AuthenticatedReferral/client/v1/getLink",
|
||||
"configuration_data": "https://storeapi.kobo.com/v1/configuration",
|
||||
"content_access_book": "https://storeapi.kobo.com/v1/products/books/{ProductId}/access",
|
||||
"customer_care_live_chat": "https://v2.zopim.com/widget/livechat.html?key=Y6gwUmnu4OATxN3Tli4Av9bYN319BTdO",
|
||||
@@ -1148,92 +1172,109 @@ def NATIVE_KOBO_RESOURCES():
|
||||
"delete_tag_items": "https://storeapi.kobo.com/v1/library/tags/{TagId}/items/delete",
|
||||
"device_auth": "https://storeapi.kobo.com/v1/auth/device",
|
||||
"device_refresh": "https://storeapi.kobo.com/v1/auth/refresh",
|
||||
"dictionary_host": "https://kbdownload1-a.akamaihd.net",
|
||||
"dictionary_host": "https://ereaderfiles.kobo.com",
|
||||
"discovery_host": "https://discovery.kobobooks.com",
|
||||
"ereaderdevices": "https://storeapi.kobo.com/v2/products/EReaderDeviceFeeds",
|
||||
"eula_page": "https://www.kobo.com/termsofuse?style=onestore",
|
||||
"exchange_auth": "https://storeapi.kobo.com/v1/auth/exchange",
|
||||
"external_book": "https://storeapi.kobo.com/v1/products/books/external/{Ids}",
|
||||
"facebook_sso_page":
|
||||
"https://authorize.kobo.com/signin/provider/Facebook/login?returnUrl=http://store.kobobooks.com/",
|
||||
"facebook_sso_page": "https://authorize.kobo.com/signin/provider/Facebook/login?returnUrl=http://kobo.com/",
|
||||
"featured_list": "https://storeapi.kobo.com/v1/products/featured/{FeaturedListId}",
|
||||
"featured_lists": "https://storeapi.kobo.com/v1/products/featured",
|
||||
"free_books_page": {
|
||||
"EN": "https://www.kobo.com/{region}/{language}/p/free-ebooks",
|
||||
"FR": "https://www.kobo.com/{region}/{language}/p/livres-gratuits",
|
||||
"IT": "https://www.kobo.com/{region}/{language}/p/libri-gratuiti",
|
||||
"NL": "https://www.kobo.com/{region}/{language}/"
|
||||
"List/bekijk-het-overzicht-van-gratis-ebooks/QpkkVWnUw8sxmgjSlCbJRg",
|
||||
"PT": "https://www.kobo.com/{region}/{language}/p/livros-gratis",
|
||||
"NL": "https://www.kobo.com/{region}/{language}/List/bekijk-het-overzicht-van-gratis-ebooks/QpkkVWnUw8sxmgjSlCbJRg",
|
||||
"PT": "https://www.kobo.com/{region}/{language}/p/livros-gratis"
|
||||
},
|
||||
"fte_feedback": "https://storeapi.kobo.com/v1/products/ftefeedback",
|
||||
"funnel_metrics": "https://storeapi.kobo.com/v1/funnelmetrics",
|
||||
"get_download_keys": "https://storeapi.kobo.com/v1/library/downloadkeys",
|
||||
"get_download_link": "https://storeapi.kobo.com/v1/library/downloadlink",
|
||||
"get_tests_request": "https://storeapi.kobo.com/v1/analytics/gettests",
|
||||
"giftcard_epd_redeem_url": "https://www.kobo.com/{storefront}/{language}/redeem-ereader",
|
||||
"giftcard_redeem_url": "https://www.kobo.com/{storefront}/{language}/redeem",
|
||||
"help_page": "https://www.kobo.com/help",
|
||||
"kobo_audiobooks_enabled": "False",
|
||||
"gpb_flow_enabled": "False",
|
||||
"help_page": "http://www.kobo.com/help",
|
||||
"image_host": "//cdn.kobo.com/book-images/",
|
||||
"image_url_quality_template": "https://cdn.kobo.com/book-images/{ImageId}/{Width}/{Height}/{Quality}/{IsGreyscale}/image.jpg",
|
||||
"image_url_template": "https://cdn.kobo.com/book-images/{ImageId}/{Width}/{Height}/false/image.jpg",
|
||||
"kobo_audiobooks_credit_redemption": "False",
|
||||
"kobo_audiobooks_enabled": "True",
|
||||
"kobo_audiobooks_orange_deal_enabled": "False",
|
||||
"kobo_audiobooks_subscriptions_enabled": "False",
|
||||
"kobo_nativeborrow_enabled": "True",
|
||||
"kobo_display_price": "True",
|
||||
"kobo_dropbox_link_account_enabled": "False",
|
||||
"kobo_google_tax": "False",
|
||||
"kobo_googledrive_link_account_enabled": "False",
|
||||
"kobo_nativeborrow_enabled": "False",
|
||||
"kobo_onedrive_link_account_enabled": "False",
|
||||
"kobo_onestorelibrary_enabled": "False",
|
||||
"kobo_privacyCentre_url": "https://www.kobo.com/privacy",
|
||||
"kobo_redeem_enabled": "True",
|
||||
"kobo_shelfie_enabled": "False",
|
||||
"kobo_subscriptions_enabled": "False",
|
||||
"kobo_superpoints_enabled": "False",
|
||||
"kobo_subscriptions_enabled": "True",
|
||||
"kobo_superpoints_enabled": "True",
|
||||
"kobo_wishlist_enabled": "True",
|
||||
"library_book": "https://storeapi.kobo.com/v1/user/library/books/{LibraryItemId}",
|
||||
"library_items": "https://storeapi.kobo.com/v1/user/library",
|
||||
"library_metadata": "https://storeapi.kobo.com/v1/library/{Ids}/metadata",
|
||||
"library_prices": "https://storeapi.kobo.com/v1/user/library/previews/prices",
|
||||
"library_stack": "https://storeapi.kobo.com/v1/user/library/stacks/{LibraryItemId}",
|
||||
"library_search": "https://storeapi.kobo.com/v1/library/search",
|
||||
"library_sync": "https://storeapi.kobo.com/v1/library/sync",
|
||||
"love_dashboard_page": "https://store.kobobooks.com/{culture}/kobosuperpoints",
|
||||
"love_points_redemption_page":
|
||||
"https://store.kobobooks.com/{culture}/KoboSuperPointsRedemption?productId={ProductId}",
|
||||
"magazine_landing_page": "https://store.kobobooks.com/emagazines",
|
||||
"love_dashboard_page": "https://www.kobo.com/{region}/{language}/kobosuperpoints",
|
||||
"love_points_redemption_page": "https://www.kobo.com/{region}/{language}/KoboSuperPointsRedemption?productId={ProductId}",
|
||||
"magazine_landing_page": "https://www.kobo.com/emagazines",
|
||||
"more_sign_in_options": "https://authorize.kobo.com/signin?returnUrl=http://kobo.com/#allProviders",
|
||||
"notebooks": "https://storeapi.kobo.com/api/internal/notebooks",
|
||||
"notifications_registration_issue": "https://storeapi.kobo.com/v1/notifications/registration",
|
||||
"oauth_host": "https://oauth.kobo.com",
|
||||
"overdrive_account": "https://auth.overdrive.com/account",
|
||||
"overdrive_library": "https://{libraryKey}.auth.overdrive.com/library",
|
||||
"overdrive_library_finder_host": "https://libraryfinder.api.overdrive.com",
|
||||
"overdrive_thunder_host": "https://thunder.api.overdrive.com",
|
||||
"password_retrieval_page": "https://www.kobobooks.com/passwordretrieval.html",
|
||||
"password_retrieval_page": "https://www.kobo.com/passwordretrieval.html",
|
||||
"personalizedrecommendations": "https://storeapi.kobo.com/v2/users/personalizedrecommendations",
|
||||
"pocket_link_account_start": "https://authorize.kobo.com/{region}/{language}/linkpocket",
|
||||
"post_analytics_event": "https://storeapi.kobo.com/v1/analytics/event",
|
||||
"ppx_purchasing_url": "https://purchasing.kobo.com",
|
||||
"privacy_page": "https://www.kobo.com/privacypolicy?style=onestore",
|
||||
"product_nextread": "https://storeapi.kobo.com/v1/products/{ProductIds}/nextread",
|
||||
"product_prices": "https://storeapi.kobo.com/v1/products/{ProductIds}/prices",
|
||||
"product_recommendations": "https://storeapi.kobo.com/v1/products/{ProductId}/recommendations",
|
||||
"product_reviews": "https://storeapi.kobo.com/v1/products/{ProductIds}/reviews",
|
||||
"products": "https://storeapi.kobo.com/v1/products",
|
||||
"provider_external_sign_in_page":
|
||||
"https://authorize.kobo.com/ExternalSignIn/{providerName}?returnUrl=http://store.kobobooks.com/",
|
||||
"purchase_buy": "https://www.kobo.com/checkout/createpurchase/",
|
||||
"purchase_buy_templated": "https://www.kobo.com/{culture}/checkout/createpurchase/{ProductId}",
|
||||
"productsv2": "https://storeapi.kobo.com/v2/products",
|
||||
"provider_external_sign_in_page": "https://authorize.kobo.com/ExternalSignIn/{providerName}?returnUrl=http://kobo.com/",
|
||||
"quickbuy_checkout": "https://storeapi.kobo.com/v1/store/quickbuy/{PurchaseId}/checkout",
|
||||
"quickbuy_create": "https://storeapi.kobo.com/v1/store/quickbuy/purchase",
|
||||
"rakuten_token_exchange": "https://storeapi.kobo.com/v1/auth/rakuten_token_exchange",
|
||||
"rating": "https://storeapi.kobo.com/v1/products/{ProductId}/rating/{Rating}",
|
||||
"reading_services_host": "https://readingservices.kobo.com",
|
||||
"reading_state": "https://storeapi.kobo.com/v1/library/{Ids}/state",
|
||||
"redeem_interstitial_page": "https://store.kobobooks.com",
|
||||
"registration_page": "https://authorize.kobo.com/signup?returnUrl=http://store.kobobooks.com/",
|
||||
"redeem_interstitial_page": "https://www.kobo.com",
|
||||
"registration_page": "https://authorize.kobo.com/signup?returnUrl=http://kobo.com/",
|
||||
"related_items": "https://storeapi.kobo.com/v1/products/{Id}/related",
|
||||
"remaining_book_series": "https://storeapi.kobo.com/v1/products/books/series/{SeriesId}",
|
||||
"rename_tag": "https://storeapi.kobo.com/v1/library/tags/{TagId}",
|
||||
"review": "https://storeapi.kobo.com/v1/products/reviews/{ReviewId}",
|
||||
"review_sentiment": "https://storeapi.kobo.com/v1/products/reviews/{ReviewId}/sentiment/{Sentiment}",
|
||||
"shelfie_recommendations": "https://storeapi.kobo.com/v1/user/recommendations/shelfie",
|
||||
"sign_in_page": "https://authorize.kobo.com/signin?returnUrl=http://store.kobobooks.com/",
|
||||
"sign_in_page": "https://authorize.kobo.com/signin?returnUrl=http://kobo.com/",
|
||||
"social_authorization_host": "https://social.kobobooks.com:8443",
|
||||
"social_host": "https://social.kobobooks.com",
|
||||
"stacks_host_productId": "https://store.kobobooks.com/collections/byproductid/",
|
||||
"store_home": "www.kobo.com/{region}/{language}",
|
||||
"store_host": "store.kobobooks.com",
|
||||
"store_newreleases": "https://store.kobobooks.com/{culture}/List/new-releases/961XUjtsU0qxkFItWOutGA",
|
||||
"store_search": "https://store.kobobooks.com/{culture}/Search?Query={query}",
|
||||
"store_top50": "https://store.kobobooks.com/{culture}/ebooks/Top",
|
||||
"store_host": "www.kobo.com",
|
||||
"store_newreleases": "https://www.kobo.com/{region}/{language}/List/new-releases/961XUjtsU0qxkFItWOutGA",
|
||||
"store_search": "https://www.kobo.com/{region}/{language}/Search?Query={query}",
|
||||
"store_top50": "https://www.kobo.com/{region}/{language}/ebooks/Top",
|
||||
"subs_landing_page": "https://www.kobo.com/{region}/{language}/plus",
|
||||
"subs_management_page": "https://www.kobo.com/{region}/{language}/account/subscriptions",
|
||||
"subs_plans_page": "https://www.kobo.com/{region}/{language}/plus/plans",
|
||||
"subs_purchase_buy_templated": "https://www.kobo.com/{region}/{language}/Checkoutoption/{ProductId}/{TierId}",
|
||||
"tag_items": "https://storeapi.kobo.com/v1/library/tags/{TagId}/Items",
|
||||
"tags": "https://storeapi.kobo.com/v1/library/tags",
|
||||
"taste_profile": "https://storeapi.kobo.com/v1/products/tasteprofile",
|
||||
"terms_of_sale_page": "https://authorize.kobo.com/{region}/{language}/terms/termsofsale",
|
||||
"update_accessibility_to_preview": "https://storeapi.kobo.com/v1/library/{EntitlementIds}/preview",
|
||||
"use_one_store": "False",
|
||||
"use_one_store": "True",
|
||||
"user_loyalty_benefits": "https://storeapi.kobo.com/v1/user/loyalty/benefits",
|
||||
"user_platform": "https://storeapi.kobo.com/v1/user/platform",
|
||||
"user_profile": "https://storeapi.kobo.com/v1/user/profile",
|
||||
@@ -1241,6 +1282,6 @@ def NATIVE_KOBO_RESOURCES():
|
||||
"user_recommendations": "https://storeapi.kobo.com/v1/user/recommendations",
|
||||
"user_reviews": "https://storeapi.kobo.com/v1/user/reviews",
|
||||
"user_wishlist": "https://storeapi.kobo.com/v1/user/wishlist",
|
||||
"userguide_host": "https://kbdownload1-a.akamaihd.net",
|
||||
"wishlist_page": "https://store.kobobooks.com/{region}/{language}/account/wishlist",
|
||||
"userguide_host": "https://ereaderfiles.kobo.com",
|
||||
"wishlist_page": "https://www.kobo.com/{region}/{language}/account/wishlist"
|
||||
}
|
||||
|
@@ -22,7 +22,7 @@
|
||||
This module also includes research notes into the auth protocol used by Kobo devices.
|
||||
|
||||
Log-in:
|
||||
When first booting a Kobo device the user must sign into a Kobo (or affiliate) account.
|
||||
When first booting a Kobo device the user must log in to a Kobo (or affiliate) account.
|
||||
Upon successful sign-in, the user is redirected to
|
||||
https://auth.kobobooks.com/CrossDomainSignIn?id=<some id>
|
||||
which serves the following response:
|
||||
@@ -41,7 +41,7 @@ issue for a few years now https://www.mobileread.com/forums/showpost.php?p=34768
|
||||
will still grant access given the userkey.)
|
||||
|
||||
Official Kobo Store Api authorization:
|
||||
* For most of the endpoints we care about (sync, metadata, tags, etc), the userKey is
|
||||
* For most of the endpoints we care about (sync, metadata, tags, etc.), the userKey is
|
||||
passed in the x-kobo-userkey header, and is sufficient to authorize the API call.
|
||||
* Some endpoints (e.g. AnnotationService) instead make use of Bearer tokens passed through
|
||||
an authorization header. To get a BearerToken, the device makes a POST request to the
|
||||
|
@@ -19,7 +19,7 @@
|
||||
|
||||
from .cw_login import current_user
|
||||
from . import ub
|
||||
import datetime
|
||||
from datetime import datetime, timezone
|
||||
from sqlalchemy.sql.expression import or_, and_, true
|
||||
# from sqlalchemy import exc
|
||||
|
||||
@@ -58,7 +58,7 @@ def change_archived_books(book_id, state=None, message=None):
|
||||
archived_book = ub.ArchivedBook(user_id=current_user.id, book_id=book_id)
|
||||
|
||||
archived_book.is_archived = state if state else not archived_book.is_archived
|
||||
archived_book.last_modified = datetime.datetime.utcnow() # toDo. Check utc timestamp
|
||||
archived_book.last_modified = datetime.now(timezone.utc) # toDo. Check utc timestamp
|
||||
|
||||
ub.session.merge(archived_book)
|
||||
ub.session_commit(message)
|
||||
|
@@ -29,7 +29,7 @@ from .constants import CONFIG_DIR as _CONFIG_DIR
|
||||
ACCESS_FORMATTER_GEVENT = Formatter("%(message)s")
|
||||
ACCESS_FORMATTER_TORNADO = Formatter("[%(asctime)s] %(message)s")
|
||||
|
||||
FORMATTER = Formatter("[%(asctime)s] %(levelname)5s {%(name)s:%(lineno)d} %(message)s")
|
||||
FORMATTER = Formatter("[%(asctime)s] %(levelname)5s {%(filename)s:%(lineno)d} %(message)s")
|
||||
DEFAULT_LOG_LEVEL = logging.INFO
|
||||
DEFAULT_LOG_FILE = os.path.join(_CONFIG_DIR, "calibre-web.log")
|
||||
DEFAULT_ACCESS_LOG = os.path.join(_CONFIG_DIR, "access.log")
|
||||
@@ -42,18 +42,12 @@ logging.addLevelName(logging.CRITICAL, "CRIT")
|
||||
|
||||
class _Logger(logging.Logger):
|
||||
|
||||
def error_or_exception(self, message, stacklevel=2, *args, **kwargs):
|
||||
def error_or_exception(self, message, stacklevel=1, *args, **kwargs):
|
||||
is_debug = self.getEffectiveLevel() <= logging.DEBUG
|
||||
if sys.version_info > (3, 7):
|
||||
if is_debug:
|
||||
self.exception(message, stacklevel=stacklevel, *args, **kwargs)
|
||||
else:
|
||||
self.error(message, stacklevel=stacklevel, *args, **kwargs)
|
||||
if not is_debug:
|
||||
self.exception(message, stacklevel=stacklevel, *args, **kwargs)
|
||||
else:
|
||||
if is_debug:
|
||||
self.exception(message, stack_info=True, *args, **kwargs)
|
||||
else:
|
||||
self.error(message, *args, **kwargs)
|
||||
self.error(message, stacklevel=stacklevel, *args, **kwargs)
|
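The simplified `error_or_exception` above logs a full traceback only when the effective level is DEBUG, and a plain error line otherwise. A runnable sketch of that dispatch on a custom logger class:

```python
import logging
import sys

class _Logger(logging.Logger):
    def error_or_exception(self, message, stacklevel=1, *args, **kwargs):
        # Traceback only when debugging; a plain error line otherwise.
        if self.getEffectiveLevel() <= logging.DEBUG:
            self.exception(message, stacklevel=stacklevel, *args, **kwargs)
        else:
            self.error(message, stacklevel=stacklevel, *args, **kwargs)

logging.setLoggerClass(_Logger)
logging.basicConfig(level=logging.INFO, stream=sys.stdout)
log = logging.getLogger("demo")
log.error_or_exception("boom")  # at INFO level: message only, no traceback
```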
||||
|
||||
def debug_no_auth(self, message, *args, **kwargs):
|
||||
message = message.strip("\r\n")
|
||||
|
@@ -31,6 +31,7 @@ def main():
|
||||
app = create_app()
|
||||
|
||||
from .web import web
|
||||
from .basic import basic
|
||||
from .opds import opds
|
||||
from .admin import admi
|
||||
from .gdrive import gdrive
|
||||
@@ -64,6 +65,7 @@ def main():
|
||||
app.register_blueprint(search)
|
||||
app.register_blueprint(tasks)
|
||||
app.register_blueprint(web)
|
||||
app.register_blueprint(basic)
|
||||
app.register_blueprint(opds)
|
||||
limiter.limit("3/minute", key_func=request_username)(opds)
|
||||
app.register_blueprint(jinjia)
|
||||
|
@@ -38,14 +38,16 @@ class Amazon(Metadata):
|
||||
__name__ = "Amazon"
|
||||
__id__ = "amazon"
|
||||
headers = {'upgrade-insecure-requests': '1',
|
||||
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36',
|
||||
'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
|
||||
'sec-gpc': '1',
|
||||
'sec-fetch-site': 'none',
|
||||
'sec-fetch-mode': 'navigate',
|
||||
'sec-fetch-user': '?1',
|
||||
'sec-fetch-dest': 'document',
|
||||
'accept-encoding': 'gzip, deflate, br',
|
||||
'user-agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:130.0) Gecko/20100101 Firefox/130.0',
|
||||
'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/png,image/svg+xml,*/*;q=0.8',
|
||||
'Sec-Fetch-Site': 'same-origin',
|
||||
'Sec-Fetch-Mode': 'navigate',
|
||||
'Sec-Fetch-User': '?1',
|
||||
'Sec-Fetch-Dest': 'document',
|
||||
'Upgrade-Insecure-Requests': '1',
|
||||
'Alt-Used' : 'www.amazon.com',
|
||||
'Priority' : 'u=0, i',
|
||||
'accept-encoding': 'gzip, deflate, br, zstd',
|
||||
'accept-language': 'en-US,en;q=0.9'}
|
||||
session = requests.Session()
|
||||
    session.headers = headers
|
||||
@@ -53,7 +55,6 @@ class Amazon(Metadata):
|
||||
def search(
|
||||
self, query: str, generic_cover: str = "", locale: str = "en"
|
||||
) -> Optional[List[MetaRecord]]:
|
||||
#timer=time()
|
||||
def inner(link, index) -> [dict, int]:
|
||||
with self.session as session:
|
||||
try:
|
||||
@@ -61,11 +62,11 @@ class Amazon(Metadata):
|
||||
r.raise_for_status()
|
||||
except Exception as ex:
|
||||
log.warning(ex)
|
||||
return None
|
||||
return []
|
||||
long_soup = BS(r.text, "lxml") #~4sec :/
|
||||
soup2 = long_soup.find("div", attrs={"cel_widget_id": "dpx-books-ppd_csm_instrumentation_wrapper"})
|
||||
soup2 = long_soup.find("div", attrs={"cel_widget_id": "dpx-ppd_csm_instrumentation_wrapper"})
|
||||
if soup2 is None:
|
||||
return None
|
||||
return []
|
||||
try:
|
||||
match = MetaRecord(
|
||||
title = "",
|
||||
@@ -88,7 +89,7 @@ class Amazon(Metadata):
|
||||
soup2.find("div", attrs={"data-feature-name": "bookDescription"}).stripped_strings)\
|
||||
.replace("\xa0"," ")[:-9].strip().strip("\n")
|
||||
except (AttributeError, TypeError):
|
||||
return None # if there is no description it is not a book and therefore should be ignored
|
||||
return [] # if there is no description it is not a book and therefore should be ignored
|
||||
try:
|
||||
match.title = soup2.find("span", attrs={"id": "productTitle"}).text
|
||||
except (AttributeError, TypeError):
|
||||
@@ -107,13 +108,13 @@ class Amazon(Metadata):
|
||||
except (AttributeError, ValueError):
|
||||
match.rating = 0
|
||||
try:
|
||||
match.cover = soup2.find("img", attrs={"class": "a-dynamic-image frontImage"})["src"]
|
||||
match.cover = soup2.find("img", attrs={"class": "a-dynamic-image"})["src"]
|
||||
except (AttributeError, TypeError):
|
||||
match.cover = ""
|
||||
return match, index
|
||||
except Exception as e:
|
||||
log.error_or_exception(e)
|
||||
return None
|
||||
return []
|
||||
|
||||
val = list()
|
||||
if self.active:
|
||||
@@ -133,7 +134,7 @@ class Amazon(Metadata):
|
||||
links_list = [next(filter(lambda i: "digital-text" in i["href"], x.findAll("a")))["href"] for x in
|
||||
soup.findAll("div", attrs={"data-component-type": "s-search-result"})]
|
||||
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
|
||||
fut = {executor.submit(inner, link, index) for index, link in enumerate(links_list[:5])}
|
||||
val = list(map(lambda x : x.result() ,concurrent.futures.as_completed(fut)))
|
||||
fut = {executor.submit(inner, link, index) for index, link in enumerate(links_list[:3])}
|
||||
val = list(map(lambda x : x.result(), concurrent.futures.as_completed(fut)))
|
||||
result = list(filter(lambda x: x, val))
|
||||
return [x[0] for x in sorted(result, key=itemgetter(1))] #sort by amazons listing order for best relevance
|
||||
|
@@ -54,7 +54,7 @@ class Google(Metadata):
results.raise_for_status()
except Exception as e:
log.warning(e)
return None
return []
for result in results.json().get("items", []):
val.append(
self._parse_search_result(

@@ -32,7 +32,7 @@ class OAuthBackend(SQLAlchemyBackend):
Stores and retrieves OAuth tokens using a relational database through
the `SQLAlchemy`_ ORM.

.. _SQLAlchemy: https://www.sqlalchemy.org/
_SQLAlchemy: https://www.sqlalchemy.org/
"""
def __init__(self, model, session, provider_id,
user=None, user_id=None, user_required=None, anon_user=None,

13
cps/opds.py
@@ -21,10 +21,10 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import datetime
import json
# import json
from urllib.parse import unquote_plus

from flask import Blueprint, request, render_template, make_response, abort, Response, g
from flask import Blueprint, request, render_template, make_response, abort, g, jsonify
from flask_babel import get_locale
from flask_babel import gettext as _

@@ -424,11 +424,8 @@ def feed_shelf(book_id):
@requires_basic_auth_if_no_ano
def opds_download_link(book_id, book_format):
if not auth.current_user().role_download():
return abort(403)
if "Kobo" in request.headers.get('User-Agent'):
client = "kobo"
else:
client = ""
return abort(401)
client = "kobo" if "Kobo" in request.headers.get('User-Agent') else ""
return get_download_link(book_id, book_format.lower(), client)


@@ -454,7 +451,7 @@ def get_database_stats():
stat['authors'] = calibre_db.session.query(db.Authors).count()
stat['categories'] = calibre_db.session.query(db.Tags).count()
stat['series'] = calibre_db.session.query(db.Series).count()
return Response(json.dumps(stat), mimetype="application/json")
return make_response(jsonify(stat))


@opds.route("/opds/thumb_240_240/<book_id>")

@@ -41,9 +41,9 @@ class ReverseProxied(object):
"""Wrap the application in this middleware and configure the
front-end server to add these headers, to let you quietly bind
this to a URL other than / and to an HTTP scheme that is
different than what is used locally.
different from what is used locally.

Code courtesy of: http://flask.pocoo.org/snippets/35/
Code courtesy of: https://flask.pocoo.org/snippets/35/

In nginx:
location /myprefix {

@@ -24,9 +24,9 @@ from flask_babel import format_date
from flask_babel import gettext as _
from sqlalchemy.sql.expression import func, not_, and_, or_, text, true
from sqlalchemy.sql.functions import coalesce
from sqlalchemy import exists

from . import logger, db, calibre_db, config, ub
from .string_helper import strip_whitespaces
from .usermanagement import login_required_if_no_ano
from .render_template import render_title_template
from .pagination import Pagination
@@ -244,7 +244,8 @@ def render_adv_search_results(term, offset=None, order=None, limit=None):
pagination = None

cc = calibre_db.get_cc_columns(config, filter_config_custom_read=True)
calibre_db.session.connection().connection.connection.create_function("lower", 1, db.lcase)
calibre_db.create_functions()
# calibre_db.session.connection().connection.connection.create_function("lower", 1, db.lcase)
query = calibre_db.generate_linked_query(config.config_read_column, db.Books)
q = query.outerjoin(db.books_series_link, db.Books.id == db.books_series_link.c.book)\
.outerjoin(db.Series)\
@@ -257,21 +258,21 @@ def render_adv_search_results(term, offset=None, order=None, limit=None):
tags['include_' + element] = term.get('include_' + element)
tags['exclude_' + element] = term.get('exclude_' + element)

author_name = term.get("author_name")
book_title = term.get("book_title")
author_name = term.get("authors")
book_title = term.get("title")
publisher = term.get("publisher")
pub_start = term.get("publishstart")
pub_end = term.get("publishend")
rating_low = term.get("ratinghigh")
rating_high = term.get("ratinglow")
description = term.get("comment")
description = term.get("comments")
read_status = term.get("read_status")
if author_name:
author_name = author_name.strip().lower().replace(',', '|')
author_name = strip_whitespaces(author_name).lower().replace(',', '|')
if book_title:
book_title = book_title.strip().lower()
book_title = strip_whitespaces(book_title).lower()
if publisher:
publisher = publisher.strip().lower()
publisher = strip_whitespaces(publisher).lower()

search_term = []
cc_present = False

@@ -23,7 +23,7 @@ import json
import os
import sys

from flask import Blueprint, Response, request, url_for
from flask import Blueprint, request, url_for, make_response, jsonify
from .cw_login import current_user
from flask_babel import get_locale
from sqlalchemy.exc import InvalidRequestError, OperationalError
@@ -33,7 +33,6 @@ from cps.services.Metadata import Metadata
from . import constants, logger, ub, web_server
from .usermanagement import user_login_required

# current_milli_time = lambda: int(round(time() * 1000))

meta = Blueprint("metadata", __name__)

@@ -90,7 +89,7 @@ def metadata_provider():
provider.append(
{"name": c.__name__, "active": ac, "initial": ac, "id": c.__id__}
)
return Response(json.dumps(provider), mimetype="application/json")
return make_response(jsonify(provider))


@meta.route("/metadata/provider", methods=["POST"])
@@ -115,9 +114,7 @@ def metadata_change_active_provider(prov_name):
provider = next((c for c in cl if c.__id__ == prov_name), None)
if provider is not None:
data = provider.search(new_state.get("query", ""))
return Response(
json.dumps([asdict(x) for x in data]), mimetype="application/json"
)
return make_response(jsonify([asdict(x) for x in data]))
return ""


@@ -130,7 +127,7 @@ def metadata_search():
locale = get_locale()
if query:
static_cover = url_for("static", filename="generic_cover.jpg")
# start = current_milli_time()
# ret = cl[0].search(query, static_cover, locale)
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
meta = {
executor.submit(c.search, query, static_cover, locale): c
@@ -139,5 +136,4 @@ def metadata_search():
}
for future in concurrent.futures.as_completed(meta):
data.extend([asdict(x) for x in future.result() if x])
# log.info({'Time elapsed {}'.format(current_milli_time()-start)})
return Response(json.dumps(data), mimetype="application/json")
return make_response(jsonify(data))

@@ -42,6 +42,7 @@ class BackgroundScheduler:
if cls._instance is None:
cls._instance = super(BackgroundScheduler, cls).__new__(cls)
cls.log = logger.create()
logger.logging.getLogger('tzlocal').setLevel(logger.logging.WARNING)
cls.scheduler = BScheduler()
cls.scheduler.start()


@@ -92,7 +92,7 @@ def send_messsage(token, msg):
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
service = build('gmail', 'v1', credentials=creds)
message_as_bytes = msg.as_bytes() # the message should converted from string to bytes.
message_as_bytes = msg.as_bytes() # the message should be converted from string to bytes.
message_as_base64 = base64.urlsafe_b64encode(message_as_bytes) # encode in base64 (printable letters coding)
raw = message_as_base64.decode() # convert to something JSON serializable
body = {'raw': raw}

@@ -117,7 +117,7 @@ def get_author_info(author_name):


def get_other_books(author_info, library_books=None):
# Get all identifiers (ISBN, Goodreads, etc) and filter author's books by that list so we show fewer duplicates
# Get all identifiers (ISBN, Goodreads, etc.) and filter author's books by that list so we show fewer duplicates
# Note: Not all images will be shown, even though they're available on Goodreads.com.
# See https://www.goodreads.com/topic/show/18213769-goodreads-book-images


@@ -199,7 +199,7 @@ class CalibreTask:
self.run(*args)
except Exception as ex:
self._handleError(str(ex))
log.error_or_exception(ex)
log.exception(ex)

self.end_time = datetime.now()

@@ -235,7 +235,7 @@ class CalibreTask:

@property
def dead(self):
"""Determines whether or not this task can be garbage collected
"""Determines whether this task can be garbage collected

We have a separate dictating this because there may be certain tasks that want to override this
"""

66
cps/shelf.py
@@ -21,7 +21,7 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import sys
from datetime import datetime
from datetime import datetime, timezone

from flask import Blueprint, flash, redirect, request, url_for, abort
from flask_babel import gettext as _
@@ -80,7 +80,7 @@ def add_to_shelf(shelf_id, book_id):
return "%s is a invalid Book Id. Could not be added to Shelf" % book_id, 400

shelf.books.append(ub.BookShelf(shelf=shelf.id, book_id=book_id, order=maxOrder + 1))
shelf.last_modified = datetime.utcnow()
shelf.last_modified = datetime.now(timezone.utc)
try:
ub.session.merge(shelf)
ub.session.commit()
@@ -139,7 +139,7 @@ def search_to_shelf(shelf_id):
for book in books_for_shelf:
maxOrder += 1
shelf.books.append(ub.BookShelf(shelf=shelf.id, book_id=book, order=maxOrder))
shelf.last_modified = datetime.utcnow()
shelf.last_modified = datetime.now(timezone.utc)
try:
ub.session.merge(shelf)
ub.session.commit()
@@ -185,7 +185,7 @@ def remove_from_shelf(shelf_id, book_id):

try:
ub.session.delete(book_shelf)
shelf.last_modified = datetime.utcnow()
shelf.last_modified = datetime.now(timezone.utc)
ub.session.commit()
except (OperationalError, InvalidRequestError) as e:
ub.session.rollback()
@@ -250,7 +250,7 @@ def show_simpleshelf(shelf_id):
return render_show_shelf(2, shelf_id, 1, None)


@shelf.route("/shelf/<int:shelf_id>", defaults={"sort_param": "order", 'page': 1})
@shelf.route("/shelf/<int:shelf_id>", defaults={"sort_param": "stored", 'page': 1})
@shelf.route("/shelf/<int:shelf_id>/<sort_param>", defaults={'page': 1})
@shelf.route("/shelf/<int:shelf_id>/<sort_param>/<int:page>")
@login_required_if_no_ano
@@ -271,7 +271,7 @@ def order_shelf(shelf_id):
for book in books_in_shelf:
setattr(book, 'order', to_save[str(book.book_id)])
counter += 1
# if order different from before -> shelf.last_modified = datetime.utcnow()
# if order different from before -> shelf.last_modified = datetime.now(timezone.utc)
try:
ub.session.commit()
except (OperationalError, InvalidRequestError) as e:
@@ -418,29 +418,37 @@ def change_shelf_order(shelf_id, order):

def render_show_shelf(shelf_type, shelf_id, page_no, sort_param):
shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()

status = current_user.get_view_property("shelf", 'man')
# check user is allowed to access shelf
if shelf and check_shelf_view_permissions(shelf):
if shelf_type == 1:
# order = [ub.BookShelf.order.asc()]
if sort_param == 'pubnew':
change_shelf_order(shelf_id, [db.Books.pubdate.desc()])
if sort_param == 'pubold':
change_shelf_order(shelf_id, [db.Books.pubdate])
if sort_param == 'abc':
change_shelf_order(shelf_id, [db.Books.sort])
if sort_param == 'zyx':
change_shelf_order(shelf_id, [db.Books.sort.desc()])
if sort_param == 'new':
change_shelf_order(shelf_id, [db.Books.timestamp.desc()])
if sort_param == 'old':
change_shelf_order(shelf_id, [db.Books.timestamp])
if sort_param == 'authaz':
change_shelf_order(shelf_id, [db.Books.author_sort.asc(), db.Series.name, db.Books.series_index])
if sort_param == 'authza':
change_shelf_order(shelf_id, [db.Books.author_sort.desc(),
db.Series.name.desc(),
db.Books.series_index.desc()])
if status != 'on':
if sort_param == 'stored':
sort_param = current_user.get_view_property("shelf", 'stored')
else:
current_user.set_view_property("shelf", 'stored', sort_param)
if sort_param == 'pubnew':
change_shelf_order(shelf_id, [db.Books.pubdate.desc()])
if sort_param == 'pubold':
change_shelf_order(shelf_id, [db.Books.pubdate])
if sort_param == 'shelfnew':
change_shelf_order(shelf_id, [ub.BookShelf.date_added.desc()])
if sort_param == 'shelfold':
change_shelf_order(shelf_id, [ub.BookShelf.date_added])
if sort_param == 'abc':
change_shelf_order(shelf_id, [db.Books.sort])
if sort_param == 'zyx':
change_shelf_order(shelf_id, [db.Books.sort.desc()])
if sort_param == 'new':
change_shelf_order(shelf_id, [db.Books.timestamp.desc()])
if sort_param == 'old':
change_shelf_order(shelf_id, [db.Books.timestamp])
if sort_param == 'authaz':
change_shelf_order(shelf_id, [db.Books.author_sort.asc(), db.Series.name, db.Books.series_index])
if sort_param == 'authza':
change_shelf_order(shelf_id, [db.Books.author_sort.desc(),
db.Series.name.desc(),
db.Books.series_index.desc()])
page = "shelf.html"
pagesize = 0
else:
@@ -453,7 +461,7 @@ def render_show_shelf(shelf_type, shelf_id, page_no, sort_param):
[ub.BookShelf.order.asc()],
True, config.config_read_column,
ub.BookShelf, ub.BookShelf.book_id == db.Books.id)
# delete chelf entries where book is not existent anymore, can happen if book is deleted outside calibre-web
# delete shelf entries where book is not existent anymore, can happen if book is deleted outside calibre-web
wrong_entries = calibre_db.session.query(ub.BookShelf) \
.join(db.Books, ub.BookShelf.book_id == db.Books.id, isouter=True) \
.filter(db.Books.id == None).all()
@@ -472,7 +480,9 @@ def render_show_shelf(shelf_type, shelf_id, page_no, sort_param):
pagination=pagination,
title=_("Shelf: '%(name)s'", name=shelf.name),
shelf=shelf,
page="shelf")
page="shelf",
status=status,
order=sort_param)
else:
flash(_("Error opening shelf. Shelf does not exist or is not accessible"), category="error")
return redirect(url_for("web.index"))

108
cps/static/css/basic.css
Normal file
@@ -0,0 +1,108 @@
body {
margin: 0;
}

nav {
height: 75px;
padding: 5px 20px;
width: 100%;
display: table;
box-sizing: border-box;
border-bottom: 1px solid black;
}

nav > * {
display: inline-block;
display: table-cell;
vertical-align: middle;
float: none;
text-align: center;
width: auto;
}

nav > *:first-child {
text-align: left;
width: 1%;
}

.theme{
text-align: center;
margin: 10px;
}
nav > *:last-child {
text-align: right;
width: 1%;
}

nav > a {
color: black;
margin: 0 20px;
}

.search {
margin: auto auto;
}

form > input {
width: 18ch;
padding-left: 4px;
}

form > * {
height: 50px;
background-color: white;
border-radius: 0;
border: 1px solid #ccc;
padding: 0;
margin: 0;
display: inline-block;
vertical-align: top;
}

form > span {
margin-left: -5px;
}

button {
border: none;
padding: 0 10px;
margin: 0;
width: 160px;
height: 100%;
background-color: white;
}

.body {
padding: 5px 20px;
}

a {
color: black;
}

img {
width: 150px;
height: 250px;
object-fit: cover;
}

.listing {
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
margin-right: 20px;
}

.pagination {
padding: 10px 0;
height: 20px;
font-weight: 700;
}

.pagination > div {
float: left;
}

.pagination > div:last-child {
float: right;
}

@@ -3268,6 +3268,10 @@ div.btn-group[role=group][aria-label="Download, send to Kindle, reading"] > div.
left: auto !important
}

ul.dropdown-menu.offscreen {
margin: 2px -160px 0
}

div.btn-group[role=group][aria-label="Download, send to Kindle, reading"] .dropdown-menu.offscreen {
position: fixed;
top: 120px;
@@ -4333,6 +4337,7 @@ body.advanced_search > div.container-fluid > div.row-fluid > div.col-sm-10 > div

.navbar-right > li > ul.dropdown-menu.offscreen {
right: -10px

}

.login .plexBack, body.login > div.container-fluid > div.row-fluid > div.col-sm-2, body.login > div.navbar.navbar-default.navbar-static-top > div > form {
@@ -7148,12 +7153,11 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div.
}

body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm-3, body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm-3 {
max-width: 130px;
width: 130px;
height: 180px;
margin: 0;
position: relative;
max-width: unset;
width: 100%;
height: unset;
padding: 15px;
position: absolute
}

body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9, body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9 {
@@ -7162,10 +7166,6 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div.
width: 100%
}

body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9 > .form-group:nth-child(1), body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9 > .form-group:nth-child(2), body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9 > .form-group:nth-child(1), body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > form > div.col-sm-9 > .form-group:nth-child(2) {
padding-left: 120px
}

#deleteButton, body.editbook > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm-3 > div.text-center > #delete, body.upload > div.container-fluid > div.row-fluid > div.col-sm-10 > div.col-sm-3 > div.text-center > #delete {
top: 48px;
height: 42px
@@ -7951,3 +7951,5 @@ div.comments[data-readmore] {
transition: height 300ms;
overflow: hidden
}

.dropdown-menu > .offscreen

@@ -15,5 +15,5 @@

.blackTheme {
background: #000;
color: #fff
color: #fff;
}

1
cps/static/css/libs/typeahead.css
vendored
@@ -157,7 +157,6 @@ fieldset[disabled] .twitter-typeahead .tt-input {
list-style: none;
font-size: 14px;
background-color: #ffffff;
border: 1px solid #cccccc;
border: 1px solid rgba(0, 0, 0, 0.15);
border-radius: 4px;
-webkit-box-shadow: 0 6px 12px rgba(0, 0, 0, 0.175);

@@ -77,7 +77,6 @@ body {
}

#panels a {
visibility: hidden;
width: 18px;
height: 20px;
overflow: hidden;
@@ -512,12 +511,12 @@ input:-moz-placeholder { color: #454545; }
position: fixed;
top: 50%;
left: 50%;
width: 630px;
transform: translate(-50%, -50%);
width: 100vw;
height: auto;
max-width: 630px;
z-index: 2000;
visibility: hidden;
margin-left: -320px;
margin-top: -160px;
}

.overlay {

0
cps/static/js/caliBlur.js
Executable file → Normal file
@@ -3,9 +3,9 @@
*/
/* global Bloodhound, language, Modernizr, tinymce, getPath */

if ($("#description").length) {
if ($("#comments").length && typeof tinymce !== "undefined") {
tinymce.init({
selector: "#description",
selector: "#comments",
plugins: 'code',
branding: false,
menubar: "edit view format",
@@ -93,7 +93,7 @@ var authors = new Bloodhound({
},
});

$(".form-group #bookAuthor").typeahead(
$(".form-group #authors").typeahead(
{
highlight: true,
minLength: 1,
@@ -243,13 +243,13 @@ $("#search").on("change input.typeahead:selected", function(event) {
});
});

$("#btn-upload-format").on("change", function () {
/*$("#btn-upload-format").on("change", function () {
var filename = $(this).val();
if (filename.substring(3, 11) === "fakepath") {
filename = filename.substring(12);
} // Remove c:\fake at beginning from localhost chrome
$("#upload-format").text(filename);
});
});*/

$("#btn-upload-cover").on("change", function () {
var filename = $(this).val();
@@ -261,8 +261,8 @@ $("#btn-upload-cover").on("change", function () {

$("#xchange").click(function () {
this.blur();
var title = $("#book_title").val();
$("#book_title").val($("#bookAuthor").val());
$("#bookAuthor").val(title);
var title = $("#title").val();
$("#title").val($("#authors").val());
$("#authors").val(title);
});


@@ -38,12 +38,12 @@ $(function () {
}

function populateForm (book) {
tinymce.get("description").setContent(book.description);
tinymce.get("comments").setContent(book.description);
var uniqueTags = getUniqueValues('tags', book)
var uniqueLanguages = getUniqueValues('languages', book)
var ampSeparatedAuthors = (book.authors || []).join(" & ");
$("#bookAuthor").val(ampSeparatedAuthors);
$("#book_title").val(book.title);
$("#authors").val(ampSeparatedAuthors);
$("#title").val(book.title);
$("#tags").val(uniqueTags.join(", "));
$("#languages").val(uniqueLanguages.join(", "));
$("#rating").data("rating").setValue(Math.round(book.rating));
@@ -172,7 +172,7 @@ $(function () {

$("#get_meta").click(function () {
populate_provider();
var bookTitle = $("#book_title").val();
var bookTitle = $("#title").val();
$("#keyword").val(bookTitle);
keyword = bookTitle;
doSearch(bookTitle);

1
cps/static/js/libs/bootstrap-datepicker/locales/bootstrap-datepicker.sl.min.js
vendored
Normal file
@@ -0,0 +1 @@
!function(a){a.fn.datepicker.dates.sl={days:["Nedelja","Ponedeljek","Torek","Sreda","Četrtek","Petek","Sobota"],daysShort:["Ned","Pon","Tor","Sre","Čet","Pet","Sob"],daysMin:["Ne","Po","To","Sr","Če","Pe","So"],months:["Januar","Februar","Marec","April","Maj","Junij","Julij","Avgust","September","Oktober","November","December"],monthsShort:["Jan","Feb","Mar","Apr","Maj","Jun","Jul","Avg","Sep","Okt","Nov","Dec"],today:"Danes",weekStart:1}}(jQuery);
5322
cps/static/js/libs/pdf.mjs → cps/static/js/libs/pdf.js
vendored
File diff suppressed because it is too large
2
cps/static/js/libs/plugins.js
vendored
File diff suppressed because one or more lines are too long
@@ -183,7 +183,7 @@ try {
];
$.each(sequences, function(ignore, sequence) {
for (j = 0; j < word.length - 2; j += 1) {
// iterate the word trough a sliding window of size 3:
// iterate the word through a sliding window of size 3:
if (
sequence.indexOf(
word.toLowerCase().substring(j, j + 3)

@@ -2,7 +2,7 @@
|
||||
* @licstart The following is the entire license notice for the
|
||||
* JavaScript code in this page
|
||||
*
|
||||
* Copyright 2023 Mozilla Foundation
|
||||
* Copyright 2024 Mozilla Foundation
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
@@ -446,7 +446,7 @@ class ProgressBar {
|
||||
}
|
||||
}
|
||||
setDisableAutoFetch(delay = 5000) {
|
||||
if (isNaN(this.#percent)) {
|
||||
if (this.#percent === 100 || isNaN(this.#percent)) {
|
||||
return;
|
||||
}
|
||||
if (this.#disableAutoFetchTimeout) {
|
||||
@@ -535,15 +535,20 @@ function toggleExpandedBtn(button, toggle, view = null) {
|
||||
|
||||
;// CONCATENATED MODULE: ./web/app_options.js
|
||||
{
|
||||
var compatibilityParams = Object.create(null);
|
||||
var compatParams = new Map();
|
||||
const userAgent = navigator.userAgent || "";
|
||||
const platform = navigator.platform || "";
|
||||
const maxTouchPoints = navigator.maxTouchPoints || 1;
|
||||
const isAndroid = /Android/.test(userAgent);
|
||||
const isIOS = /\b(iPad|iPhone|iPod)(?=;)/.test(userAgent) || platform === "MacIntel" && maxTouchPoints > 1;
|
||||
(function checkCanvasSizeLimitation() {
|
||||
(function () {
|
||||
if (isIOS || isAndroid) {
|
||||
compatibilityParams.maxCanvasPixels = 5242880;
|
||||
compatParams.set("maxCanvasPixels", 5242880);
|
||||
}
|
||||
})();
|
||||
(function () {
|
||||
if (isAndroid) {
|
||||
compatParams.set("useSystemFonts", false);
|
||||
}
|
||||
})();
|
||||
}
|
||||
@@ -552,9 +557,21 @@ const OptionKind = {
|
||||
VIEWER: 0x02,
|
||||
API: 0x04,
|
||||
WORKER: 0x08,
|
||||
EVENT_DISPATCH: 0x10,
|
||||
PREFERENCE: 0x80
|
||||
};
|
||||
const Type = {
|
||||
BOOLEAN: 0x01,
|
||||
NUMBER: 0x02,
|
||||
OBJECT: 0x04,
|
||||
STRING: 0x08,
|
||||
UNDEFINED: 0x10
|
||||
};
|
||||
const defaultOptions = {
|
||||
allowedGlobalEvents: {
|
||||
value: null,
|
||||
kind: OptionKind.BROWSER
|
||||
},
|
||||
canvasMaxAreaInBytes: {
|
||||
value: -1,
|
||||
kind: OptionKind.BROWSER + OptionKind.API
|
||||
@@ -563,6 +580,16 @@ const defaultOptions = {
|
||||
value: false,
|
||||
kind: OptionKind.BROWSER
|
||||
},
|
||||
localeProperties: {
|
||||
value: {
|
||||
lang: navigator.language || "en-US"
|
||||
},
|
||||
kind: OptionKind.BROWSER
|
||||
},
|
||||
nimbusDataStr: {
|
||||
value: "",
|
||||
kind: OptionKind.BROWSER
|
||||
},
|
||||
supportsCaretBrowsingMode: {
|
||||
value: false,
|
||||
kind: OptionKind.BROWSER
@@ -587,6 +614,14 @@ const defaultOptions = {
value: true,
kind: OptionKind.BROWSER
},
toolbarDensity: {
value: 0,
kind: OptionKind.BROWSER + OptionKind.EVENT_DISPATCH
},
altTextLearnMoreUrl: {
value: "",
kind: OptionKind.VIEWER + OptionKind.PREFERENCE
},
annotationEditorMode: {
value: 0,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE
@@ -619,6 +654,14 @@ const defaultOptions = {
value: false,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE
},
enableAltText: {
value: false,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE
},
enableGuessAltText: {
value: true,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE
},
enableHighlightEditor: {
value: false,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE
@@ -627,10 +670,6 @@ const defaultOptions = {
value: false,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE
},
enableML: {
value: false,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE
},
enablePermissions: {
value: false,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE
@@ -643,8 +682,8 @@ const defaultOptions = {
value: true,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE
},
enableStampEditor: {
value: true,
enableUpdatedAddImage: {
value: false,
kind: OptionKind.VIEWER + OptionKind.PREFERENCE
},
externalLinkRel: {
@@ -775,6 +814,11 @@ const defaultOptions = {
value: "../web/standard_fonts/",
kind: OptionKind.API
},
useSystemFonts: {
value: undefined,
kind: OptionKind.API,
type: Type.BOOLEAN + Type.UNDEFINED
},
verbosity: {
value: 1,
kind: OptionKind.API
@@ -784,7 +828,7 @@ const defaultOptions = {
kind: OptionKind.WORKER
},
workerSrc: {
value: "../build/pdf.worker.mjs",
value: "../build/pdf.worker.js",
kind: OptionKind.WORKER
}
};
@@ -807,62 +851,80 @@ const defaultOptions = {
value: false,
kind: OptionKind.VIEWER
};
defaultOptions.locale = {
value: navigator.language || "en-US",
kind: OptionKind.VIEWER
};
}
const userOptions = Object.create(null);
const userOptions = new Map();
{
for (const name in compatibilityParams) {
userOptions[name] = compatibilityParams[name];
for (const [name, value] of compatParams) {
userOptions.set(name, value);
}
}
class AppOptions {
static eventBus;
constructor() {
throw new Error("Cannot initialize AppOptions.");
}
static get(name) {
return userOptions[name] ?? defaultOptions[name]?.value ?? undefined;
return userOptions.has(name) ? userOptions.get(name) : defaultOptions[name]?.value;
}
static getAll(kind = null, defaultOnly = false) {
const options = Object.create(null);
for (const name in defaultOptions) {
const defaultOption = defaultOptions[name];
if (kind && !(kind & defaultOption.kind)) {
const defaultOpt = defaultOptions[name];
if (kind && !(kind & defaultOpt.kind)) {
continue;
}
options[name] = defaultOnly ? defaultOption.value : userOptions[name] ?? defaultOption.value;
options[name] = !defaultOnly && userOptions.has(name) ? userOptions.get(name) : defaultOpt.value;
}
return options;
}
static set(name, value) {
userOptions[name] = value;
this.setAll({
[name]: value
});
}
static setAll(options, init = false) {
if (init) {
if (this.get("disablePreferences")) {
return;
}
for (const name in userOptions) {
if (compatibilityParams[name] !== undefined) {
continue;
}
console.warn("setAll: The Preferences may override manually set AppOptions; " + 'please use the "disablePreferences"-option in order to prevent that.');
break;
}
}
static setAll(options, prefs = false) {
let events;
for (const name in options) {
userOptions[name] = options[name];
const defaultOpt = defaultOptions[name],
userOpt = options[name];
if (!defaultOpt || !(typeof userOpt === typeof defaultOpt.value || Type[(typeof userOpt).toUpperCase()] & defaultOpt.type)) {
continue;
}
const {
kind
} = defaultOpt;
if (prefs && !(kind & OptionKind.BROWSER || kind & OptionKind.PREFERENCE)) {
continue;
}
if (this.eventBus && kind & OptionKind.EVENT_DISPATCH) {
(events ||= new Map()).set(name, userOpt);
}
userOptions.set(name, userOpt);
}
if (events) {
for (const [name, value] of events) {
this.eventBus.dispatch(name.toLowerCase(), {
source: this,
value
});
}
}
}
static remove(name) {
delete userOptions[name];
const val = compatibilityParams[name];
if (val !== undefined) {
userOptions[name] = val;
}
{
AppOptions._checkDisablePreferences = () => {
if (AppOptions.get("disablePreferences")) {
return true;
}
}
for (const [name] of userOptions) {
if (compatParams.has(name)) {
continue;
}
console.warn("The Preferences may override manually set AppOptions; " + 'please use the "disablePreferences"-option to prevent that.');
break;
}
return false;
};
}
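The hunk above migrates `userOptions` from a plain object to a `Map`. A minimal sketch (illustrative names, not the pdf.js implementation itself) of why that matters: a Map's `has()`/`get()` distinguishes "explicitly set to `undefined`" from "never set", which the old `userOptions[name] ?? default` lookup could not.

```javascript
// Defaults and user overrides for a hypothetical option store.
const defaults = new Map([["workerPort", null], ["useSystemFonts", true]]);
const userOptions = new Map();

function getOption(name) {
  // Prefer an explicit user value, even when that value is undefined or falsy;
  // `??` would silently fall through to the default here.
  return userOptions.has(name) ? userOptions.get(name) : defaults.get(name);
}

// An explicit override to undefined is now honored instead of being ignored.
userOptions.set("useSystemFonts", undefined);
```

With the old `??`-based lookup, `getOption("useSystemFonts")` would have returned the default `true`; the Map-based lookup returns the explicitly set `undefined`.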

;// CONCATENATED MODULE: ./web/pdf_link_service.js
@@ -1171,26 +1233,27 @@ class PDFLinkService {
if (!(typeof zoom === "object" && typeof zoom?.name === "string")) {
return false;
}
const argsLen = args.length;
let allowNull = true;
switch (zoom.name) {
case "XYZ":
if (args.length !== 3) {
if (argsLen < 2 || argsLen > 3) {
return false;
}
break;
case "Fit":
case "FitB":
return args.length === 0;
return argsLen === 0;
case "FitH":
case "FitBH":
case "FitV":
case "FitBV":
if (args.length !== 1) {
if (argsLen > 1) {
return false;
}
break;
case "FitR":
if (args.length !== 4) {
if (argsLen !== 4) {
return false;
}
allowNull = false;
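The hunk above relaxes the destination-argument checks: "XYZ" now accepts 2 or 3 arguments instead of exactly 3, the "FitH"-family accepts at most 1, while "Fit"/"FitB" still take none and "FitR" exactly 4. A standalone sketch of those rules (function name is illustrative):

```javascript
// Validate the argument count for a PDF explicit-destination zoom mode,
// mirroring the relaxed rules from the diff above.
function isValidZoomArgs(zoomName, args) {
  const argsLen = args.length;
  switch (zoomName) {
    case "XYZ":
      return argsLen >= 2 && argsLen <= 3; // was: exactly 3
    case "Fit":
    case "FitB":
      return argsLen === 0;
    case "FitH":
    case "FitBH":
    case "FitV":
    case "FitBV":
      return argsLen <= 1; // was: exactly 1
    case "FitR":
      return argsLen === 4;
    default:
      return false;
  }
}
```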
@@ -1240,7 +1303,6 @@ const {
noContextMenu,
normalizeUnicode,
OPS,
Outliner,
PasswordResponses,
PDFDataRangeTransport,
PDFDateString,
@@ -1248,12 +1310,10 @@ const {
PermissionFlag,
PixelsPerInch,
RenderingCancelledException,
renderTextLayer,
setLayerDimensions,
shadow,
TextLayer,
UnexpectedResponseException,
updateTextLayer,
Util,
VerbosityLevel,
version,
@@ -1401,40 +1461,28 @@ class BaseExternalServices {
updateEditorStates(data) {
throw new Error("Not implemented: updateEditorStates");
}
async getNimbusExperimentData() {}
async getGlobalEventNames() {
return null;
}
dispatchGlobalEvent(_event) {}
}

;// CONCATENATED MODULE: ./web/preferences.js

class BasePreferences {
#browserDefaults = Object.freeze({
canvasMaxAreaInBytes: -1,
isInAutomation: false,
supportsCaretBrowsingMode: false,
supportsDocumentFonts: true,
supportsIntegratedFind: false,
supportsMouseWheelZoomCtrlKey: true,
supportsMouseWheelZoomMetaKey: true,
supportsPinchToZoom: true
});
#defaults = Object.freeze({
altTextLearnMoreUrl: "",
annotationEditorMode: 0,
annotationMode: 2,
cursorToolOnLoad: 0,
defaultZoomDelay: 400,
defaultZoomValue: "",
disablePageLabels: false,
enableAltText: false,
enableGuessAltText: true,
enableHighlightEditor: false,
enableHighlightFloatingButton: false,
enableML: false,
enablePermissions: false,
enablePrintAutoRotate: true,
enableScripting: true,
enableStampEditor: true,
enableUpdatedAddImage: false,
externalLinkTarget: 0,
highlightEditorColors: "yellow=#FFFF98,green=#53FFBC,blue=#80EBFF,pink=#FFCBE6,red=#FF4F5F",
historyUpdateUrl: false,
@@ -1456,7 +1504,6 @@ class BasePreferences {
enableXfa: true,
viewerCssTheme: 0
});
#prefs = Object.create(null);
#initializedPromise = null;
constructor() {
if (this.constructor === BasePreferences) {
@@ -1466,16 +1513,13 @@ class BasePreferences {
browserPrefs,
prefs
}) => {
const options = Object.create(null);
for (const [name, val] of Object.entries(this.#browserDefaults)) {
const prefVal = browserPrefs?.[name];
options[name] = typeof prefVal === typeof val ? prefVal : val;
if (AppOptions._checkDisablePreferences()) {
return;
}
for (const [name, val] of Object.entries(this.#defaults)) {
const prefVal = prefs?.[name];
options[name] = this.#prefs[name] = typeof prefVal === typeof val ? prefVal : val;
}
AppOptions.setAll(options, true);
AppOptions.setAll({
...browserPrefs,
...prefs
}, true);
});
}
async _writeToStorage(prefObj) {
@@ -1484,58 +1528,21 @@ class BasePreferences {
async _readFromStorage(prefObj) {
throw new Error("Not implemented: _readFromStorage");
}
#updatePref({
name,
value
}) {
throw new Error("Not implemented: #updatePref");
}
async reset() {
await this.#initializedPromise;
const oldPrefs = structuredClone(this.#prefs);
this.#prefs = Object.create(null);
try {
await this._writeToStorage(this.#defaults);
} catch (reason) {
this.#prefs = oldPrefs;
throw reason;
}
AppOptions.setAll(this.#defaults, true);
await this._writeToStorage(this.#defaults);
}
async set(name, value) {
await this.#initializedPromise;
const defaultValue = this.#defaults[name],
oldPrefs = structuredClone(this.#prefs);
if (defaultValue === undefined) {
throw new Error(`Set preference: "${name}" is undefined.`);
} else if (value === undefined) {
throw new Error("Set preference: no value is specified.");
}
const valueType = typeof value,
defaultType = typeof defaultValue;
if (valueType !== defaultType) {
if (valueType === "number" && defaultType === "string") {
value = value.toString();
} else {
throw new Error(`Set preference: "${value}" is a ${valueType}, expected a ${defaultType}.`);
}
} else if (valueType === "number" && !Number.isInteger(value)) {
throw new Error(`Set preference: "${value}" must be an integer.`);
}
this.#prefs[name] = value;
try {
await this._writeToStorage(this.#prefs);
} catch (reason) {
this.#prefs = oldPrefs;
throw reason;
}
AppOptions.setAll({
[name]: value
}, true);
await this._writeToStorage(AppOptions.getAll(OptionKind.PREFERENCE));
}
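The `set` path above coerces a number to a string when the stored default is a string, and rejects any other type mismatch as well as non-integer numbers. A hedged, standalone sketch of that validation (function name is illustrative, not part of pdf.js):

```javascript
// Validate a preference value against its default, mirroring the checks in
// BasePreferences.set(): number -> string coercion is allowed when the
// default is a string; everything else must match the default's type, and
// numeric preferences must be integers.
function validatePrefValue(value, defaultValue) {
  const valueType = typeof value,
    defaultType = typeof defaultValue;
  if (valueType !== defaultType) {
    if (valueType === "number" && defaultType === "string") {
      return value.toString();
    }
    throw new Error(`"${value}" is a ${valueType}, expected a ${defaultType}.`);
  }
  if (valueType === "number" && !Number.isInteger(value)) {
    throw new Error(`"${value}" must be an integer.`);
  }
  return value;
}
```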
async get(name) {
await this.#initializedPromise;
const defaultValue = this.#defaults[name];
if (defaultValue === undefined) {
throw new Error(`Get preference: "${name}" is undefined.`);
}
return this.#prefs[name] ?? defaultValue;
return AppOptions.get(name);
}
get initializedPromise() {
return this.#initializedPromise;
@@ -3098,13 +3105,19 @@ class Preferences extends BasePreferences {
}
class ExternalServices extends BaseExternalServices {
async createL10n() {
return new genericl10n_GenericL10n(AppOptions.get("locale"));
return new genericl10n_GenericL10n(AppOptions.get("localeProperties")?.lang);
}
createScripting() {
return new GenericScripting(AppOptions.get("sandboxBundleSrc"));
}
}
class MLManager {
async isEnabledFor(_name) {
return false;
}
async deleteModel(_service) {
return null;
}
async guess() {
return null;
}
@@ -8411,6 +8424,9 @@ class AnnotationLayerBuilder {
}
this.div.hidden = true;
}
hasEditableAnnotations() {
return !!this.annotationLayer?.hasEditableAnnotations();
}
#updatePresentationModeState(state) {
if (!this.div) {
return;
@@ -9142,6 +9158,7 @@ class PDFPageView {
#annotationMode = AnnotationMode.ENABLE_FORMS;
#enableHWA = false;
#hasRestrictedScaling = false;
#isEditing = false;
#layerProperties = null;
#loadingId = null;
#previousRotation = null;
@@ -9296,6 +9313,9 @@ class PDFPageView {
this.reset();
this.pdfPage?.cleanup();
}
hasEditableAnnotations() {
return !!this.annotationLayer?.hasEditableAnnotations();
}
get _textHighlighter() {
return shadow(this, "_textHighlighter", new TextHighlighter({
pageIndex: this.id - 1,
@@ -9472,6 +9492,19 @@ class PDFPageView {
this._resetZoomLayer();
}
}
toggleEditingMode(isEditing) {
if (!this.hasEditableAnnotations()) {
return;
}
this.#isEditing = isEditing;
this.reset({
keepZoomLayer: true,
keepAnnotationLayer: true,
keepAnnotationEditorLayer: true,
keepXfaLayer: true,
keepTextLayer: true
});
}
update({
scale = 0,
rotation = null,
@@ -9822,7 +9855,8 @@ class PDFPageView {
annotationMode: this.#annotationMode,
optionalContentConfigPromise: this._optionalContentConfigPromise,
annotationCanvasMap: this._annotationCanvasMap,
pageColors
pageColors,
isEditing: this.#isEditing
};
const renderTask = this.renderTask = pdfPage.render(renderContext);
renderTask.onContinue = renderContinueCallback;
@@ -9982,8 +10016,11 @@ class PDFViewer {
#enableHWA = false;
#enableHighlightFloatingButton = false;
#enablePermissions = false;
#enableUpdatedAddImage = false;
#eventAbortController = null;
#mlManager = null;
#onPageRenderedCallback = null;
#switchAnnotationEditorModeTimeoutId = null;
#getAllTextInProgress = false;
#hiddenCopyElement = null;
#interruptCopyCondition = false;
@@ -9993,7 +10030,7 @@ class PDFViewer {
#scaleTimeoutId = null;
#textLayerMode = TextLayerMode.ENABLE;
constructor(options) {
const viewerVersion = "4.4.168";
const viewerVersion = "4.5.136";
if (version !== viewerVersion) {
throw new Error(`The API version "${version}" does not match the Viewer version "${viewerVersion}".`);
}
@@ -10020,6 +10057,7 @@ class PDFViewer {
this.#annotationEditorMode = options.annotationEditorMode ?? AnnotationEditorType.NONE;
this.#annotationEditorHighlightColors = options.annotationEditorHighlightColors || null;
this.#enableHighlightFloatingButton = options.enableHighlightFloatingButton === true;
this.#enableUpdatedAddImage = options.enableUpdatedAddImage === true;
this.imageResourcesPath = options.imageResourcesPath || "";
this.enablePrintAutoRotate = options.enablePrintAutoRotate || false;
this.removePageBorders = options.removePageBorders || false;
@@ -10425,7 +10463,7 @@ class PDFViewer {
if (pdfDocument.isPureXfa) {
console.warn("Warning: XFA-editing is not implemented.");
} else if (isValidAnnotationEditorMode(mode)) {
this.#annotationEditorUIManager = new AnnotationEditorUIManager(this.container, viewer, this.#altTextManager, eventBus, pdfDocument, pageColors, this.#annotationEditorHighlightColors, this.#enableHighlightFloatingButton, this.#mlManager);
this.#annotationEditorUIManager = new AnnotationEditorUIManager(this.container, viewer, this.#altTextManager, eventBus, pdfDocument, pageColors, this.#annotationEditorHighlightColors, this.#enableHighlightFloatingButton, this.#enableUpdatedAddImage, this.#mlManager);
eventBus.dispatch("annotationeditoruimanager", {
source: this,
uiManager: this.#annotationEditorUIManager
@@ -10584,6 +10622,7 @@ class PDFViewer {
this.viewer.removeAttribute("lang");
this.#hiddenCopyElement?.remove();
this.#hiddenCopyElement = null;
this.#cleanupSwitchAnnotationEditorMode();
}
#ensurePageViewVisible() {
if (this._scrollMode !== ScrollMode.PAGE) {
@@ -10956,6 +10995,34 @@ class PDFViewer {
location: this._location
});
}
#switchToEditAnnotationMode() {
const visible = this._getVisiblePages();
const pagesToRefresh = [];
const {
ids,
views
} = visible;
for (const page of views) {
const {
view
} = page;
if (!view.hasEditableAnnotations()) {
ids.delete(view.id);
continue;
}
pagesToRefresh.push(page);
}
if (pagesToRefresh.length === 0) {
return null;
}
this.renderingQueue.renderHighestPriority({
first: pagesToRefresh[0],
last: pagesToRefresh.at(-1),
views: pagesToRefresh,
ids
});
return ids;
}
containsElement(element) {
return this.container.contains(element);
}
@@ -11388,6 +11455,16 @@ class PDFViewer {
get containerTopLeft() {
return this.#containerTopLeft ||= [this.container.offsetTop, this.container.offsetLeft];
}
#cleanupSwitchAnnotationEditorMode() {
if (this.#onPageRenderedCallback) {
this.eventBus._off("pagerendered", this.#onPageRenderedCallback);
this.#onPageRenderedCallback = null;
}
if (this.#switchAnnotationEditorModeTimeoutId !== null) {
clearTimeout(this.#switchAnnotationEditorModeTimeoutId);
this.#switchAnnotationEditorModeTimeoutId = null;
}
}
get annotationEditorMode() {
return this.#annotationEditorUIManager ? this.#annotationEditorMode : AnnotationEditorType.DISABLE;
}
@@ -11408,12 +11485,47 @@ class PDFViewer {
if (!this.pdfDocument) {
return;
}
this.#annotationEditorMode = mode;
this.eventBus.dispatch("annotationeditormodechanged", {
source: this,
mode
});
this.#annotationEditorUIManager.updateMode(mode, editId, isFromKeyboard);
const {
eventBus
} = this;
const updater = () => {
this.#cleanupSwitchAnnotationEditorMode();
this.#annotationEditorMode = mode;
this.#annotationEditorUIManager.updateMode(mode, editId, isFromKeyboard);
eventBus.dispatch("annotationeditormodechanged", {
source: this,
mode
});
};
if (mode === AnnotationEditorType.NONE || this.#annotationEditorMode === AnnotationEditorType.NONE) {
const isEditing = mode !== AnnotationEditorType.NONE;
if (!isEditing) {
this.pdfDocument.annotationStorage.resetModifiedIds();
}
for (const pageView of this._pages) {
pageView.toggleEditingMode(isEditing);
}
const idsToRefresh = this.#switchToEditAnnotationMode();
if (isEditing && idsToRefresh) {
this.#cleanupSwitchAnnotationEditorMode();
this.#onPageRenderedCallback = ({
pageNumber
}) => {
idsToRefresh.delete(pageNumber);
if (idsToRefresh.size === 0) {
this.#switchAnnotationEditorModeTimeoutId = setTimeout(updater, 0);
}
};
const {
signal
} = this.#eventAbortController;
eventBus._on("pagerendered", this.#onPageRenderedCallback, {
signal
});
return;
}
updater();
}
set annotationEditorParams({
type,
@@ -11721,7 +11833,7 @@ class SecondaryToolbar {

class Toolbar {
#opts;
constructor(options, eventBus) {
constructor(options, eventBus, toolbarDensity = 0) {
this.#opts = options;
this.eventBus = eventBus;
const buttons = [{
@@ -11806,8 +11918,13 @@ class Toolbar {
break;
}
});
eventBus._on("toolbardensity", this.#updateToolbarDensity.bind(this));
this.#updateToolbarDensity({
value: toolbarDensity
});
this.reset();
}
#updateToolbarDensity() {}
#setAnnotationEditorUIManager(uiManager, parentContainer) {
const colorPicker = new ColorPicker({
uiManager
@@ -12078,7 +12195,6 @@ class ViewHistory {


const FORCE_PAGES_LOADED_TIMEOUT = 10000;
const WHEEL_ZOOM_DISABLED_TIMEOUT = 1000;
const ViewOnLoad = {
UNKNOWN: -1,
PREVIOUS: 0,
@@ -12110,18 +12226,17 @@ const PDFViewerApplication = {
store: null,
downloadManager: null,
overlayManager: null,
preferences: null,
preferences: new Preferences(),
toolbar: null,
secondaryToolbar: null,
eventBus: null,
l10n: null,
annotationEditorParams: null,
isInitialViewSet: false,
downloadComplete: false,
isViewerEmbedded: window.parent !== window,
url: "",
baseUrl: "",
_allowedGlobalEventsPromise: null,
mlManager: null,
_downloadUrl: "",
_eventBusAbortController: null,
_windowAbortController: null,
@@ -12141,11 +12256,9 @@ const PDFViewerApplication = {
_printAnnotationStoragePromise: null,
_touchInfo: null,
_isCtrlKeyDown: false,
_nimbusDataPromise: null,
_caretBrowsing: null,
_isScrolling: false,
async initialize(appConfig) {
let l10nPromise;
this.appConfig = appConfig;
try {
await this.preferences.initializedPromise;
@@ -12167,8 +12280,7 @@ const PDFViewerApplication = {
if (mode) {
document.documentElement.classList.add(mode);
}
l10nPromise = this.externalServices.createL10n();
this.l10n = await l10nPromise;
this.l10n = await this.externalServices.createL10n();
document.getElementsByTagName("html")[0].dir = this.l10n.getDirection();
this.l10n.translate(appConfig.appContainer || document.documentElement);
if (this.isViewerEmbedded && AppOptions.get("externalLinkTarget") === LinkTarget.NONE) {
@@ -12257,7 +12369,9 @@ const PDFViewerApplication = {
}
}
if (params.has("locale")) {
AppOptions.set("locale", params.get("locale"));
AppOptions.set("localeProperties", {
lang: params.get("locale")
});
}
},
async _initializeViewerComponents() {
@@ -12318,6 +12432,7 @@ const PDFViewerApplication = {
annotationEditorMode,
annotationEditorHighlightColors: AppOptions.get("highlightEditorColors"),
enableHighlightFloatingButton: AppOptions.get("enableHighlightFloatingButton"),
enableUpdatedAddImage: AppOptions.get("enableUpdatedAddImage"),
imageResourcesPath: AppOptions.get("imageResourcesPath"),
enablePrintAutoRotate: AppOptions.get("enablePrintAutoRotate"),
maxCanvasPixels: AppOptions.get("maxCanvasPixels"),
@@ -12355,9 +12470,6 @@ const PDFViewerApplication = {
}
if (appConfig.annotationEditorParams) {
if (annotationEditorMode !== AnnotationEditorType.DISABLE) {
if (AppOptions.get("enableStampEditor")) {
appConfig.toolbar?.editorStampButton?.classList.remove("hidden");
}
const editorHighlightButton = appConfig.toolbar?.editorHighlightButton;
if (editorHighlightButton && AppOptions.get("enableHighlightEditor")) {
editorHighlightButton.hidden = false;
@@ -12380,7 +12492,7 @@ const PDFViewerApplication = {
});
}
if (appConfig.toolbar) {
this.toolbar = new Toolbar(appConfig.toolbar, eventBus);
this.toolbar = new Toolbar(appConfig.toolbar, eventBus, AppOptions.get("toolbarDensity"));
}
if (appConfig.secondaryToolbar) {
this.secondaryToolbar = new SecondaryToolbar(appConfig.secondaryToolbar, eventBus);
@@ -12437,7 +12549,6 @@ const PDFViewerApplication = {
}
},
async run(config) {
this.preferences = new Preferences();
await this.initialize(config);
const {
appConfig,
@@ -12514,9 +12625,6 @@ const PDFViewerApplication = {
get externalServices() {
return shadow(this, "externalServices", new ExternalServices());
},
get mlManager() {
return shadow(this, "mlManager", AppOptions.get("enableML") === true ? new MLManager() : null);
},
get initialized() {
return this._initializedCapability.settled;
},
@@ -12597,12 +12705,10 @@ const PDFViewerApplication = {
let title = pdfjs_getPdfFilenameFromUrl(url, "");
if (!title) {
try {
title = decodeURIComponent(getFilenameFromUrl(url)) || url;
} catch {
title = url;
}
title = decodeURIComponent(getFilenameFromUrl(url));
} catch {}
}
this.setTitle(title);
this.setTitle(title || url);
},
setTitle(title = this._title) {
this._title = title;
@@ -12648,7 +12754,6 @@ const PDFViewerApplication = {
this.pdfLinkService.externalLinkEnabled = true;
this.store = null;
this.isInitialViewSet = false;
this.downloadComplete = false;
this.url = "";
this.baseUrl = "";
this._downloadUrl = "";
@@ -12724,9 +12829,7 @@ const PDFViewerApplication = {
async download(options = {}) {
let data;
try {
if (this.downloadComplete) {
data = await this.pdfDocument.getData();
}
data = await this.pdfDocument.getData();
} catch {}
this.downloadManager.download(data, this._downloadUrl, this._docFilename, options);
},
@@ -12793,11 +12896,8 @@ const PDFViewerApplication = {
return message;
},
progress(level) {
if (!this.loadingBar || this.downloadComplete) {
return;
}
const percent = Math.round(level * 100);
if (percent <= this.loadingBar.percent) {
if (!this.loadingBar || percent <= this.loadingBar.percent) {
return;
}
this.loadingBar.percent = percent;
@@ -12811,7 +12911,6 @@ const PDFViewerApplication = {
length
}) => {
this._contentLength = length;
this.downloadComplete = true;
this.loadingBar?.hide();
firstPagePromise.then(() => {
this.eventBus.dispatch("documentloaded", {
@@ -13413,9 +13512,6 @@ const PDFViewerApplication = {
});
}
addWindowResolutionChange();
window.addEventListener("visibilitychange", webViewerVisibilityChange, {
signal
});
window.addEventListener("wheel", webViewerWheel, {
passive: false,
signal
@@ -13730,7 +13826,7 @@ function webViewerHashchange(evt) {
}
}
{
/*var webViewerFileInputChange = function (evt) {
var webViewerFileInputChange = function (evt) {
if (PDFViewerApplication.pdfViewer?.isInPresentationMode) {
return;
}
@@ -13742,7 +13838,7 @@ function webViewerHashchange(evt) {
};
var webViewerOpenFile = function (evt) {
PDFViewerApplication._openFileInput?.click();
};*/
};
}
function webViewerPresentationMode() {
PDFViewerApplication.requestPresentationMode();
@@ -13876,20 +13972,6 @@ function webViewerPageChanging({
function webViewerResolutionChange(evt) {
PDFViewerApplication.pdfViewer.refresh();
}
function webViewerVisibilityChange(evt) {
if (document.visibilityState === "visible") {
setZoomDisabledTimeout();
}
}
let zoomDisabledTimeout = null;
function setZoomDisabledTimeout() {
if (zoomDisabledTimeout) {
clearTimeout(zoomDisabledTimeout);
}
zoomDisabledTimeout = setTimeout(function () {
zoomDisabledTimeout = null;
}, WHEEL_ZOOM_DISABLED_TIMEOUT);
}
function webViewerWheel(evt) {
const {
pdfViewer,
@@ -13907,7 +13989,7 @@ function webViewerWheel(evt) {
const origin = [evt.clientX, evt.clientY];
if (isPinchToZoom || evt.ctrlKey && supportsMouseWheelZoomCtrlKey || evt.metaKey && supportsMouseWheelZoomMetaKey) {
evt.preventDefault();
if (PDFViewerApplication._isScrolling || zoomDisabledTimeout || document.visibilityState === "hidden" || PDFViewerApplication.overlayManager.active) {
if (PDFViewerApplication._isScrolling || document.visibilityState === "hidden" || PDFViewerApplication.overlayManager.active) {
return;
}
if (isPinchToZoom && supportsPinchToZoom) {
@@ -14335,14 +14417,20 @@ function webViewerReportTelemetry({
}) {
PDFViewerApplication.externalServices.reportTelemetry(details);
}
function webViewerSetPreference({
name,
value
}) {
PDFViewerApplication.preferences.set(name, value);
}

;// CONCATENATED MODULE: ./web/viewer.js




const pdfjsVersion = "4.4.168";
const pdfjsBuild = "19fbc8998";
const pdfjsVersion = "4.5.136";
const pdfjsBuild = "3a21f03b0";
const AppConstants = {
LinkTarget: LinkTarget,
RenderingStates: RenderingStates,
@@ -49,7 +49,7 @@ function elementSorter(a, b) {
return 0;
}

// Generic control/related handler to show/hide fields based on a checkbox' value
// Generic control/related handler to show/hide fields based on a 'checkbox' value
// e.g.
// <input type="checkbox" data-control="stuff-to-show">
// <div data-related="stuff-to-show">...</div>
@@ -63,7 +63,7 @@ $(document).on("change", "input[type=\"checkbox\"][data-control]", function () {
});
});

// Generic control/related handler to show/hide fields based on a select' value
// Generic control/related handler to show/hide fields based on a 'select' value
$(document).on("change", "select[data-control]", function() {
var $this = $(this);
var name = $this.data("control");
@@ -79,7 +79,7 @@ $(document).on("change", "select[data-control]", function() {
}
});

// Generic control/related handler to show/hide fields based on a select' value
// Generic control/related handler to show/hide fields based on a 'select' value
// this one is made to show all values if select value is not 0
$(document).on("change", "select[data-controlall]", function() {
var $this = $(this);
@@ -130,8 +130,13 @@ $(".container-fluid").bind('drop', function (e) {
}
});
if (dt.files.length) {
$("#btn-upload")[0].files = dt.files;
$("#form-upload").submit();
if($("#btn-upload-format").length) {
$("#btn-upload-format")[0].files = dt.files;
$("#form-upload-format").submit();
} else {
$("#btn-upload")[0].files = dt.files;
$("#form-upload").submit();
}
}
}
});
@@ -140,12 +145,25 @@ $("#btn-upload").change(function() {
$("#form-upload").submit();
});

$("#btn-upload-format").change(function() {
$("#form-upload-format").submit();
});


$("#form-upload").uploadprogress({
redirect_url: getPath() + "/", //"{{ url_for('web.index')}}",
uploadedMsg: $("#form-upload").data("message"), //"{{_('Upload done, processing, please wait...')}}",
modalTitle: $("#form-upload").data("title"), //"{{_('Uploading...')}}",
modalFooter: $("#form-upload").data("footer"), //"{{_('Close')}}",
modalTitleFailed: $("#form-upload").data("failed") //"{{_('Error')}}"
redirect_url: getPath() + "/",
uploadedMsg: $("#form-upload").data("message"),
modalTitle: $("#form-upload").data("title"),
modalFooter: $("#form-upload").data("footer"),
modalTitleFailed: $("#form-upload").data("failed")
});

$("#form-upload-format").uploadprogress({
redirect_url: getPath() + "/",
uploadedMsg: $("#form-upload-format").data("message"),
modalTitle: $("#form-upload-format").data("title"),
modalFooter: $("#form-upload-format").data("footer"),
modalTitleFailed: $("#form-upload-format").data("failed")
});

$(document).ready(function() {
@@ -604,6 +622,7 @@ $(function() {
});

$("#toggle_order_shelf").click(function() {
$("#toggle_order_shelf").toggleClass("dummy");
$("#new").toggleClass("disabled");
$("#old").toggleClass("disabled");
$("#asc").toggleClass("disabled");
@@ -612,9 +631,20 @@ $(function() {
$("#auth_za").toggleClass("disabled");
$("#pub_new").toggleClass("disabled");
$("#pub_old").toggleClass("disabled");
$("#shelf_new").toggleClass("disabled");
$("#shelf_old").toggleClass("disabled");
var alternative_text = $("#toggle_order_shelf").data('alt-text');
var status = $("#toggle_order_shelf").hasClass("dummy") ? "on" : "off";
$("#toggle_order_shelf").data('alt-text', $("#toggle_order_shelf").html());
$("#toggle_order_shelf").html(alternative_text);

$.ajax({
method:"post",
contentType: "application/json; charset=utf-8",
dataType: "json",
url: getPath() + "/ajax/view",
data: "{\"shelf\": {\"man\": \"" + status + "\"}}",
});
});
|
||||
|
||||
$("#btndeluser").click(function() {
|
||||
@@ -696,20 +726,20 @@ $(function() {
|
||||
url: getPath() + "/ajax/simulatedbchange",
|
||||
data: {config_calibre_dir: $("#config_calibre_dir").val(), csrf_token: $("input[name='csrf_token']").val()},
|
||||
success: function success(data) {
|
||||
if ( data.change ) {
|
||||
if ( data.valid ) {
|
||||
if ( !data.valid ) {
|
||||
$("#InvalidDialog").modal('show');
|
||||
}
|
||||
else{
|
||||
if ( data.change ) {
|
||||
confirmDialog(
|
||||
"db_submit",
|
||||
"GeneralChangeModal",
|
||||
0,
|
||||
changeDbSettings
|
||||
);
|
||||
}
|
||||
else {
|
||||
$("#InvalidDialog").modal('show');
|
||||
}
|
||||
} else {
|
||||
} else {
|
||||
changeDbSettings();
|
||||
}
|
||||
}
|
||||
}
|
||||
});
|
||||
|
@@ -1,54 +0,0 @@
|
||||
/**
|
||||
* waits until queue is finished, meaning the book is done loading
|
||||
* @param callback
|
||||
*/
|
||||
function qFinished(callback){
|
||||
let timeout=setInterval(()=>{
|
||||
if(reader.rendition.q.running===undefined)
|
||||
clearInterval(timeout);
|
||||
callback();
|
||||
},300
|
||||
)
|
||||
}
|
||||
|
||||
function calculateProgress(){
|
||||
let data=reader.rendition.location.end;
|
||||
return Math.round(epub.locations.percentageFromCfi(data.cfi)*100);
|
||||
}
|
||||
|
||||
// register new event emitter locationchange that fires on urlchange
|
||||
// source: https://stackoverflow.com/a/52809105/21941129
|
||||
(() => {
|
||||
let oldPushState = history.pushState;
|
||||
history.pushState = function pushState() {
|
||||
let ret = oldPushState.apply(this, arguments);
|
||||
window.dispatchEvent(new Event('locationchange'));
|
||||
return ret;
|
||||
};
|
||||
|
||||
let oldReplaceState = history.replaceState;
|
||||
history.replaceState = function replaceState() {
|
||||
let ret = oldReplaceState.apply(this, arguments);
|
||||
window.dispatchEvent(new Event('locationchange'));
|
||||
return ret;
|
||||
};
|
||||
|
||||
window.addEventListener('popstate', () => {
|
||||
window.dispatchEvent(new Event('locationchange'));
|
||||
});
|
||||
})();
|
||||
|
||||
window.addEventListener('locationchange',()=>{
|
||||
let newPos=calculateProgress();
|
||||
progressDiv.textContent=newPos+"%";
|
||||
});
|
||||
|
||||
var epub=ePub(calibre.bookUrl)
|
||||
|
||||
let progressDiv=document.getElementById("progress");
|
||||
|
||||
qFinished(()=>{
|
||||
epub.locations.generate().then(()=> {
|
||||
window.dispatchEvent(new Event('locationchange'))
|
||||
});
|
||||
})
|
@@ -52,6 +52,32 @@ var reader;
|
||||
}
|
||||
});
|
||||
|
||||
// Update progress percentage
|
||||
let progressDiv = document.getElementById("progress");
|
||||
reader.book.ready.then((()=>{
|
||||
let locations_key = reader.book.key()+'-locations';
|
||||
let stored_locations = localStorage.getItem(locations_key);
|
||||
let make_locations, save_locations;
|
||||
if (stored_locations) {
|
||||
make_locations = Promise.resolve(reader.book.locations.load(stored_locations));
|
||||
// No-op because locations are already saved
|
||||
save_locations = ()=>{};
|
||||
} else {
|
||||
make_locations = reader.book.locations.generate();
|
||||
save_locations = ()=>{
|
||||
localStorage.setItem(locations_key, reader.book.locations.save());
|
||||
};
|
||||
}
|
||||
make_locations.then(()=>{
|
||||
reader.rendition.on('relocated', (location)=>{
|
||||
let percentage = Math.round(location.end.percentage*100);
|
||||
progressDiv.textContent=percentage+"%";
|
||||
});
|
||||
reader.rendition.reportLocation();
|
||||
progressDiv.style.visibility = "visible";
|
||||
}).then(save_locations);
|
||||
}));
|
||||
|
||||
/**
|
||||
* @param {string} action - Add or remove bookmark
|
||||
* @param {string|int} location - Location or zero
|
||||
|
21
cps/static/js/reading/locationchange-polyfill.js
Normal file
@@ -0,0 +1,21 @@
// register new event emitter locationchange that fires on urlchange
// source: https://stackoverflow.com/a/52809105/21941129
(() => {
let oldPushState = history.pushState;
history.pushState = function pushState() {
let ret = oldPushState.apply(this, arguments);
window.dispatchEvent(new Event('locationchange'));
return ret;
};

let oldReplaceState = history.replaceState;
history.replaceState = function replaceState() {
let ret = oldReplaceState.apply(this, arguments);
window.dispatchEvent(new Event('locationchange'));
return ret;
};

window.addEventListener('popstate', () => {
window.dispatchEvent(new Event('locationchange'));
});
})();
@@ -685,6 +685,17 @@ function ratingFormatter(value, row) {
return (value/2);
}

function seriesIndexFormatter(value, row) {
if (!value) {
return value;
}
formated_value = Number(value).toFixed(2);
if (formated_value.endsWith(".00")) {
formated_value = parseInt(formated_value).toString();
}
return formated_value;
}


/* Do some hiding disabling after user list is loaded */
function loadSuccess() {
@@ -849,6 +860,7 @@ function BookCheckboxChange(checkbox, userId, field) {
},
success: handleListServerResponse
});
console.log("test");
}
@@ -1,8 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2019 decentral1se
# Copyright (C) 2024 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -16,28 +15,9 @@
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# """Calibre-web distribution package setuptools installer."""

from setuptools import setup
import os
import re
import codecs

here = os.path.abspath(os.path.dirname(__file__))

def read(*parts):
with codecs.open(os.path.join(here, *parts), 'r') as fp:
return fp.read()
def strip_whitespaces(text):
return re.sub(r"(^[\s\u200B-\u200D\ufeff]+)|([\s\u200B-\u200D\ufeff]+$)","", text)

def find_version(*file_paths):
version_file = read(*file_paths)
version_match = re.search(r"^STABLE_VERSION\s+=\s+{['\"]version['\"]:\s*['\"](.*)['\"]}",
version_file, re.M)
if version_match:
return version_match.group(1)
raise RuntimeError("Unable to find version string.")

setup(
version=find_version("src", "calibreweb", "cps", "constants.py")
)
@@ -1,339 +1,355 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2020 pwr
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import os
import re
from glob import glob
from shutil import copyfile, copyfileobj
from markupsafe import escape
from time import time
from uuid import uuid4

from sqlalchemy.exc import SQLAlchemyError
from flask_babel import lazy_gettext as N_

from cps.services.worker import CalibreTask
from cps import db
from cps import logger, config
from cps.subproc_wrapper import process_open
from flask_babel import gettext as _
from cps.kobo_sync_status import remove_synced_book
from cps.ub import init_db_thread
from cps.file_helper import get_temp_dir

from cps.tasks.mail import TaskEmail
from cps import gdriveutils, helper
from cps.constants import SUPPORTED_CALIBRE_BINARIES

log = logger.create()

current_milli_time = lambda: int(round(time() * 1000))


class TaskConvert(CalibreTask):
def __init__(self, file_path, book_id, task_message, settings, ereader_mail, user=None):
super(TaskConvert, self).__init__(task_message)
self.worker_thread = None
self.file_path = file_path
self.book_id = book_id
self.title = ""
self.settings = settings
self.ereader_mail = ereader_mail
self.user = user

self.results = dict()

def run(self, worker_thread):
self.worker_thread = worker_thread
if config.config_use_google_drive:
worker_db = db.CalibreDB(expire_on_commit=False, init=True)
cur_book = worker_db.get_book(self.book_id)
self.title = cur_book.title
data = worker_db.get_book_format(self.book_id, self.settings['old_book_format'])
df = gdriveutils.getFileFromEbooksFolder(cur_book.path,
data.name + "." + self.settings['old_book_format'].lower())
df_cover = gdriveutils.getFileFromEbooksFolder(cur_book.path, "cover.jpg")
if df:
datafile_cover = None
datafile = os.path.join(config.get_book_path(),
cur_book.path,
data.name + "." + self.settings['old_book_format'].lower())
if df_cover:
datafile_cover = os.path.join(config.get_book_path(),
cur_book.path, "cover.jpg")
if not os.path.exists(os.path.join(config.get_book_path(), cur_book.path)):
os.makedirs(os.path.join(config.get_book_path(), cur_book.path))
df.GetContentFile(datafile)
if df_cover:
df_cover.GetContentFile(datafile_cover)
worker_db.session.close()
else:
# ToDo Include cover in error handling
error_message = _("%(format)s not found on Google Drive: %(fn)s",
format=self.settings['old_book_format'],
fn=data.name + "." + self.settings['old_book_format'].lower())
worker_db.session.close()
return self._handleError(error_message)

filename = self._convert_ebook_format()
if config.config_use_google_drive:
os.remove(self.file_path + '.' + self.settings['old_book_format'].lower())
if df_cover:
os.remove(os.path.join(config.config_calibre_dir, cur_book.path, "cover.jpg"))

if filename:
if config.config_use_google_drive:
# Upload files to gdrive
gdriveutils.updateGdriveCalibreFromLocal()
self._handleSuccess()
if self.ereader_mail:
# if we're sending to E-Reader after converting, create a one-off task and run it immediately
# todo: figure out how to incorporate this into the progress
try:
EmailText = N_(u"%(book)s send to E-Reader", book=escape(self.title))
for email in self.ereader_mail.split(','):
email = email.strip()
worker_thread.add(self.user, TaskEmail(self.settings['subject'],
self.results["path"],
filename,
self.settings,
email,
EmailText,
self.settings['body'],
id=self.book_id,
internal=True)
)
except Exception as ex:
return self._handleError(str(ex))

def _convert_ebook_format(self):
error_message = None
local_db = db.CalibreDB(expire_on_commit=False, init=True)
file_path = self.file_path
book_id = self.book_id
format_old_ext = '.' + self.settings['old_book_format'].lower()
format_new_ext = '.' + self.settings['new_book_format'].lower()

# check to see if destination format already exists - or if book is in database
# if it does - mark the conversion task as complete and return a success
# this will allow to send to E-Reader workflow to continue to work
if os.path.isfile(file_path + format_new_ext) or\
local_db.get_book_format(self.book_id, self.settings['new_book_format']):
log.info("Book id %d already converted to %s", book_id, format_new_ext)
cur_book = local_db.get_book(book_id)
self.title = cur_book.title
self.results['path'] = cur_book.path
self.results['title'] = self.title
new_format = local_db.session.query(db.Data).filter(db.Data.book == book_id)\
.filter(db.Data.format == self.settings['new_book_format'].upper()).one_or_none()
if not new_format:
new_format = db.Data(name=os.path.basename(file_path),
book_format=self.settings['new_book_format'].upper(),
book=book_id, uncompressed_size=os.path.getsize(file_path + format_new_ext))
try:
local_db.session.merge(new_format)
local_db.session.commit()
except SQLAlchemyError as e:
local_db.session.rollback()
log.error("Database error: %s", e)
local_db.session.close()
self._handleError(N_("Oops! Database Error: %(error)s.", error=e))
return
self._handleSuccess()
local_db.session.close()
return os.path.basename(file_path + format_new_ext)
else:
log.info("Book id %d - target format of %s does not exist. Moving forward with convert.",
book_id,
format_new_ext)

if config.config_kepubifypath and format_old_ext == '.epub' and format_new_ext == '.kepub':
check, error_message = self._convert_kepubify(file_path,
format_old_ext,
format_new_ext)
else:
# check if calibre converter-executable is existing
if not os.path.exists(config.config_converterpath):
self._handleError(N_("Calibre ebook-convert %(tool)s not found", tool=config.config_converterpath))
return
has_cover = local_db.get_book(book_id).has_cover
check, error_message = self._convert_calibre(file_path, format_old_ext, format_new_ext, has_cover)

if check == 0:
cur_book = local_db.get_book(book_id)
if os.path.isfile(file_path + format_new_ext):
new_format = local_db.session.query(db.Data).filter(db.Data.book == book_id) \
.filter(db.Data.format == self.settings['new_book_format'].upper()).one_or_none()
if not new_format:
new_format = db.Data(name=cur_book.data[0].name,
book_format=self.settings['new_book_format'].upper(),
book=book_id, uncompressed_size=os.path.getsize(file_path + format_new_ext))
try:
local_db.session.merge(new_format)
local_db.session.commit()
if self.settings['new_book_format'].upper() in ['KEPUB', 'EPUB', 'EPUB3']:
ub_session = init_db_thread()
remove_synced_book(book_id, True, ub_session)
ub_session.close()
except SQLAlchemyError as e:
local_db.session.rollback()
log.error("Database error: %s", e)
local_db.session.close()
self._handleError(error_message)
return
self.results['path'] = cur_book.path
self.title = cur_book.title
self.results['title'] = self.title
if not config.config_use_google_drive:
self._handleSuccess()
return os.path.basename(file_path + format_new_ext)
else:
error_message = N_('%(format)s format not found on disk', format=format_new_ext.upper())
local_db.session.close()
log.info("ebook converter failed with error while converting book")
if not error_message:
error_message = N_('Ebook converter failed with unknown error')
else:
log.error(error_message)
self._handleError(error_message)
return

def _convert_kepubify(self, file_path, format_old_ext, format_new_ext):
if config.config_embed_metadata and config.config_binariesdir:
tmp_dir, temp_file_name = helper.do_calibre_export(self.book_id, format_old_ext[1:])
filename = os.path.join(tmp_dir, temp_file_name + format_old_ext)
temp_file_path = tmp_dir
else:
filename = file_path + format_old_ext
temp_file_path = os.path.dirname(file_path)
quotes = [1, 3]
command = [config.config_kepubifypath, filename, '-o', temp_file_path, '-i']
try:
p = process_open(command, quotes)
except OSError as e:
return 1, N_("Kepubify-converter failed: %(error)s", error=e)
self.progress = 0.01
while True:
nextline = p.stdout.readlines()
nextline = [x.strip('\n') for x in nextline if x != '\n']
for line in nextline:
log.debug(line)
if p.poll() is not None:
break

# process returncode
check = p.returncode

# move file
if check == 0:
converted_file = glob(os.path.splitext(filename)[0] + "*.kepub.epub")
if len(converted_file) == 1:
copyfile(converted_file[0], (file_path + format_new_ext))
os.unlink(converted_file[0])
else:
return 1, N_("Converted file not found or more than one file in folder %(folder)s",
folder=os.path.dirname(file_path))
return check, None

def _convert_calibre(self, file_path, format_old_ext, format_new_ext, has_cover):
path_tmp_opf = None
try:
# path_tmp_opf = self._embed_metadata()
if config.config_embed_metadata:
quotes = [5]
tmp_dir = get_temp_dir()
calibredb_binarypath = os.path.join(config.config_binariesdir, SUPPORTED_CALIBRE_BINARIES["calibredb"])
my_env = os.environ.copy()
if config.config_calibre_split:
my_env['CALIBRE_OVERRIDE_DATABASE_PATH'] = os.path.join(config.config_calibre_dir, "metadata.db")
library_path = config.config_calibre_split_dir
else:
library_path = config.config_calibre_dir

opf_command = [calibredb_binarypath, 'show_metadata', '--as-opf', str(self.book_id),
'--with-library', library_path]
p = process_open(opf_command, quotes, my_env)
p.wait()
check = p.returncode
calibre_traceback = p.stderr.readlines()
if check == 0:
path_tmp_opf = os.path.join(tmp_dir, "metadata_" + str(uuid4()) + ".opf")
with open(path_tmp_opf, 'w') as fd:
copyfileobj(p.stdout, fd)
else:
error_message = ""
for ele in calibre_traceback:
if not ele.startswith('Traceback') and not ele.startswith(' File'):
error_message = N_("Calibre failed with error: %(error)s", error=ele)
return check, error_message
quotes = [1, 2, 4, 6]
command = [config.config_converterpath, (file_path + format_old_ext),
(file_path + format_new_ext)]
if config.config_embed_metadata:
command.extend(['--from-opf', path_tmp_opf])
if has_cover:
command.extend(['--cover', os.path.join(os.path.dirname(file_path), 'cover.jpg')])
quotes_index = 3
if config.config_calibre:
parameters = config.config_calibre.split(" ")
for param in parameters:
command.append(param)
quotes.append(quotes_index)
quotes_index += 1

p = process_open(command, quotes, newlines=False)
except OSError as e:
return 1, N_("Ebook-converter failed: %(error)s", error=e)

while p.poll() is None:
nextline = p.stdout.readline()
if isinstance(nextline, bytes):
nextline = nextline.decode('utf-8', errors="ignore").strip('\r\n')
if nextline:
log.debug(nextline)
# parse progress string from calibre-converter
progress = re.search(r"(\d+)%\s.*", nextline)
if progress:
self.progress = int(progress.group(1)) / 100
if config.config_use_google_drive:
self.progress *= 0.9

# process returncode
check = p.returncode
calibre_traceback = p.stderr.readlines()
error_message = ""
for ele in calibre_traceback:
ele = ele.decode('utf-8', errors="ignore").strip('\n')
log.debug(ele)
if not ele.startswith('Traceback') and not ele.startswith(' File'):
error_message = N_("Calibre failed with error: %(error)s", error=ele)
return check, error_message

@property
def name(self):
return N_("Convert")

def __str__(self):
if self.ereader_mail:
return "Convert Book {} and mail it to {}".format(self.book_id, self.ereader_mail)
else:
return "Convert Book {}".format(self.book_id)

@property
def is_cancellable(self):
return False
# -*- coding: utf-8 -*-
|
||||
|
||||
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
|
||||
# Copyright (C) 2020 pwr
|
||||
#
|
||||
# This program is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
import os
|
||||
import re
|
||||
import glob
|
||||
from shutil import copyfile, copyfileobj
|
||||
from markupsafe import escape
|
||||
from time import time
|
||||
from uuid import uuid4
|
||||
|
||||
from sqlalchemy.exc import SQLAlchemyError
|
||||
from flask_babel import lazy_gettext as N_
|
||||
|
||||
from cps.services.worker import CalibreTask
|
||||
from cps import db, app
|
||||
from cps import logger, config
|
||||
from cps.subproc_wrapper import process_open
|
||||
from flask_babel import gettext as _
|
||||
from cps.kobo_sync_status import remove_synced_book
|
||||
from cps.ub import init_db_thread
|
||||
from cps.file_helper import get_temp_dir
|
||||
|
||||
from cps.tasks.mail import TaskEmail
|
||||
from cps import gdriveutils, helper
|
||||
from cps.constants import SUPPORTED_CALIBRE_BINARIES
|
||||
from cps.string_helper import strip_whitespaces
|
||||
|
||||
log = logger.create()
|
||||
|
||||
current_milli_time = lambda: int(round(time() * 1000))
|
||||
|
||||
|
||||
class TaskConvert(CalibreTask):
|
||||
def __init__(self, file_path, book_id, task_message, settings, ereader_mail, user=None):
|
||||
super(TaskConvert, self).__init__(task_message)
|
||||
self.worker_thread = None
|
||||
self.file_path = file_path
|
||||
self.book_id = book_id
|
||||
self.title = ""
|
||||
self.settings = settings
|
||||
self.ereader_mail = ereader_mail
|
||||
self.user = user
|
||||
|
||||
self.results = dict()
|
||||
|
||||
def run(self, worker_thread):
|
||||
df_cover = None
|
||||
cur_book = None
|
||||
self.worker_thread = worker_thread
|
||||
if config.config_use_google_drive:
|
||||
with app.app_context():
|
||||
worker_db = db.CalibreDB(app)
|
||||
cur_book = worker_db.get_book(self.book_id)
|
||||
self.title = cur_book.title
|
||||
data = worker_db.get_book_format(self.book_id, self.settings['old_book_format'])
|
||||
df = gdriveutils.getFileFromEbooksFolder(cur_book.path,
|
||||
data.name + "." + self.settings['old_book_format'].lower())
|
||||
df_cover = gdriveutils.getFileFromEbooksFolder(cur_book.path, "cover.jpg")
|
||||
if df:
|
||||
datafile_cover = None
|
||||
datafile = os.path.join(config.get_book_path(),
|
||||
cur_book.path,
|
||||
data.name + "." + self.settings['old_book_format'].lower())
|
||||
if df_cover:
|
||||
datafile_cover = os.path.join(config.get_book_path(),
|
||||
cur_book.path, "cover.jpg")
|
||||
if not os.path.exists(os.path.join(config.get_book_path(), cur_book.path)):
|
||||
os.makedirs(os.path.join(config.get_book_path(), cur_book.path))
|
||||
df.GetContentFile(datafile)
|
||||
if df_cover:
|
||||
df_cover.GetContentFile(datafile_cover)
|
||||
# worker_db.session.close()
|
||||
else:
|
||||
# ToDo Include cover in error handling
|
||||
error_message = _("%(format)s not found on Google Drive: %(fn)s",
|
||||
format=self.settings['old_book_format'],
|
||||
fn=data.name + "." + self.settings['old_book_format'].lower())
|
||||
# worker_db.session.close()
|
||||
return self._handleError(error_message)
|
||||
|
||||
filename = self._convert_ebook_format()
|
||||
if config.config_use_google_drive:
|
||||
os.remove(self.file_path + '.' + self.settings['old_book_format'].lower())
|
||||
if df_cover:
|
||||
os.remove(os.path.join(config.config_calibre_dir, cur_book.path, "cover.jpg"))
|
||||
|
||||
if filename:
|
||||
if config.config_use_google_drive:
|
||||
# Upload files to gdrive
|
||||
gdriveutils.updateGdriveCalibreFromLocal()
|
||||
self._handleSuccess()
|
||||
if self.ereader_mail:
|
||||
# if we're sending to E-Reader after converting, create a one-off task and run it immediately
|
||||
# todo: figure out how to incorporate this into the progress
|
||||
try:
|
||||
EmailText = N_(u"%(book)s send to E-Reader", book=escape(self.title))
|
||||
for email in self.ereader_mail.split(','):
|
||||
email = strip_whitespaces(email)
|
||||
worker_thread.add(self.user, TaskEmail(self.settings['subject'],
|
||||
self.results["path"],
|
||||
filename,
|
||||
self.settings,
|
||||
email,
|
||||
EmailText,
|
||||
self.settings['body'],
|
||||
id=self.book_id,
|
||||
internal=True)
|
||||
)
|
||||
except Exception as ex:
|
||||
return self._handleError(str(ex))
|
||||
|
||||
def _convert_ebook_format(self):
|
||||
error_message = None
|
||||
with app.app_context():
|
||||
local_db = db.CalibreDB(app)
|
||||
file_path = self.file_path
|
||||
book_id = self.book_id
|
||||
format_old_ext = '.' + self.settings['old_book_format'].lower()
|
||||
format_new_ext = '.' + self.settings['new_book_format'].lower()
|
||||
|
||||
# check to see if destination format already exists - or if book is in database
|
||||
# if it does - mark the conversion task as complete and return a success
|
||||
# this will allow to send to E-Reader workflow to continue to work
|
||||
if os.path.isfile(file_path + format_new_ext) or\
|
||||
local_db.get_book_format(self.book_id, self.settings['new_book_format']):
|
||||
log.info("Book id %d already converted to %s", book_id, format_new_ext)
|
||||
cur_book = local_db.get_book(book_id)
|
||||
self.title = cur_book.title
|
||||
self.results['path'] = cur_book.path
|
||||
self.results['title'] = self.title
|
||||
new_format = local_db.session.query(db.Data).filter(db.Data.book == book_id)\
|
||||
.filter(db.Data.format == self.settings['new_book_format'].upper()).one_or_none()
|
||||
if not new_format:
|
||||
new_format = db.Data(name=os.path.basename(file_path),
|
||||
book_format=self.settings['new_book_format'].upper(),
|
||||
book=book_id, uncompressed_size=os.path.getsize(file_path + format_new_ext))
|
||||
try:
|
||||
local_db.session.merge(new_format)
|
||||
local_db.session.commit()
|
||||
except SQLAlchemyError as e:
|
||||
local_db.session.rollback()
|
||||
log.error("Database error: %s", e)
|
||||
local_db.session.close()
|
||||
self._handleError(N_("Oops! Database Error: %(error)s.", error=e))
|
||||
return
|
||||
self._handleSuccess()
|
||||
local_db.session.close()
|
||||
return os.path.basename(file_path + format_new_ext)
|
||||
else:
|
||||
log.info("Book id %d - target format of %s does not exist. Moving forward with convert.",
|
||||
book_id,
|
||||
format_new_ext)
|
||||
|
||||
if config.config_kepubifypath and format_old_ext == '.epub' and format_new_ext == '.kepub':
|
||||
check, error_message = self._convert_kepubify(file_path,
|
||||
format_old_ext,
|
||||
format_new_ext)
|
||||
else:
|
||||
# check if calibre converter-executable is existing
|
||||
if not os.path.exists(config.config_converterpath):
|
||||
self._handleError(N_("Calibre ebook-convert %(tool)s not found", tool=config.config_converterpath))
|
||||
return
|
||||
has_cover = local_db.get_book(book_id).has_cover
|
||||
check, error_message = self._convert_calibre(file_path, format_old_ext, format_new_ext, has_cover)
|
||||
|
||||
if check == 0:
|
||||
cur_book = local_db.get_book(book_id)
|
||||
if os.path.isfile(file_path + format_new_ext):
|
||||
new_format = local_db.session.query(db.Data).filter(db.Data.book == book_id) \
|
||||
.filter(db.Data.format == self.settings['new_book_format'].upper()).one_or_none()
|
||||
if not new_format:
|
||||
new_format = db.Data(name=cur_book.data[0].name,
|
||||
book_format=self.settings['new_book_format'].upper(),
|
||||
book=book_id, uncompressed_size=os.path.getsize(file_path + format_new_ext))
|
||||
try:
|
||||
local_db.session.merge(new_format)
|
||||
local_db.session.commit()
|
||||
if self.settings['new_book_format'].upper() in ['KEPUB', 'EPUB', 'EPUB3']:
|
||||
ub_session = init_db_thread()
|
||||
remove_synced_book(book_id, True, ub_session)
|
||||
ub_session.close()
|
||||
except SQLAlchemyError as e:
|
||||
local_db.session.rollback()
|
||||
log.error("Database error: %s", e)
|
||||
local_db.session.close()
|
||||
self._handleError(error_message)
|
||||
return
|
||||
self.results['path'] = cur_book.path
|
||||
self.title = cur_book.title
|
||||
self.results['title'] = self.title
|
||||
if not config.config_use_google_drive:
|
||||
self._handleSuccess()
|
||||
return os.path.basename(file_path + format_new_ext)
|
||||
else:
|
||||
error_message = N_('%(format)s format not found on disk', format=format_new_ext.upper())
|
||||
local_db.session.close()
|
||||
log.info("ebook converter failed with error while converting book")
|
||||
if not error_message:
|
||||
error_message = N_('Ebook converter failed with unknown error')
|
||||
else:
|
||||
log.error(error_message)
|
||||
self._handleError(error_message)
|
||||
return
|
||||
|
||||
    def _convert_kepubify(self, file_path, format_old_ext, format_new_ext):
        if config.config_embed_metadata and config.config_binariesdir:
            tmp_dir, temp_file_name = helper.do_calibre_export(self.book_id, format_old_ext[1:])
            filename = os.path.join(tmp_dir, temp_file_name + format_old_ext)
            temp_file_path = tmp_dir
        else:
            filename = file_path + format_old_ext
            temp_file_path = os.path.dirname(file_path)
        quotes = [1, 3]
        command = [config.config_kepubifypath, filename, '-o', temp_file_path, '-i']
        try:
            p = process_open(command, quotes)
        except OSError as e:
            return 1, N_("Kepubify-converter failed: %(error)s", error=e)
        self.progress = 0.01
        while True:
            nextline = p.stdout.readlines()
            nextline = [x.strip('\n') for x in nextline if x != '\n']
            for line in nextline:
                log.debug(line)
            if p.poll() is not None:
                break

        # process returncode
        check = p.returncode

        # move file
        if check == 0:
            converted_file = glob.glob(glob.escape(os.path.splitext(filename)[0]) + "*.kepub.epub")
            if len(converted_file) == 1:
                copyfile(converted_file[0], (file_path + format_new_ext))
                os.unlink(converted_file[0])
            else:
                return 1, N_("Converted file not found or more than one file in folder %(folder)s",
                             folder=os.path.dirname(file_path))
        return check, None

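As context for the `glob.glob(glob.escape(...))` call above: `glob.escape` neutralizes the wildcard characters `*`, `?`, and `[` in the literal book path, so only the appended `*.kepub.epub` suffix acts as a pattern. A minimal sketch (the path is illustrative, not from any real library):

```python
import glob

# Escape a literal prefix containing a glob metacharacter ('['),
# then append the real wildcard suffix, as the conversion code does.
prefix = glob.escape("/books/Dune [1965]/Dune")  # '[' becomes '[[]'
pattern = prefix + "*.kepub.epub"
print(pattern)  # /books/Dune [[]1965]/Dune*.kepub.epub
```

Without the escape, a series folder like `Dune [1965]` would be parsed as a character class and silently match nothing.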
    def _convert_calibre(self, file_path, format_old_ext, format_new_ext, has_cover):
        path_tmp_opf = None
        try:
            # path_tmp_opf = self._embed_metadata()
            if config.config_embed_metadata:
                quotes = [5]
                tmp_dir = get_temp_dir()
                calibredb_binarypath = os.path.join(config.config_binariesdir, SUPPORTED_CALIBRE_BINARIES["calibredb"])
                my_env = os.environ.copy()
                if config.config_calibre_split:
                    my_env['CALIBRE_OVERRIDE_DATABASE_PATH'] = os.path.join(config.config_calibre_dir, "metadata.db")
                    library_path = config.config_calibre_split_dir
                else:
                    library_path = config.config_calibre_dir

                opf_command = [calibredb_binarypath, 'show_metadata', '--as-opf', str(self.book_id),
                               '--with-library', library_path]
                p = process_open(opf_command, quotes, my_env, newlines=False)
                lines = list()
                while p.poll() is None:
                    lines.append(p.stdout.readline())
                check = p.returncode
                calibre_traceback = p.stderr.readlines()
                if check == 0:
                    path_tmp_opf = os.path.join(tmp_dir, "metadata_" + str(uuid4()) + ".opf")
                    with open(path_tmp_opf, 'wb') as fd:
                        fd.write(b''.join(lines))
                else:
                    error_message = ""
                    for ele in calibre_traceback:
                        if not ele.startswith('Traceback') and not ele.startswith('  File'):
                            error_message = N_("Calibre failed with error: %(error)s", error=ele)
                    return check, error_message
            quotes = [1, 2]
            quotes_index = 3
            command = [config.config_converterpath, (file_path + format_old_ext),
                       (file_path + format_new_ext)]
            if config.config_embed_metadata:
                quotes.append(4)
                quotes_index = 5
                command.extend(['--from-opf', path_tmp_opf])
            if has_cover:
                quotes.append(6)
                command.extend(['--cover', os.path.join(os.path.dirname(file_path), 'cover.jpg')])
                quotes_index = 7
            if config.config_calibre:
                parameters = re.findall(r"(--[\w-]+)(?:(\s(?:(\".+\")|(?:.+?)))(?:\s|$))?",
                                        config.config_calibre, re.IGNORECASE | re.UNICODE)
                if parameters:
                    for param in parameters:
                        command.append(strip_whitespaces(param[0]))
                        quotes_index += 1
                        if param[1] != "":
                            parsed = strip_whitespaces(param[1]).strip("\"")
                            command.append(parsed)
                            quotes.append(quotes_index)
                            quotes_index += 1
            p = process_open(command, quotes, newlines=False)
        except OSError as e:
            return 1, N_("Ebook-converter failed: %(error)s", error=e)

        while p.poll() is None:
            nextline = p.stdout.readline()
            if isinstance(nextline, bytes):
                nextline = nextline.decode('utf-8', errors="ignore").strip('\r\n')
            if nextline:
                log.debug(nextline)
                # parse progress string from calibre-converter
                progress = re.search(r"(\d+)%\s.*", nextline)
                if progress:
                    self.progress = int(progress.group(1)) / 100
                    if config.config_use_google_drive:
                        self.progress *= 0.9

        # process returncode
        check = p.returncode
        calibre_traceback = p.stderr.readlines()
        error_message = ""
        for ele in calibre_traceback:
            ele = ele.decode('utf-8', errors="ignore").strip('\n')
            log.debug(ele)
            if not ele.startswith('Traceback') and not ele.startswith('  File'):
                error_message = N_("Calibre failed with error: %(error)s", error=ele)
        return check, error_message

    @property
    def name(self):
        return N_("Convert")

    def __str__(self):
        if self.ereader_mail:
            return "Convert Book {} and mail it to {}".format(self.book_id, self.ereader_mail)
        else:
            return "Convert Book {}".format(self.book_id)

    @property
    def is_cancellable(self):
        return False

@@ -18,7 +18,7 @@

from flask_babel import lazy_gettext as N_

from cps import config, logger, db, ub
from cps import config, logger, db, ub, app
from cps.services.worker import CalibreTask


@@ -26,11 +26,13 @@ class TaskReconnectDatabase(CalibreTask):
    def __init__(self, task_message=N_('Reconnecting Calibre database')):
        super(TaskReconnectDatabase, self).__init__(task_message)
        self.log = logger.create()
        self.calibre_db = db.CalibreDB(expire_on_commit=False, init=True)
        # self.calibre_db = db.CalibreDB(expire_on_commit=False, init=True)

    def run(self, worker_thread):
        self.calibre_db.reconnect_db(config, ub.app_DB_path)
        self.calibre_db.session.close()
        with app.app_context():
            calibre_db = db.CalibreDB(app)
            calibre_db.reconnect_db(config, ub.app_DB_path)
        # self.calibre_db.session.close()
        self._handleSuccess()

    @property

@@ -25,7 +25,7 @@ import mimetypes

from io import StringIO
from email.message import EmailMessage
from email.utils import formatdate, parseaddr
from email.utils import formatdate, parseaddr, make_msgid
from email.generator import Generator
from flask_babel import lazy_gettext as N_

@@ -34,7 +34,8 @@ from cps.services import gmail
from cps.embed_helper import do_calibre_export
from cps import logger, config
from cps import gdriveutils
import uuid
from cps.string_helper import strip_whitespaces


log = logger.create()

@@ -54,8 +55,8 @@ class EmailBase:
        return (code, resp)

    def send(self, strg):
        """Send `strg' to the server."""
        log.debug_no_auth('send: {}'.format(strg[:300]))
        """Send 'strg' to the server."""
        log.debug_no_auth('send: {}'.format(strg[:300]), stacklevel=2)
        if hasattr(self, 'sock') and self.sock:
            try:
                if self.transferSize:
@@ -101,7 +102,7 @@ class Email(EmailBase, smtplib.SMTP):
        smtplib.SMTP.__init__(self, *args, **kwargs)


# Class for sending ssl encrypted email with ability to get current progress, , derived from emailbase class
# Class for sending ssl encrypted email with ability to get current progress, derived from emailbase class
class EmailSSL(EmailBase, smtplib.SMTP_SSL):

    def __init__(self, *args, **kwargs):
@@ -127,9 +128,9 @@ class TaskEmail(CalibreTask):
        try:
            # Parse out the address from the From line, and then the domain from that
            from_email = parseaddr(self.settings["mail_from"])[1]
            msgid_domain = from_email.partition('@')[2].strip()
            msgid_domain = strip_whitespaces(from_email.partition('@')[2])
            # This can sometimes sneak through parseaddr if the input is malformed
            msgid_domain = msgid_domain.rstrip('>').strip()
            msgid_domain = strip_whitespaces(msgid_domain.rstrip('>'))
        except Exception:
            msgid_domain = ''
        return msgid_domain or 'calibre-web.com'
@@ -141,7 +142,7 @@ class TaskEmail(CalibreTask):
        message['To'] = self.recipient
        message['Subject'] = self.subject
        message['Date'] = formatdate(localtime=True)
        message['Message-Id'] = "{}@{}".format(uuid.uuid4(), self.get_msgid_domain())
        message['Message-ID'] = make_msgid(domain=self.get_msgid_domain())
        message.set_content(self.text.encode('UTF-8'), "text", "plain")
        if self.attachment:
            data = self._get_attachment(self.filepath, self.attachment)
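For reference, `email.utils.make_msgid` (swapped in above for the hand-built `uuid@domain` header) returns an angle-bracketed, RFC 5322-style identifier. A small sketch, with an illustrative domain:

```python
from email.utils import make_msgid

# make_msgid generates a unique id and wraps it in angle brackets,
# which the previous "{}@{}".format(uuid.uuid4(), domain) header lacked.
msgid = make_msgid(domain="calibre-web.com")
print(msgid)  # e.g. <170000000000.12345.678@calibre-web.com>
```

Picky SMTP servers and spam filters expect the bracketed form, which is one practical motivation for this change.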
@@ -168,10 +169,14 @@ class TaskEmail(CalibreTask):
            else:
                self.send_gmail_email(msg)
        except MemoryError as e:
            log.error_or_exception(e, stacklevel=3)
            log.error_or_exception(e, stacklevel=2)
            self._handleError('MemoryError sending e-mail: {}'.format(str(e)))
        except (smtplib.SMTPRecipientsRefused) as e:
            log.error_or_exception(e, stacklevel=2)
            self._handleError('Smtplib Error sending e-mail: {}'.format(
                (list(e.args[0].values())[0][1]).decode('utf-8').replace("\n", '. ')))
        except (smtplib.SMTPException, smtplib.SMTPAuthenticationError) as e:
            log.error_or_exception(e, stacklevel=3)
            log.error_or_exception(e, stacklevel=2)
            if hasattr(e, "smtp_error"):
                text = e.smtp_error.decode('utf-8').replace("\n", '. ')
            elif hasattr(e, "message"):
@@ -182,10 +187,10 @@ class TaskEmail(CalibreTask):
                text = ''
            self._handleError('Smtplib Error sending e-mail: {}'.format(text))
        except (socket.error) as e:
            log.error_or_exception(e, stacklevel=3)
            log.error_or_exception(e, stacklevel=2)
            self._handleError('Socket Error sending e-mail: {}'.format(e.strerror))
        except Exception as ex:
            log.error_or_exception(ex, stacklevel=3)
            log.error_or_exception(ex, stacklevel=2)
            self._handleError('Error sending e-mail: {}'.format(ex))

    def send_standard_email(self, msg):
@@ -268,7 +273,7 @@ class TaskEmail(CalibreTask):
            if config.config_binariesdir and config.config_embed_metadata:
                os.remove(datafile)
        except IOError as e:
            log.error_or_exception(e, stacklevel=3)
            log.error_or_exception(e, stacklevel=2)
            log.error('The requested file could not be read. Maybe wrong permissions?')
            return None
        return data

@@ -19,7 +19,7 @@
import os
from lxml import etree

from cps import config, db, gdriveutils, logger
from cps import config, db, gdriveutils, logger, app
from cps.services.worker import CalibreTask
from flask_babel import lazy_gettext as N_

@@ -34,7 +34,7 @@ class TaskBackupMetadata(CalibreTask):
                 task_message=N_('Backing up Metadata')):
        super(TaskBackupMetadata, self).__init__(task_message)
        self.log = logger.create()
        self.calibre_db = db.CalibreDB(expire_on_commit=False, init=True)
        # self.calibre_db = db.CalibreDB(expire_on_commit=False, init=True)
        self.export_language = export_language
        self.translated_title = translated_title
        self.set_dirty = set_dirty
@@ -46,47 +46,51 @@ class TaskBackupMetadata(CalibreTask):
            self.backup_metadata()

    def set_all_books_dirty(self):
        try:
            books = self.calibre_db.session.query(db.Books).all()
            for book in books:
                self.calibre_db.set_metadata_dirty(book.id)
            self.calibre_db.session.commit()
            self._handleSuccess()
        except Exception as ex:
            self.log.debug('Error adding book for backup: ' + str(ex))
            self._handleError('Error adding book for backup: ' + str(ex))
            self.calibre_db.session.rollback()
        self.calibre_db.session.close()
        with app.app_context():
            calibre_dbb = db.CalibreDB(app)
            try:
                books = calibre_dbb.session.query(db.Books).all()
                for book in books:
                    calibre_dbb.set_metadata_dirty(book.id)
                calibre_dbb.session.commit()
                self._handleSuccess()
            except Exception as ex:
                self.log.debug('Error adding book for backup: ' + str(ex))
                self._handleError('Error adding book for backup: ' + str(ex))
                calibre_dbb.session.rollback()
            # self.calibre_db.session.close()

    def backup_metadata(self):
        try:
            metadata_backup = self.calibre_db.session.query(db.Metadata_Dirtied).all()
            custom_columns = (self.calibre_db.session.query(db.CustomColumns)
                              .filter(db.CustomColumns.mark_for_delete == 0)
                              .filter(db.CustomColumns.datatype.notin_(db.cc_exceptions))
                              .order_by(db.CustomColumns.label).all())
            count = len(metadata_backup)
            i = 0
            for backup in metadata_backup:
                book = self.calibre_db.session.query(db.Books).filter(db.Books.id == backup.book).one_or_none()
                self.calibre_db.session.query(db.Metadata_Dirtied).filter(
                    db.Metadata_Dirtied.book == backup.book).delete()
                self.calibre_db.session.commit()
                if book:
                    self.open_metadata(book, custom_columns)
                else:
                    self.log.error("Book {} not found in database".format(backup.book))
                i += 1
                self.progress = (1.0 / count) * i
            self._handleSuccess()
            self.calibre_db.session.close()
        with app.app_context():
            try:
                calibre_dbb = db.CalibreDB(app)
                metadata_backup = calibre_dbb.session.query(db.Metadata_Dirtied).all()
                custom_columns = (calibre_dbb.session.query(db.CustomColumns)
                                  .filter(db.CustomColumns.mark_for_delete == 0)
                                  .filter(db.CustomColumns.datatype.notin_(db.cc_exceptions))
                                  .order_by(db.CustomColumns.label).all())
                count = len(metadata_backup)
                i = 0
                for backup in metadata_backup:
                    book = calibre_dbb.session.query(db.Books).filter(db.Books.id == backup.book).one_or_none()
                    calibre_dbb.session.query(db.Metadata_Dirtied).filter(
                        db.Metadata_Dirtied.book == backup.book).delete()
                    calibre_dbb.session.commit()
                    if book:
                        self.open_metadata(book, custom_columns)
                    else:
                        self.log.error("Book {} not found in database".format(backup.book))
                    i += 1
                    self.progress = (1.0 / count) * i
                self._handleSuccess()
                # self.calibre_db.session.close()

        except Exception as ex:
            b = "NaN" if not hasattr(book, 'id') else book.id
            self.log.debug('Error creating metadata backup for book {}: '.format(b) + str(ex))
            self._handleError('Error creating metadata backup: ' + str(ex))
            self.calibre_db.session.rollback()
            self.calibre_db.session.close()
            except Exception as ex:
                b = "NaN" if not hasattr(book, 'id') else book.id
                self.log.debug('Error creating metadata backup for book {}: '.format(b) + str(ex))
                self._handleError('Error creating metadata backup: ' + str(ex))
                calibre_dbb.session.rollback()
            # self.calibre_db.session.close()

    def open_metadata(self, book, custom_columns):
        # package = self.create_new_metadata_backup(book, custom_columns)

@@ -20,11 +20,11 @@ import os
from shutil import copyfile, copyfileobj
from urllib.request import urlopen
from io import BytesIO
from datetime import datetime, timezone

from .. import constants
from cps import config, db, fs, gdriveutils, logger, ub
from cps import config, db, fs, gdriveutils, logger, ub, app
from cps.services.worker import CalibreTask, STAT_CANCELLED, STAT_ENDED
from datetime import datetime
from sqlalchemy import func, text, or_
from flask_babel import lazy_gettext as N_

@@ -36,7 +36,7 @@ except (ImportError, RuntimeError) as e:


def get_resize_height(resolution):
    return int(225 * resolution)
    return int(255 * resolution)


def get_resize_width(resolution, original_width, original_height):
@@ -73,7 +73,8 @@ class TaskGenerateCoverThumbnails(CalibreTask):
        self.cache = fs.FileSystem()
        self.resolutions = [
            constants.COVER_THUMBNAIL_SMALL,
            constants.COVER_THUMBNAIL_MEDIUM
            constants.COVER_THUMBNAIL_MEDIUM,
            constants.COVER_THUMBNAIL_LARGE
        ]

    def run(self, worker_thread):
@@ -113,9 +114,10 @@ class TaskGenerateCoverThumbnails(CalibreTask):
    @staticmethod
    def get_books_with_covers(book_id=-1):
        filter_exp = (db.Books.id == book_id) if book_id != -1 else True
        calibre_db = db.CalibreDB(expire_on_commit=False, init=True)
        books_cover = calibre_db.session.query(db.Books).filter(db.Books.has_cover == 1).filter(filter_exp).all()
        calibre_db.session.close()
        with app.app_context():
            calibre_db = db.CalibreDB(app)  # , expire_on_commit=False, init=True)
            books_cover = calibre_db.session.query(db.Books).filter(db.Books.has_cover == 1).filter(filter_exp).all()
            # calibre_db.session.close()
        return books_cover

    def get_book_cover_thumbnails(self, book_id):
@@ -123,7 +125,7 @@ class TaskGenerateCoverThumbnails(CalibreTask):
            .query(ub.Thumbnail) \
            .filter(ub.Thumbnail.type == constants.THUMBNAIL_TYPE_COVER) \
            .filter(ub.Thumbnail.entity_id == book_id) \
            .filter(or_(ub.Thumbnail.expiration.is_(None), ub.Thumbnail.expiration > datetime.utcnow())) \
            .filter(or_(ub.Thumbnail.expiration.is_(None), ub.Thumbnail.expiration > datetime.now(timezone.utc))) \
            .all()

    def create_book_cover_thumbnails(self, book):
@@ -165,7 +167,7 @@ class TaskGenerateCoverThumbnails(CalibreTask):
        self.app_db_session.rollback()

    def update_book_cover_thumbnail(self, book, thumbnail):
        thumbnail.generated_at = datetime.utcnow()
        thumbnail.generated_at = datetime.now(timezone.utc)

        try:
            self.app_db_session.commit()
@@ -197,9 +199,11 @@ class TaskGenerateCoverThumbnails(CalibreTask):
                    img.format = thumbnail.format
                    img.save(filename=filename)
            else:
                with open(filename, 'rb') as fd:
                stream.seek(0)
                with open(filename, 'wb') as fd:
                    copyfileobj(stream, fd)


        except Exception as ex:
            # Bubble exception to calling function
            self.log.debug('Error generating thumbnail file: ' + str(ex))
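The recurring `datetime.utcnow()` to `datetime.now(timezone.utc)` substitution in these hunks trades a naive timestamp for a timezone-aware one, so comparisons such as the thumbnail-expiration filter never mix aware and naive values. A minimal illustration:

```python
from datetime import datetime, timezone

naive = datetime.utcnow()           # tzinfo is None (deprecated since Python 3.12)
aware = datetime.now(timezone.utc)  # carries timezone.utc

print(naive.tzinfo)  # None
print(aware.tzinfo)  # UTC
```

Comparing a naive and an aware `datetime` raises `TypeError`, so once the `expiration` column stores aware values, every comparison site has to follow suit, which is why the same change appears in several hunks.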
@@ -244,7 +248,7 @@ class TaskGenerateSeriesThumbnails(CalibreTask):
        super(TaskGenerateSeriesThumbnails, self).__init__(task_message)
        self.log = logger.create()
        self.app_db_session = ub.get_new_session_instance()
        self.calibre_db = db.CalibreDB(expire_on_commit=False, init=True)
        # self.calibre_db = db.CalibreDB(expire_on_commit=False, init=True)
        self.cache = fs.FileSystem()
        self.resolutions = [
            constants.COVER_THUMBNAIL_SMALL,
@@ -252,58 +256,60 @@ class TaskGenerateSeriesThumbnails(CalibreTask):
        ]

    def run(self, worker_thread):
        if self.calibre_db.session and use_IM and self.stat != STAT_CANCELLED and self.stat != STAT_ENDED:
            self.message = 'Scanning Series'
            all_series = self.get_series_with_four_plus_books()
            count = len(all_series)
        with app.app_context():
            calibre_db = db.CalibreDB(app)
            if calibre_db.session and use_IM and self.stat != STAT_CANCELLED and self.stat != STAT_ENDED:
                self.message = 'Scanning Series'
                all_series = self.get_series_with_four_plus_books(calibre_db)
                count = len(all_series)

            total_generated = 0
            for i, series in enumerate(all_series):
                generated = 0
                series_thumbnails = self.get_series_thumbnails(series.id)
                series_books = self.get_series_books(series.id)
                total_generated = 0
                for i, series in enumerate(all_series):
                    generated = 0
                    series_thumbnails = self.get_series_thumbnails(series.id)
                    series_books = self.get_series_books(series.id, calibre_db)

                # Generate new thumbnails for missing covers
                resolutions = list(map(lambda t: t.resolution, series_thumbnails))
                missing_resolutions = list(set(self.resolutions).difference(resolutions))
                for resolution in missing_resolutions:
                    generated += 1
                    self.create_series_thumbnail(series, series_books, resolution)

                # Replace outdated or missing thumbnails
                for thumbnail in series_thumbnails:
                    if any(book.last_modified > thumbnail.generated_at for book in series_books):
                    # Generate new thumbnails for missing covers
                    resolutions = list(map(lambda t: t.resolution, series_thumbnails))
                    missing_resolutions = list(set(self.resolutions).difference(resolutions))
                    for resolution in missing_resolutions:
                        generated += 1
                        self.update_series_thumbnail(series_books, thumbnail)
                        self.create_series_thumbnail(series, series_books, resolution)

                    elif not self.cache.get_cache_file_exists(thumbnail.filename, constants.CACHE_TYPE_THUMBNAILS):
                        generated += 1
                        self.update_series_thumbnail(series_books, thumbnail)
                    # Replace outdated or missing thumbnails
                    for thumbnail in series_thumbnails:
                        if any(book.last_modified > thumbnail.generated_at for book in series_books):
                            generated += 1
                            self.update_series_thumbnail(series_books, thumbnail)

                # Increment the progress
                self.progress = (1.0 / count) * i
                        elif not self.cache.get_cache_file_exists(thumbnail.filename, constants.CACHE_TYPE_THUMBNAILS):
                            generated += 1
                            self.update_series_thumbnail(series_books, thumbnail)

                if generated > 0:
                    total_generated += generated
                    self.message = N_('Generated {0} series thumbnails').format(total_generated)
                    # Increment the progress
                    self.progress = (1.0 / count) * i

                # Check if job has been cancelled or ended
                if self.stat == STAT_CANCELLED:
                    self.log.info(f'GenerateSeriesThumbnails task has been cancelled.')
                    return
                    if generated > 0:
                        total_generated += generated
                        self.message = N_('Generated {0} series thumbnails').format(total_generated)

                if self.stat == STAT_ENDED:
                    self.log.info(f'GenerateSeriesThumbnails task has been ended.')
                    return
                    # Check if job has been cancelled or ended
                    if self.stat == STAT_CANCELLED:
                        self.log.info(f'GenerateSeriesThumbnails task has been cancelled.')
                        return

            if total_generated == 0:
                self.self_cleanup = True
                    if self.stat == STAT_ENDED:
                        self.log.info(f'GenerateSeriesThumbnails task has been ended.')
                        return

            self._handleSuccess()
            self.app_db_session.remove()
                if total_generated == 0:
                    self.self_cleanup = True

    def get_series_with_four_plus_books(self):
        return self.calibre_db.session \
                self._handleSuccess()
                self.app_db_session.remove()

    def get_series_with_four_plus_books(self, calibre_db):
        return calibre_db.session \
            .query(db.Series) \
            .join(db.books_series_link) \
            .join(db.Books) \
@@ -312,8 +318,8 @@ class TaskGenerateSeriesThumbnails(CalibreTask):
            .having(func.count('book_series_link') > 3) \
            .all()

    def get_series_books(self, series_id):
        return self.calibre_db.session \
    def get_series_books(self, series_id, calibre_db):
        return calibre_db.session \
            .query(db.Books) \
            .join(db.books_series_link) \
            .join(db.Series) \
@@ -322,12 +328,12 @@ class TaskGenerateSeriesThumbnails(CalibreTask):
            .all()

    def get_series_thumbnails(self, series_id):
        return self.app_db_session \
            .query(ub.Thumbnail) \
            .filter(ub.Thumbnail.type == constants.THUMBNAIL_TYPE_SERIES) \
            .filter(ub.Thumbnail.entity_id == series_id) \
            .filter(or_(ub.Thumbnail.expiration.is_(None), ub.Thumbnail.expiration > datetime.utcnow())) \
            .all()
        return (self.app_db_session
                .query(ub.Thumbnail)
                .filter(ub.Thumbnail.type == constants.THUMBNAIL_TYPE_SERIES)
                .filter(ub.Thumbnail.entity_id == series_id)
                .filter(or_(ub.Thumbnail.expiration.is_(None), ub.Thumbnail.expiration > datetime.now(timezone.utc)))
                .all())

    def create_series_thumbnail(self, series, series_books, resolution):
        thumbnail = ub.Thumbnail()
@@ -346,7 +352,7 @@ class TaskGenerateSeriesThumbnails(CalibreTask):
        self.app_db_session.rollback()

    def update_series_thumbnail(self, series_books, thumbnail):
        thumbnail.generated_at = datetime.utcnow()
        thumbnail.generated_at = datetime.now(timezone.utc)

        try:
            self.app_db_session.commit()
@@ -459,13 +465,15 @@ class TaskClearCoverThumbnailCache(CalibreTask):

    def run(self, worker_thread):
        if self.app_db_session:
            if self.book_id == 0:  # delete superfluous thumbnails
                calibre_db = db.CalibreDB(expire_on_commit=False, init=True)
                thumbnails = (calibre_db.session.query(ub.Thumbnail)
                              .join(db.Books, ub.Thumbnail.entity_id == db.Books.id, isouter=True)
                              .filter(db.Books.id==None)
                              .all())
                calibre_db.session.close()
            # delete superfluous thumbnails
            if self.book_id == 0:
                with app.app_context():
                    calibre_db = db.CalibreDB(app)
                    thumbnails = (calibre_db.session.query(ub.Thumbnail)
                                  .join(db.Books, ub.Thumbnail.entity_id == db.Books.id, isouter=True)
                                  .filter(db.Books.id==None)
                                  .all())
                    # calibre_db.session.close()
            elif self.book_id > 0:  # make sure single book is selected
                thumbnails = self.get_thumbnails_for_book(self.book_id)
            if self.book_id < 0:

@@ -62,18 +62,16 @@
              <a class="author-name" href="{{url_for('web.books_list', data='author', sort_param='stored', book_id=author.id) }}">{{author.name.replace('|',',')|shortentitle(30)}}</a>
            {% endif %}
          {% endfor %}
          {% for format in entry.Books.data %}
            {% if format.format|lower in g.constants.EXTENSIONS_AUDIO %}
          {% if entry.Books.data|music %}
            <span class="glyphicon glyphicon-music"></span>
          {% endif %}
            {% endfor %}
          {% endif %}
        </p>
        {% if entry.Books.series.__len__() > 0 %}
          <p class="series">
            <a href="{{url_for('web.books_list', data='series', sort_param='stored', book_id=entry.Books.series[0].id )}}">
              {{entry.Books.series[0].name}}
            </a>
            ({{entry.Books.series_index|formatseriesindex}})
            ({{entry.Books.series_index|formatfloat(2)}})
          </p>
        {% endif %}
        {% if entry.Books.ratings.__len__() > 0 %}
@@ -124,7 +122,7 @@
          <a href="{{url_for('web.books_list', data='series', sort_param='stored', book_id=entry.series[0].id )}}">
            {{entry.series[0].name}}
          </a>
          ({{entry.series_index|formatseriesindex}})
          ({{entry.series_index|formatfloat(2)}})
        </p>
      {% endif %}
      <div class="rating">

81
cps/templates/basic_detail.html
Normal file
@@ -0,0 +1,81 @@
{% extends "basic_layout.html" %}

{% block body %}
<div>

  <h2 id="title">{{ entry.title }}</h2>
  <div>
    {% for author in entry.ordered_authors %}
      <p>{{ author.name.replace("|",",") }}</p>
    {% endfor %}
  </div>

  <div class="cover">
    <img title="{{ entry.title }}" src="{{ url_for('web.get_cover', book_id=entry.id, resolution='og', c=entry|last_modified) }}"/>
  </div>

  {% if current_user.role_download() %}
    {% if entry.data|length %}
      <div>
        <h2>Download</h2>
        {% for format in entry.data %}
          <p>
            <a href="{{ url_for('web.download_link', book_id=entry.id, book_format=format.format|lower, anyname=entry.id|string+'.'+format.format|lower) }}">
              {{ format.format }} ({{ format.uncompressed_size|filesizeformat }})</a>
          </p>
        {% endfor %}
      </div>
    {% endif %}
  {% endif %}

  <h2>Details</h2>

  {% if entry.series|length > 0 %}
    <p>{{ _("Book %(index)s of %(range)s", index=entry.series_index | formatfloat(2), range=(entry.series[0].name)|safe) }}</p>
  {% endif %}

  {% if entry.languages|length > 0 %}
    <div>
      <p>
        <span>
          {{_('Language')}}: {% for language in entry.languages %}{{language.language_name}}{% if not loop.last %}, {% endif %}{% endfor %}
        </span>
      </p>
    </div>
  {% endif %}

  {% if entry.identifiers|length > 0 %}
    <div>
      <p>
        <span></span>
        {% for identifier in entry.identifiers %}
          <p>{{ identifier.format_type() }}: {{ identifier|escape }}</p>
        {% endfor %}
      </p>
    </div>
  {% endif %}

  {% if entry.publishers|length > 0 %}
    <div>
      <p>
        <span>{{ _('Publisher') }}:
          <span>{{ entry.publishers[0].name }}</span>
        </span>
      </p>
    </div>
  {% endif %}

  {% if (entry.pubdate|string)[:10] != '0101-01-01' %}
    <div>
      <p>{{ _('Published') }}: {{ entry.pubdate|formatdate }} </p>
    </div>
  {% endif %}

  {% if entry.comments|length > 0 and entry.comments[0].text|length > 0 %}
    <div>
      <h2 id="decription">{{ _('Description:') }}</h2>
      {{ entry.comments[0].text|safe }}
    </div>
  {% endif %}
</div>
{% endblock %}
32
cps/templates/basic_index.html
Normal file
@@ -0,0 +1,32 @@
{% extends "basic_layout.html" %}
{% block body %}

<div class="pagination">
  <div>
    {% if pagination.has_prev %}
      <a href="{{ (pagination.page - 1)|url_for_other_page }}">« {{_('Previous')}}</a>
    {% endif %}
  </div>
  <div>
    {% if pagination.has_next %}
      <a href="{{ (pagination.page + 1)|url_for_other_page }}">{{_('Next')}} »</a>
    {% endif %}
  </div>
</div>

{% if entries|length < 1 %}
  <p>{{_('No Results Found')}}</p>
{% endif %}

{% for entry in entries %}
  {% if entry.Books.authors %}
    {% set author = entry.Books.authors[0].name.replace('|',',')|shortentitle(30) %}
  {% else %}
    {% set author = '' %}
  {% endif %}
  <a href="{{ url_for('basic.show_book', book_id=entry.Books.id) }}">
    <p class="listing" title="{{entry.Books.title}}">{{ author }} - {{entry.Books.title|shortentitle}}</p>
  </a>
{% endfor %}

{% endblock %}
47
cps/templates/basic_layout.html
Normal file
@@ -0,0 +1,47 @@
<!DOCTYPE html>
<html lang="{{ current_user.locale }}">

<head>
  <title>{{instance}} | {{title}}</title>
  <meta charset="utf-8">
  <meta name='viewport' content='initial-scale=1,maximum-scale=5,user-scalable=no' />
  <link href="{{ url_for('static', filename='css/basic.css') }}" rel="stylesheet" media="screen">
</head>

<body>
  <div>
    <div>
      {% if current_user.is_authenticated or g.allow_anonymous %}
        <nav>
          <a href="{{url_for('basic.index')}}">
            <span><h1>{{_('Home')}}</h1></span>
          </a>
          <div class="search">
            <form role="search" action="{{url_for('basic.index')}}" method="GET">
              <input type="text" id="query" name="query" placeholder="{{_('Search Library')}}" value="{{ searchterm }}">
              <span>
                <button type="submit" id="query_submit">{{_('Search')}}</button>
              </span>
            </form>
          </div>
          {% if not current_user.is_anonymous %}
            <a href="{{url_for('web.logout')}}">
              <span>{{_('Logout')}}</span>
            </a>
          {% endif %}
        </nav>
        <div class="theme">
          <a href="{{url_for('web.index')}}">
            <span>{{_('Normal Theme')}}</span>
          </a>
        </div>
      {% endif %}
    </div>
  </div>
  <div class="body">
    {% block body %}
    {% endblock %}
  </div>
</body>

</html>
@@ -47,42 +47,37 @@
</form>
</div>
{% endif %}
{% if current_user.role_upload() and g.allow_upload %}

<div class="text-center more-stuff"><!--h4 aria-label="Upload new book format"></h4-->
<form id="form-upload-format" action="{{ url_for('edit-book.upload') }}" data-title="{{_('Uploading...')}}" data-footer="{{_('Close')}}" data-failed="{{_('Error')}}" data-message="{{_('Upload done, processing, please wait...')}}" method="post" enctype="multipart/form-data">
<div class="text-center">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
<input type="hidden" name="book_id" value="{{ book.id }}">
<div role="group" aria-label="Upload new book format">
<label class="btn btn-primary btn-file" for="btn-upload-format">{{ _('Upload Format') }}</label>
<div class="upload-format-input-text" id="upload-format"></div>
<input id="btn-upload-format" name="btn-upload-format" type="file" accept="{% for format in accept %}.{% if format != ''%}{{format}}{% else %}*{% endif %}{{ ',' if not loop.last }}{% endfor %}" multiple>
</div>
</div>
</form>
</div>
{% endif %}
</div>
<form role="form" action="{{ url_for('edit-book.edit_book', book_id=book.id) }}" method="post" enctype="multipart/form-data" id="book_edit_frm">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
<div class="col-sm-9 col-xs-12">
<div class="form-group">
<label for="book_title">{{_('Book Title')}}</label>
<input type="text" class="form-control" name="book_title" id="book_title" value="{{book.title}}">
<label for="title">{{_('Book Title')}}</label>
<input type="text" class="form-control" name="title" id="title" value="{{book.title}}">
</div>
<div class="text-center">
<button type="button" class="btn btn-default" id="xchange" ><span class="glyphicon glyphicon-arrow-up"></span><span class="glyphicon glyphicon-arrow-down"></span></button>
</div>
<div id="author_div" class="form-group">
<label for="bookAuthor">{{_('Author')}}</label>
<input type="text" class="form-control typeahead" autocomplete="off" name="author_name" id="bookAuthor" value="{{' & '.join(authors)}}">
<input type="text" class="form-control typeahead" autocomplete="off" name="authors" id="authors" value="{{' & '.join(authors)}}">
</div>

<div class="form-group">
<label for="description">{{_('Description')}}</label>
<textarea class="form-control" name="description" id="description" rows="7">{% if book.comments %}{{book.comments[0].text}}{%endif%}</textarea>
</div>

<div class="form-group">
<label>{{_('Identifiers')}}</label>
<table class="table" id="identifier-table">
{% for identifier in book.identifiers %}
<tr>
<td><input type="text" class="form-control" name="identifier-type-{{identifier.type}}" value="{{identifier.type}}" required="required" placeholder="{{_('Identifier Type')}}"></td>
<td><input type="text" class="form-control" name="identifier-val-{{identifier.type}}" value="{{identifier.val}}" required="required" placeholder="{{_('Identifier Value')}}"></td>
<td><a class="btn btn-default" onclick="removeIdentifierLine(this)">{{_('Remove')}}</a></td>
</tr>
{% endfor %}
</table>
<a id="add-identifier-line" class="btn btn-default">{{_('Add Identifier')}}</a>
</div>

<div class="form-group">
<label for="tags">{{_('Tags')}}</label>
<input type="text" class="form-control typeahead" autocomplete="off" name="tags" id="tags" value="{% for tag in book.tags %}{{tag.name.strip()}}{% if not loop.last %}, {% endif %}{% endfor %}">
@@ -93,23 +88,8 @@
</div>
<div class="form-group">
<label for="series_index">{{_('Series ID')}}</label>
<input type="number" step="0.01" min="0" placeholder="1" class="form-control" name="series_index" id="series_index" value="{{book.series_index}}">
<input type="number" step="0.01" min="0" placeholder="1" class="form-control" name="series_index" id="series_index" value="{{book.series_index|formatfloat(2)}}">
</div>
<div class="form-group">
<label for="rating">{{_('Rating')}}</label>
<input type="number" name="rating" id="rating" class="rating input-lg" data-clearable="" value="{% if book.ratings %}{{(book.ratings[0].rating / 2)|int}}{% endif %}">
</div>
{% if current_user.role_upload() and g.allow_upload %}
<div class="form-group">
<label for="cover_url">{{_('Fetch Cover from URL (JPEG - Image will be downloaded and stored in database)')}}</label>
<input type="text" class="form-control" name="cover_url" id="cover_url" value="">
</div>
<div class="form-group" aria-label="Upload cover from local drive">
<label class="btn btn-primary btn-file" for="btn-upload-cover">{{ _('Upload Cover from Local Disk') }}</label>
<div class="upload-cover-input-text" id="upload-cover"></div>
<input id="btn-upload-cover" name="btn-upload-cover" type="file" accept=".jpg, .jpeg, .png, .webp">
</div>
{% endif %}
<label for="pubdate">{{_('Published Date')}}</label>
<div class="form-group input-group">
<input type="text" class="datepicker form-control" name="pubdate" id="pubdate" value="{% if book.pubdate %}{{book.pubdate|formatdateinput}}{% endif %}">
@@ -126,6 +106,39 @@
<label for="languages">{{_('Language')}}</label>
<input type="text" class="form-control typeahead" autocomplete="off" name="languages" id="languages" value="{% for language in book.languages %}{{language.language_name.strip()}}{% if not loop.last %}, {% endif %}{% endfor %}">
</div>
<div class="form-group">
<label for="rating">{{_('Rating')}}</label>
<input type="number" name="rating" id="rating" class="rating input-lg" data-clearable="" value="{% if book.ratings %}{{(book.ratings[0].rating / 2)|int}}{% endif %}">
</div>
<div class="form-group">
<label for="comments">{{_('Description')}}</label>
<textarea class="form-control" name="comments" id="comments" rows="7">{% if book.comments %}{{book.comments[0].text}}{%endif%}</textarea>
</div>
<div class="form-group">
<label>{{_('Identifiers')}}</label>
<table class="table" id="identifier-table"><tbody>
{% for identifier in book.identifiers %}
<tr>
<td><input type="text" class="form-control" name="identifier-type-{{identifier.type}}" value="{{identifier.type}}" required="required" placeholder="{{_('Identifier Type')}}"></td>
<td><input type="text" class="form-control" name="identifier-val-{{identifier.type}}" value="{{identifier.val}}" required="required" placeholder="{{_('Identifier Value')}}"></td>
<td><a class="btn btn-default" onclick="removeIdentifierLine(this)">{{_('Remove')}}</a></td>
</tr>
{% endfor %}
</tbody>
</table>
<a id="add-identifier-line" class="btn btn-default">{{_('Add Identifier')}}</a>
</div>
{% if current_user.role_upload() and g.allow_upload %}
<div class="form-group">
<label for="cover_url">{{_('Fetch Cover from URL (JPEG - Image will be downloaded and stored in database)')}}</label>
<input type="text" class="form-control" name="cover_url" id="cover_url" value="">
</div>
<div class="form-group" aria-label="Upload cover from local drive">
<label class="btn btn-primary btn-file" for="btn-upload-cover">{{ _('Upload Cover from Local Disk') }}</label>
<div class="upload-cover-input-text" id="upload-cover"></div>
<input id="btn-upload-cover" name="btn-upload-cover" type="file" accept=".jpg, .jpeg, .png, .webp">
</div>
{% endif %}
{% if cc|length > 0 %}
{% for c in cc %}
<div class="form-group">
@@ -196,13 +209,6 @@
</div>
{% endfor %}
{% endif %}
{% if current_user.role_upload() and g.allow_upload %}
<div role="group" aria-label="Upload new book format">
<label class="btn btn-primary btn-file" for="btn-upload-format">{{ _('Upload Format') }}</label>
<div class="upload-format-input-text" id="upload-format"></div>
<input id="btn-upload-format" name="btn-upload-format" type="file">
</div>
{% endif %}

<div class="checkbox">
<label>
@@ -288,7 +294,7 @@
'no_result': {{_('No Result(s) found! Please try another keyword.')|safe|tojson}},
'author': {{_('Author')|safe|tojson}},
'publisher': {{_('Publisher')|safe|tojson}},
'description': {{_('Description')|safe|tojson}},
'comments': {{_('Description')|safe|tojson}},
'source': {{_('Source')|safe|tojson}},
};
var language = '{{ current_user.locale }}';
@@ -66,7 +66,7 @@
{{ text_table_row('authors', _('Enter Authors'),_('Authors'), true, true) }}
{{ text_table_row('tags', _('Enter Categories'),_('Categories'), false, true) }}
{{ text_table_row('series', _('Enter Series'),_('Series'), false, true) }}
<th data-field="series_index" id="series_index" data-visible="{{visiblility.get('series_index')}}" data-edit-validate="{{ _('This Field is Required') }}" data-sortable="true" {% if current_user.role_edit() %} data-editable-type="number" data-editable-placeholder="1" data-editable-step="0.01" data-editable-min="0" data-editable-url="{{ url_for('edit-book.edit_list_book', param='series_index')}}" data-edit="true" data-editable-title="{{_('Enter Title')}}"{% endif %}>{{_('Series Index')}}</th>
<th data-field="series_index" id="series_index" data-visible="{{visiblility.get('series_index')}}" data-formatter="seriesIndexFormatter" data-edit-validate="{{ _('This Field is Required') }}" data-sortable="true" {% if current_user.role_edit() %} data-editable-type="number" data-editable-placeholder="1" data-editable-step="0.01" data-editable-min="0" data-editable-url="{{ url_for('edit-book.edit_list_book', param='series_index')}}" data-edit="true" data-editable-title="{{_('Enter Title')}}"{% endif %}>{{_('Series Index')}}</th>
{{ text_table_row('languages', _('Enter Languages'),_('Languages'), false, true) }}
<!--th data-field="pubdate" data-type="date" data-visible="{{visiblility.get('pubdate')}}" data-viewformat="dd.mm.yyyy" id="pubdate" data-sortable="true">{{_('Publishing Date')}}</th-->
{{ text_table_row('publishers', _('Enter Publishers'),_('Publishers'), false, true) }}
@@ -62,7 +62,7 @@
{% endif %}
{% endif %}
<div class="col-sm-12">
<div id="db_submit" name="submit" class="btn btn-default">{{_('Save')}}</div>
<button id="db_submit" type="submit" class="btn btn-default">{{_('Save')}}</button>
<a href="{{ url_for('admin.admin') }}" id="config_back" class="btn btn-default">{{_('Cancel')}}</a>
</div>
</form>
2
cps/templates/config_edit.html
Executable file → Normal file
@@ -411,7 +411,7 @@
</div>
<div class="form-group" style="margin-left:10px;">
<input type="checkbox" id="config_password_character" name="config_password_character" {% if config.config_password_character %}checked{% endif %}>
<label for="config_password_lower">{{_('Enforce characters (needed For Chinese/Japanese/Korean Characters)')}}</label>
<label for="config_password_character">{{_('Enforce characters (needed For Chinese/Japanese/Korean Characters)')}}</label>
</div>
<div class="form-group" style="margin-left:10px;">
<input type="checkbox" id="config_password_special" name="config_password_special" {% if config.config_password_special %}checked{% endif %}>
@@ -1,4 +1,12 @@
{% extends is_xhr|yesno("fragment.html", "layout.html") %}
{% block header %}
<meta property="og:type" content="book" />
<meta property="og:title" content="{{ entry.title|truncate(35) }}" />
{% if entry.comments|length > 0 and entry.comments[0].text|length > 0 %}
<meta property="og:description" content="{{ entry.comments[0].text|striptags|truncate(65) }}" />
<meta property="og:image" content="{{url_for('web.get_cover', book_id=entry.id, resolution='og', c=entry|last_modified)}}" />
{% endif %}
{% endblock %}
{% block body %}
<div class="single">
<div class="row">
@@ -20,7 +28,7 @@
{{ _('Download') }} :
</button>
{% for format in entry.data %}
<a href="{{ url_for('web.download_link', book_id=entry.id, book_format=format.format|lower, anyname=entry.id|string+'.'+format.format|lower) }}"
<a href="{{ url_for('web.download_link', book_id=entry.id, book_format=format.format|lower, anyname=entry.id|string+'.'+format.format|lower|replace('kepub', 'kepub.epub')) }}"
id="btnGroupDrop1{{ format.format|lower }}" class="btn btn-primary"
role="button">
<span class="glyphicon glyphicon-download"></span>{{ format.format }}
@@ -36,7 +44,7 @@
<ul class="dropdown-menu" aria-labelledby="btnGroupDrop1">
{% for format in entry.data %}
<li>
<a href="{{ url_for('web.download_link', book_id=entry.id, book_format=format.format|lower, anyname=entry.id|string+'.'+format.format|lower) }}">{{ format.format }}
<a href="{{ url_for('web.download_link', book_id=entry.id, book_format=format.format|lower, anyname=entry.id|string+'.'+format.format|lower|replace('kepub', 'kepub.epub')) }}">{{ format.format }}
({{ format.uncompressed_size|filesizeformat }})</a></li>
{% endfor %}
</ul>
@@ -147,7 +155,7 @@
</div>
{% endif %}
{% if entry.series|length > 0 %}
<p>{{ _("Book %(index)s of %(range)s", index=entry.series_index | formatfloat(2), range=(url_for('web.books_list', data='series', sort_param='stored', book_id=entry.series[0].id)|escapedlink(entry.series[0].name))|safe) }}</p>
<p>{{ _("Book %(index)s of %(range)s", index=entry.series_index|formatfloat(2), range=(url_for('web.books_list', data='series', sort_param='stored', book_id=entry.series[0].id)|escapedlink(entry.series[0].name))|safe) }}</p>

{% endif %}
@@ -70,7 +70,7 @@
{% endif %}
{% for format in entry.Books.data %}
<link rel="http://opds-spec.org/acquisition" href="{{ url_for('opds.opds_download_link', book_id=entry.Books.id, book_format=format.format|lower)}}"
length="{{format.uncompressed_size}}" mtime="{{entry.Books.atom_timestamp}}" type="{{format.format|lower|mimetype}}"/>
length="{{format.uncompressed_size}}" title="{{format.format}}" mtime="{{entry.Books.atom_timestamp}}" type="{{format.format|lower|mimetype}}"/>
{% endfor %}
</entry>
{% endfor %}
@@ -48,7 +48,11 @@
<div class="row">
<div class="col errorlink">
{% if not unconfigured %}
<a href="{{url_for('web.index')}}" title="{{ _('Return to Home') }}">{{_('Return to Home')}}</a>
{% if goto_admin %}
<a href="{{url_for('admin.db_configuration')}}" title="{{ _('Return to Database config') }}">{{_('Return to Database config')}}</a>
{% else %}
<a href="{{url_for('web.index')}}" title="{{ _('Return to Home') }}">{{_('Return to Home')}}</a>
{% endif %}
{% else %}
<a href="{{url_for('web.logout')}}" title="{{ _('Logout User') }}">{{ _('Logout User') }}</a>
{% endif %}
@@ -16,7 +16,7 @@
<img
srcset="{{ srcset }}"
src="{{ url_for('web.get_series_cover', series_id=series.id, resolution='og', c='day'|cache_timestamp) }}"
alt="{{ book_title }}"
alt="{{ title }}"
loading="lazy"
/>
{%- endmacro %}
@@ -42,7 +42,7 @@
<a href="{{url_for('web.books_list', data='series', sort_param='stored', book_id=entry.Books.series[0].id )}}">
{{entry.Books.series[0].name}}
</a>
({{entry.Books.series_index|formatseriesindex}})
({{entry.Books.series_index|formatfloat(2)}})
</p>
{% endif %}
{% if entry.Books.ratings.__len__() > 0 %}
@@ -119,11 +119,9 @@
<a class="author-name" href="{{url_for('web.books_list', data='author', book_id=author.id, sort_param='stored') }}">{{author.name.replace('|',',')|shortentitle(30)}}</a>
{% endif %}
{% endfor %}
{% for format in entry.Books.data %}
{% if format.format|lower in g.constants.EXTENSIONS_AUDIO %}
{% if entry.Books.data|music %}
<span class="glyphicon glyphicon-music"></span>
{% endif %}
{%endfor%}
{% endif %}
</p>
{% if entry.Books.series.__len__() > 0 %}
<p class="series">
@@ -134,7 +132,7 @@
{% else %}
<span>{{entry.Books.series[0].name}}</span>
{% endif %}
({{entry.Books.series_index|formatseriesindex}})
({{entry.Books.series_index|formatfloat(2)}})
</p>
{% endif %}
{% if entry.Books.ratings.__len__() > 0 %}
@@ -37,11 +37,12 @@
<a class="navbar-brand" href="{{url_for('web.index')}}">{{instance}}</a>
</div>
{% if g.current_theme == 1 %}
<div class="home-btn"><a class="home-btn-tooltip" href="{{url_for("web.index",page=1)}}" data-toggle="tooltip" title="" data-placement="bottom" data-original-title="Home"></a></div>
<div class="home-btn"><a class="home-btn-tooltip" href="{{url_for("web.index", page=1)}}" data-toggle="tooltip" title="" data-placement="bottom" data-original-title="Home"></a></div>
<div class="plexBack"><a href="{{url_for('web.index')}}"></a></div>
{% endif %}
{% if current_user.is_authenticated or g.allow_anonymous %}
<form class="navbar-form navbar-left" role="search" action="{{url_for('search.simple_search')}}" method="GET">
<!--# margin 0, padding 15, background color-->
<form class="navbar-form navbar-left" role="search" action="{{url_for('search.simple_search')}}" method="GET">
<div class="form-group input-group input-group-sm">
<label for="query" class="sr-only">{{_('Search')}}</label>
<input type="text" class="form-control" id="query" name="query" placeholder="{{_('Search Library')}}" value="{{searchterm}}">
@@ -55,6 +56,7 @@
{% if current_user.is_authenticated or g.allow_anonymous %}
<ul class="nav navbar-nav ">
<li><a href="{{url_for('search.advanced_search')}}" id="advanced_search"><span class="glyphicon glyphicon-search"></span><span class="hidden-sm"> {{_('Advanced Search')}}</span></a></li>
{% if simple==true %} <li><a href="{{url_for('basic.index')}}" id="basic"><span class="glyphicon glyphicon-phone"></span><span>{{_('Simple Theme')}}</span></a></li>{% endif %}
</ul>
{% endif %}
<ul class="nav navbar-nav navbar-right" id="main-nav">
@@ -80,6 +82,7 @@
<div class="form-group">
<span class="btn btn-default btn-file">{{_('Upload')}}<input id="btn-upload" name="btn-upload"
type="file" accept="{% for format in accept %}.{% if format != ''%}{{format}}{% else %}*{% endif %}{{ ',' if not loop.last }}{% endfor %}" multiple></span>
<input class="hide" id="btn-upload2" name="btn-upload2" type="file" accept="{% for format in accept %}.{% if format != ''%}{{format}}{% else %}*{% endif %}{{ ',' if not loop.last }}{% endfor %}">
</div>
</form>
</li>
@@ -59,7 +59,7 @@
</div>
{% endif %}
{% if entry.series|length > 0 %}
<p>{{_("Book %(index)s of %(range)s", index=entry.series_index | formatfloat(2), range=(url_for('web.books_list', data='series', sort_param='stored', book_id=entry.series[0].id)|escapedlink(entry.series[0].name))|safe)}}</p>
<p>{{_("Book %(index)s of %(range)s", index=entry.series_index|formatfloat(2), range=(url_for('web.books_list', data='series', sort_param='stored', book_id=entry.series[0].id)|escapedlink(entry.series[0].name))|safe)}}</p>

{% endif %}
@@ -1,10 +0,0 @@
{
"input": {
"placeholder": "a placeholder"
},
"nav": {
"home": "Home",
"page1": "Page One",
"page2": "Page Two"
}
}
@@ -77,7 +77,7 @@
<div class="md-content">
<h3>{{_('Settings')}}</h3>
<div class="form-group themes" id="themes">
Choose a theme below: <br />
{{_('Choose a theme below:')}}<br />

<!-- Hardcoded a tick in the light theme button because it is the "default" theme. Need to find a way to do this dynamically on startup-->
<button type="button" id="lightTheme" class="lightTheme" onclick="selectTheme(this.id)"><span
@@ -126,8 +126,8 @@
window.calibre = {
filePath: "{{ url_for('static', filename='js/libs/') }}",
cssPath: "{{ url_for('static', filename='css/') }}",
bookmarkUrl: "{{ url_for('web.set_bookmark', book_id=bookid, book_format='EPUB') }}",
bookUrl: "{{ url_for('web.serve_book', book_id=bookid, book_format='epub', anyname='file.epub') }}",
bookmarkUrl: "{{ url_for('web.set_bookmark', book_id=bookid, book_format=book_format) }}",
bookUrl: "{{ url_for('web.serve_book', book_id=bookid, book_format=book_format, anyname='file.epub') }}",
bookmark: "{{ bookmark.bookmark_key if bookmark != None }}",
useBookmarks: "{{ current_user.is_authenticated | tojson }}"
};
@@ -135,19 +135,23 @@
window.themes = {
"darkTheme": {
"bgColor": "#202124",
"css_path": "{{ url_for('static', filename='css/epub_themes.css') }}"
"css_path": "{{ url_for('static', filename='css/epub_themes.css') }}",
"title-color": "#fff"
},
"lightTheme": {
"bgColor": "white",
"css_path": "{{ url_for('static', filename='css/epub_themes.css') }}"
"css_path": "{{ url_for('static', filename='css/epub_themes.css') }}",
"title-color": "#4f4f4f"
},
"sepiaTheme": {
"bgColor": "#ece1ca",
"css_path": "{{ url_for('static', filename='css/epub_themes.css') }}"
"css_path": "{{ url_for('static', filename='css/epub_themes.css') }}",
"title-color": "#4f4f4f"
},
"blackTheme": {
"bgColor": "black",
"css_path": "{{ url_for('static', filename='css/epub_themes.css') }}"
"css_path": "{{ url_for('static', filename='css/epub_themes.css') }}",
"title-color": "#fff"
},
};
@@ -170,6 +174,8 @@

// Apply theme to rest of the page.
document.getElementById("main").style.backgroundColor = themes[id]["bgColor"];
document.getElementById("titlebar").style.color = themes[id]["title-color"] || "#fff";
document.getElementById("progress").style.color = themes[id]["title-color"] || "#fff";
}

// font size settings logic
@@ -210,6 +216,6 @@
<script src="{{ url_for('static', filename='js/libs/screenfull.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/libs/reader.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/reading/epub.js') }}"></script>
<script src="{{ url_for('static', filename='js/reading/epub-progress.js') }}"></script>
<!--script src="{{ url_for('static', filename='js/reading/locationchange-polyfill.js') }}"></script-->
</body>
</html>
@@ -35,7 +35,7 @@ See https://github.com/adobe-type-tools/cmap-resources
<link rel="stylesheet" href="{{ url_for('static', filename='css/libs/viewer.css') }}">
<!-- This snippet is used in production (included from viewer.html) -->
<link rel="resource" type="application/l10n" href="{{ url_for('static', filename='locale/locale.json') }}">
<script src="{{ url_for('static', filename='js/libs/pdf.mjs') }}" type="module"></script>
<script src="{{ url_for('static', filename='js/libs/pdf.js') }}" type="module"></script>

<script type="text/javascript">
window.addEventListener('webviewerloaded', function() {
@@ -46,12 +46,12 @@ See https://github.com/adobe-type-tools/cmap-resources
PDFViewerApplicationOptions.set('cMapUrl', "{{ url_for('static', filename='cmaps/') }}");
PDFViewerApplicationOptions.set('sidebarViewOnLoad', 0);
PDFViewerApplicationOptions.set('imageResourcesPath', "{{ url_for('static', filename='css/images/') }}");
PDFViewerApplicationOptions.set('workerSrc', "{{ url_for('static', filename='js/libs/pdf.worker.mjs') }}");
PDFViewerApplicationOptions.set('workerSrc', "{{ url_for('static', filename='js/libs/pdf.worker.js') }}");
PDFViewerApplicationOptions.set('defaultUrl',"{{ url_for('web.serve_book', book_id=pdffile, book_format='pdf') }}")
});
</script>

<script src="{{ url_for('static', filename='js/libs/viewer.mjs') }}" type="module"></script>
<script src="{{ url_for('static', filename='js/libs/viewer.js') }}" type="module"></script>

</head>
@@ -73,18 +73,16 @@
<a class="author-name" href="{{url_for('web.books_list', data='author', sort_param='stored', book_id=author.id) }}">{{author.name.replace('|',',')|shortentitle(30)}}</a>
{% endif %}
{% endfor %}
{% for format in entry.Books.data %}
{% if format.format|lower in g.constants.EXTENSIONS_AUDIO %}
{% if entry.Books.data|music %}
<span class="glyphicon glyphicon-music"></span>
{% endif %}
{% endfor %}
{% endif %}
</p>
{% if entry.Books.series.__len__() > 0 %}
<p class="series">
<a href="{{url_for('web.books_list', data='series', sort_param='stored', book_id=entry.Books.series[0].id )}}">
{{entry.Books.series[0].name}}
</a>
({{entry.Books.series_index|formatseriesindex}})
({{entry.Books.series_index|formatfloat(2)}})
</p>
{% endif %}
@@ -5,12 +5,12 @@
<form role="form" id="search" action="{{ url_for('search.advanced_search_form') }}" method="POST">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
<div class="form-group">
<label for="book_title">{{_('Book Title')}}</label>
<input type="text" class="form-control" name="book_title" id="book_title" value="">
<label for="title">{{_('Book Title')}}</label>
<input type="text" class="form-control" name="title" id="title" value="">
</div>
<div class="form-group">
<label for="bookAuthor">{{_('Author')}}</label>
<input type="text" class="form-control typeahead" name="author_name" id="bookAuthor" value="" autocomplete="off">
<label for="authors">{{_('Author')}}</label>
<input type="text" class="form-control typeahead" name="authors" id="authors" value="" autocomplete="off">
</div>
<div class="form-group">
<label for="Publisher">{{_('Publisher')}}</label>
@@ -151,8 +151,8 @@
</div>
</div>
<div class="form-group">
<label for="comment">{{_('Description')}}</label>
<input type="text" class="form-control" name="comment" id="comment" value="">
<label for="comments">{{_('Description')}}</label>
<input type="text" class="form-control" name="comments" id="comments" value="">
</div>

{% if cc|length > 0 %}
Some files were not shown because too many files have changed in this diff.