mirror of https://github.com/janeczku/calibre-web synced 2024-06-14 01:16:48 +00:00

Compare commits


No commits in common. "master" and "0.6.19" have entirely different histories.

546 changed files with 130863 additions and 181209 deletions

.gitattributes vendored (1 change)

@ -1,5 +1,4 @@
constants.py ident export-subst
/test export-ignore
/library export-ignore
cps/static/css/libs/* linguist-vendored
cps/static/js/libs/* linguist-vendored


@ -6,23 +6,12 @@ labels: ''
assignees: ''
---
<!-- Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md) -->
## Short Notice from the maintainer
After 6 years of more or less intensive programming on Calibre-Web, I need a break.
The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.
Please also have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)
**Describe the bug/problem**
**Describe the bug/problem**
A clear and concise description of what the bug is. If you are asking for support, please check our [Wiki](https://github.com/janeczku/calibre-web/wiki) if your question is already answered there.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
@ -30,19 +19,15 @@ Steps to reproduce the behavior:
4. See error
**Logfile**
Add content of calibre-web.log file or the relevant error, try to reproduce your problem with "debug" log-level to get more output.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. Windows 10/Raspberry Pi OS]
- Python version: [e.g. python2.7]
- Calibre-Web version: [e.g. 0.6.8 or 087c4c59 (git rev-parse --short HEAD)]:
@ -52,4 +37,3 @@ If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here. [e.g. access via reverse proxy, database background sync, special database location]


@ -7,14 +7,7 @@ assignees: ''
---
# Short Notice from the maintainer
After 6 years of more or less intensive programming on Calibre-Web, I need a break.
The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.
Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)
<!-- Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md) -->
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

.gitignore vendored (2 changes)

@ -28,10 +28,8 @@ cps/cache
.idea/
*.bak
*.log.*
.key
settings.yaml
gdrive_credentials
client_secrets.json
gmail.json
/.key


@ -26,9 +26,9 @@ The Calibre-Web documentation is hosted in the Github [Wiki](https://github.com/
Do not open up a GitHub issue if the bug is a **security vulnerability** in Calibre-Web. Instead, please write an email to "ozzie.fernandez.isaacs@googlemail.com".
Ensure the **bug was not already reported** by searching on GitHub under [Issues](https://github.com/janeczku/calibre-web/issues). Please also check if a solution for your problem can be found in the [wiki](https://github.com/janeczku/calibre-web/wiki).
Ensure the ***bug was not already reported** by searching on GitHub under [Issues](https://github.com/janeczku/calibre-web/issues). Please also check if a solution for your problem can be found in the [wiki](https://github.com/janeczku/calibre-web/wiki).
If you're unable to find an **open issue** addressing the problem, open a [new one](https://github.com/janeczku/calibre-web/issues/new/choose). Be sure to include a **title** and **clear description**, as much relevant information as possible, the **issue form** helps you providing the right information. Deleting the form and just pasting the stack trace doesn't speed up fixing the problem. If your issue could be resolved, consider closing the issue.
If you're unable to find an **open issue** addressing the problem, open a [new one](https://github.com/janeczku/calibre-web/issues/new?assignees=&labels=&template=bug_report.md&title=). Be sure to include a **title** and **clear description**, as much relevant information as possible, the **issue form** helps you providing the right information. Deleting the form and just pasting the stack trace doesn't speed up fixing the problem. If your issue could be resolved, consider closing the issue.
### **Feature Request**

README.md (152 changes)

@ -1,125 +1,99 @@
# Short Notice from the maintainer
# About
After 6 years of more or less intensive programming on Calibre-Web, I need a break.
The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.
Calibre-Web is a web app providing a clean interface for browsing, reading and downloading eBooks using a valid [Calibre](https://calibre-ebook.com) database.
# Calibre-Web
Calibre-Web is a web app that offers a clean and intuitive interface for browsing, reading, and downloading eBooks using a valid [Calibre](https://calibre-ebook.com) database.
[![License](https://img.shields.io/github/license/janeczku/calibre-web?style=flat-square)](https://github.com/janeczku/calibre-web/blob/master/LICENSE)
![Commit Activity](https://img.shields.io/github/commit-activity/w/janeczku/calibre-web?logo=github&style=flat-square&label=commits)
[![All Releases](https://img.shields.io/github/downloads/janeczku/calibre-web/total?logo=github&style=flat-square)](https://github.com/janeczku/calibre-web/releases)
[![GitHub License](https://img.shields.io/github/license/janeczku/calibre-web?style=flat-square)](https://github.com/janeczku/calibre-web/blob/master/LICENSE)
[![GitHub commit activity](https://img.shields.io/github/commit-activity/w/janeczku/calibre-web?logo=github&style=flat-square&label=commits)]()
[![GitHub all releases](https://img.shields.io/github/downloads/janeczku/calibre-web/total?logo=github&style=flat-square)](https://github.com/janeczku/calibre-web/releases)
[![PyPI](https://img.shields.io/pypi/v/calibreweb?logo=pypi&logoColor=fff&style=flat-square)](https://pypi.org/project/calibreweb/)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/calibreweb?logo=pypi&logoColor=fff&style=flat-square)](https://pypi.org/project/calibreweb/)
[![Discord](https://img.shields.io/discord/838810113564344381?label=Discord&logo=discord&style=flat-square)](https://discord.gg/h2VsJ2NEfB)
<details>
<summary><strong>Table of Contents</strong> (click to expand)</summary>
1. [About](#calibre-web)
2. [Features](#features)
3. [Installation](#installation)
- [Installation via pip (recommended)](#installation-via-pip-recommended)
- [Quick start](#quick-start)
- [Requirements](#requirements)
4. [Docker Images](#docker-images)
5. [Contributor Recognition](#contributor-recognition)
6. [Contact](#contact)
7. [Contributing to Calibre-Web](#contributing-to-calibre-web)
</details>
*This software is a fork of [library](https://github.com/mutschler/calibreserver) and licensed under the GPL v3 License.*
![Main screen](https://github.com/janeczku/calibre-web/wiki/images/main_screen.png)
## Features
- Modern and responsive Bootstrap 3 HTML5 interface
- Full graphical setup
- Comprehensive user management with fine-grained per-user permissions
- Bootstrap 3 HTML5 interface
- full graphical setup
- User management with fine-grained per-user permissions
- Admin interface
- Multilingual user interface supporting 20+ languages ([supported languages](https://github.com/janeczku/calibre-web/wiki/Translation-Status))
- OPDS feed for eBook reader apps
- Advanced search and filtering options
- Custom book collection (shelves) creation
- eBook metadata editing and deletion support
- Metadata download from various sources (extensible via plugins)
- eBook conversion through Calibre binaries
- eBook download restriction to logged-in users
- Public user registration support
- Send eBooks to E-Readers with a single click
- Sync Kobo devices with your Calibre library
- In-browser eBook reading support for multiple formats
- Upload new books in various formats, including audio formats
- Calibre Custom Columns support
- Content hiding based on categories and Custom Column content per user
- User Interface in brazilian, czech, dutch, english, finnish, french, german, greek, hungarian, italian, japanese, khmer, korean, polish, russian, simplified and traditional chinese, spanish, swedish, turkish, ukrainian
- OPDS feed for eBook reader apps
- Filter and search by titles, authors, tags, series, book format and language
- Create a custom book collection (shelves)
- Support for editing eBook metadata and deleting eBooks from Calibre library
- Support for downloading eBook metadata from various sources, sources can be extended via external plugins
- Support for converting eBooks through Calibre binaries
- Restrict eBook download to logged-in users
- Support for public user registration
- Send eBooks to E-Readers with the click of a button
- Sync your Kobo devices through Calibre-Web with your Calibre library
- Support for reading eBooks directly in the browser (.txt, .epub, .pdf, .cbr, .cbt, .cbz, .djvu)
- Upload new books in many formats, including audio formats (.mp3, .m4a, .m4b)
- Support for Calibre Custom Columns
- Ability to hide content based on categories and Custom Column content per user
- Self-update capability
- "Magic Link" login for easy access on eReaders
- LDAP, Google/GitHub OAuth, and proxy authentication support
- "Magic Link" login to make it easy to log on eReaders
- Login via LDAP, google/github oauth and via proxy authentication
## Installation
#### Installation via pip (recommended)
1. Create a virtual environment for Calibre-Web to avoid conflicts with existing Python dependencies
2. Install Calibre-Web via pip: `pip install calibreweb` (or `pip3` depending on your OS/distro)
3. Install optional features via pip as needed, see [this page](https://github.com/janeczku/calibre-web/wiki/Dependencies-in-Calibre-Web-Linux-and-Windows) for details
4. Start Calibre-Web by typing `cps`
1. To avoid problems with already installed python dependencies, it's recommended to create a virtual environment for Calibre-Web
2. Install Calibre-Web via pip with the command `pip install calibreweb` (Depending on your OS and or distro the command could also be `pip3`).
3. Optional features can also be installed via pip, please refer to [this page](https://github.com/janeczku/calibre-web/wiki/Dependencies-in-Calibre-Web-Linux-and-Windows) for details
4. Calibre-Web can be started afterwards by typing `cps`
*Note: Raspberry Pi OS users may encounter issues during installation. If so, please update pip (`./venv/bin/python3 -m pip install --upgrade pip`) and/or install cargo (`sudo apt install cargo`) before retrying the installation.*
In the Wiki there are also examples for: a [manual installation](https://github.com/janeczku/calibre-web/wiki/Manual-installation), [installation on Linux Mint](https://github.com/janeczku/calibre-web/wiki/How-To:Install-Calibre-Web-in-Linux-Mint-19-or-20), [installation on a Cloud Provider](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-on-a-Cloud-Provider).
Refer to the Wiki for additional installation examples: [manual installation](https://github.com/janeczku/calibre-web/wiki/Manual-installation), [Linux Mint](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-in-Linux-Mint-19-or-20), [Cloud Provider](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-on-a-Cloud-Provider).
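For reference, the `cps` command installed in step 4 is only a thin console entry point. Below is a minimal sketch (not an official launcher) of starting Calibre-Web from Python instead, modelled on the `cps.py` wrapper shown further down in this diff; it assumes `calibreweb` is installed via pip, or that the script sits next to a source checkout containing the `cps` package.

```python
# Minimal sketch: start Calibre-Web from Python instead of the `cps` console script.
# Modelled on the repository's own cps.py wrapper; assumes the `cps` package is
# importable (installed via pip, or located next to this script in a source checkout).
import os
import sys

# When running from a source checkout, make the local `cps` package importable first.
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

from cps.main import main

if __name__ == '__main__':
    main()
```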
## Quick start
## Quick Start
Point your browser to `http://localhost:8083` or `http://localhost:8083/opds` for the OPDS catalog \
Login with default admin login \
Set `Location of Calibre database` to the path of the folder where your Calibre library (metadata.db) lives, push "submit" button \
Optionally a Google Drive can be used to host the calibre library [-> Using Google Drive integration](https://github.com/janeczku/calibre-web/wiki/G-Drive-Setup#using-google-drive-integration) \
Afterwards you can configure your Calibre-Web instance ([Basic Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#basic-configuration) and [UI Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#ui-configuration) on admin page)
1. Open your browser and navigate to `http://localhost:8083` or `http://localhost:8083/opds` for the OPDS catalog
2. Log in with the default admin credentials
3. If you don't have a Calibre database, you can use [this database](https://github.com/janeczku/calibre-web/raw/master/library/metadata.db) (move it out of the Calibre-Web folder to prevent overwriting during updates)
4. Set `Location of Calibre database` to the path of the folder containing your Calibre library (metadata.db) and click "Save"
5. Optionally, use Google Drive to host your Calibre library by following the [Google Drive integration guide](https://github.com/janeczku/calibre-web/wiki/G-Drive-Setup#using-google-drive-integration)
6. Configure your Calibre-Web instance via the admin page, referring to the [Basic Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#basic-configuration) and [UI Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#ui-configuration) guides
#### Default admin login:
*Username:* admin\
*Password:* admin123
#### Default Admin Login:
- **Username:** admin
- **Password:** admin123
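To sanity-check a fresh instance, the OPDS catalog mentioned in the quick start can be fetched with any HTTP client. Below is a rough sketch using Python's third-party `requests` package (an assumption of this example, not a Calibre-Web dependency); the URL and default admin credentials are the ones listed above, and OPDS access is assumed to use HTTP Basic authentication.

```python
# Rough sketch: fetch the OPDS catalog from a local Calibre-Web instance.
# Uses the default URL and admin credentials from this README; `requests` is a
# third-party package installed separately (`pip install requests`).
import requests

OPDS_URL = "http://localhost:8083/opds"  # adjust host/port for your instance

# OPDS access is assumed to use HTTP Basic authentication here.
response = requests.get(OPDS_URL, auth=("admin", "admin123"), timeout=10)
response.raise_for_status()

# The catalog is an Atom/XML feed; print the start of it as a quick smoke test.
print(response.text[:500])
```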
## Requirements
- Python 3.5+
- [Imagemagick](https://imagemagick.org/script/download.php) for cover extraction from EPUBs (Windows users may need to install [Ghostscript](https://ghostscript.com/releases/gsdnld.html) for PDF cover extraction)
- Optional: [Calibre desktop program](https://calibre-ebook.com/download) for on-the-fly conversion and metadata editing (set "calibre's converter tool" path on the setup page)
- Optional: [Kepubify tool](https://github.com/pgaskin/kepubify/releases/latest) for Kobo device support (place the binary in `/opt/kepubify` on Linux or `C:\Program Files\kepubify` on Windows)
python 3.5+
Optionally, to enable on-the-fly conversion from one ebook format to another when using the send-to-ereader feature, or during editing of ebooks metadata:
[Download and install](https://calibre-ebook.com/download) the Calibre desktop program for your platform and enter the folder including program name (normally /opt/calibre/ebook-convert, or C:\Program Files\calibre\ebook-convert.exe) in the field "calibre's converter tool" on the setup page.
[Download](https://github.com/pgaskin/kepubify/releases/latest) Kepubify tool for your platform and place the binary starting with `kepubify` in Linux: `/opt/kepubify` Windows: `C:\Program Files\kepubify`.
## Docker Images
Pre-built Docker images are available in the following Docker Hub repositories (maintained by the LinuxServer team):
A pre-built Docker image is available in these Docker Hub repository (maintained by the LinuxServer team):
#### **LinuxServer - x64, aarch64**
- [Docker Hub](https://hub.docker.com/r/linuxserver/calibre-web)
- [GitHub](https://github.com/linuxserver/docker-calibre-web)
- [GitHub - Optional Calibre layer](https://github.com/linuxserver/docker-mods/tree/universal-calibre)
#### **LinuxServer - x64, armhf, aarch64**
+ Docker Hub - [https://hub.docker.com/r/linuxserver/calibre-web](https://hub.docker.com/r/linuxserver/calibre-web)
+ Github - [https://github.com/linuxserver/docker-calibre-web](https://github.com/linuxserver/docker-calibre-web)
+ Github - (Optional Calibre layer) - [https://github.com/linuxserver/docker-calibre-web/tree/calibre](https://github.com/linuxserver/docker-calibre-web/tree/calibre)
Include the environment variable `DOCKER_MODS=linuxserver/mods:universal-calibre` in your Docker run/compose file to add the Calibre `ebook-convert` binary (x64 only). Omit this variable for a lightweight image.
This image has the option to pull in an extra docker manifest layer to include the Calibre `ebook-convert` binary. Just include the environmental variable `DOCKER_MODS=linuxserver/calibre-web:calibre` in your docker run/docker compose file. **(x64 only)**
If you do not need this functionality then this can be omitted, keeping the image as lightweight as possible.
Both the Calibre-Web and Calibre-Mod images are rebuilt automatically on new releases of Calibre-Web and Calibre respectively, and on updates to any included base image packages on a weekly basis if required.
+ The "path to convertertool" should be set to `/usr/bin/ebook-convert`
+ The "path to unrar" should be set to `/usr/bin/unrar`
Both the Calibre-Web and Calibre-Mod images are automatically rebuilt on new releases and updates.
# Contact
- Set "path to convertertool" to `/usr/bin/ebook-convert`
- Set "path to unrar" to `/usr/bin/unrar`
Just reach us out on [Discord](https://discord.gg/h2VsJ2NEfB)
## Contributor Recognition
For further information, How To's and FAQ please check the [Wiki](https://github.com/janeczku/calibre-web/wiki)
We would like to thank all the [contributors](https://github.com/janeczku/calibre-web/graphs/contributors) and maintainers of Calibre-Web for their valuable input and dedication to the project. Your contributions are greatly appreciated.
# Contributing to Calibre-Web
## Contact
Join us on [Discord](https://discord.gg/h2VsJ2NEfB)
For more information, How To's, and FAQs, please visit the [Wiki](https://github.com/janeczku/calibre-web/wiki)
## Contributing to Calibre-Web
Check out our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)
Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)


@ -38,13 +38,6 @@ To receive fixes for security vulnerabilities it is required to always upgrade t
| V 0.6.18 | Possible SQL Injection is prevented in user table Thanks to Iman Sharafaldin (Forward Security) |CVE-2022-30765|
| V 0.6.18 | The SSRF protection no longer can be bypassed by IPV6/IPV4 embedding. Thanks to @416e6e61 |CVE-2022-0939|
| V 0.6.18 | The SSRF protection no longer can be bypassed to connect to other servers in the local network. Thanks to @michaellrowley |CVE-2022-0990|
| V 0.6.20 | Credentials for emails are now stored encrypted ||
| V 0.6.20 | Login is rate limited ||
| V 0.6.20 | Passwordstrength can be forced ||
| V 0.6.21 | SMTP server credentials are no longer returned to client ||
| V 0.6.21 | Cross-site scripting (XSS) stored in href bypasses filter using data wrapper no longer possible ||
| V 0.6.21 | Cross-site scripting (XSS) is no longer possible via pathchooser ||
| V 0.6.21 | Error Handling at non existent rating, language, and user downloaded books was fixed ||
## Statement regarding Log4j (CVE-2021-44228 and related)


@ -1,4 +1,3 @@
[python: **.py]
# has to be executed with jinja2 >=2.9 to have autoescape enabled automatically
[jinja2: **/templates/**.*ml]
extensions=jinja2.ext.autoescape,jinja2.ext.with_

cps.py (22 changes)

@ -21,33 +21,13 @@ import os
import sys
# Add local path to sys.path, so we can import cps
# Add local path to sys.path so we can import cps
path = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, path)
from cps.main import main
def hide_console_windows():
    import ctypes
    import os
    hwnd = ctypes.windll.kernel32.GetConsoleWindow()
    if hwnd != 0:
        try:
            import win32process
        except ImportError:
            print("To hide console window install 'pywin32' using 'pip install pywin32'")
            return
        ctypes.windll.user32.ShowWindow(hwnd, 0)
        ctypes.windll.kernel32.CloseHandle(hwnd)
        _, pid = win32process.GetWindowThreadProcessId(hwnd)
        os.system('taskkill /PID ' + str(pid) + ' /f')
if __name__ == '__main__':
    if os.name == "nt":
        hide_console_windows()
    main()


@ -21,32 +21,15 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from flask_login import LoginManager, confirm_login
from flask import session, current_app
from flask_login.utils import decode_cookie
from flask_login.signals import user_loaded_from_cookie
from flask_login import LoginManager
from flask import session
class MyLoginManager(LoginManager):
def _session_protection_failed(self):
sess = session._get_current_object()
_session = session._get_current_object()
ident = self._session_identifier_generator()
if(sess and not (len(sess) == 1
and sess.get('csrf_token', None))) and ident != sess.get('_id', None):
if(_session and not (len(_session) == 1
and _session.get('csrf_token', None))) and ident != _session.get('_id', None):
return super()._session_protection_failed()
return False
def _load_user_from_remember_cookie(self, cookie):
user_id = decode_cookie(cookie)
if user_id is not None:
session["_user_id"] = user_id
session["_fresh"] = False
user = None
if self._user_callback:
user = self._user_callback(user_id)
if user is not None:
app = current_app._get_current_object()
user_loaded_from_cookie.send(app, user=user)
# if session was restored from remember me cookie make login valid
confirm_login()
return user
return None

cps/__init__.py (100 changes) Executable file → Normal file

@ -36,16 +36,11 @@ from .reverseproxy import ReverseProxied
from .server import WebServer
from .dep_check import dependency_check
from .updater import Updater
from .babel import babel, get_locale
from .babel import babel
from . import config_sql
from . import cache_buster
from . import ub, db
try:
from flask_limiter import Limiter
limiter_present = True
except ImportError:
limiter_present = False
try:
from flask_wtf.csrf import CSRFProtect
wtf_present = True
@ -64,8 +59,7 @@ mimetypes.add_type('application/x-mobi8-ebook', '.azw3')
mimetypes.add_type('application/x-cbr', '.cbr')
mimetypes.add_type('application/x-cbz', '.cbz')
mimetypes.add_type('application/x-cbt', '.cbt')
mimetypes.add_type('application/x-cb7', '.cb7')
mimetypes.add_type('image/vnd.djv', '.djv')
mimetypes.add_type('image/vnd.djvu', '.djvu')
mimetypes.add_type('application/mpeg', '.mpeg')
mimetypes.add_type('application/mpeg', '.mp3')
mimetypes.add_type('application/mp4', '.m4a')
@ -87,9 +81,9 @@ app.config.update(
lm = MyLoginManager()
cli_param = CliParameter()
config = config_sql._ConfigSQL()
config = config_sql.ConfigSQL()
cli_param = CliParameter()
if wtf_present:
csrf = CSRFProtect()
@ -102,28 +96,32 @@ web_server = WebServer()
updater_thread = Updater()
if limiter_present:
limiter = Limiter(key_func=True, headers_enabled=True, auto_check=False, swallow_errors=False)
else:
limiter = None
def create_app():
lm.login_view = 'web.login'
lm.anonymous_user = ub.Anonymous
lm.session_protection = 'strong'
if csrf:
csrf.init_app(app)
cli_param.init()
ub.init_db(cli_param.settings_path)
ub.init_db(cli_param.settings_path, cli_param.user_credentials)
# pylint: disable=no-member
encrypt_key, error = config_sql.get_encryption_key(os.path.dirname(cli_param.settings_path))
config_sql.load_configuration(config, ub.session, cli_param)
config_sql.load_configuration(ub.session, encrypt_key)
config.init_config(ub.session, encrypt_key, cli_param)
db.CalibreDB.update_config(config)
db.CalibreDB.setup_db(config.config_calibre_dir, cli_param.settings_path)
calibre_db.init_db()
if error:
log.error(error)
ub.password_change(cli_param.user_credentials)
updater_thread.init_updater(config, web_server)
# Perform dry run of updater and exit afterwards
if cli_param.dry_run:
updater_thread.dry_run()
sys.exit(0)
updater_thread.start()
if sys.version_info < (3, 0):
log.info(
@ -134,32 +132,15 @@ def create_app():
'please update your installation to Python3 ***')
web_server.stop(True)
sys.exit(5)
lm.login_view = 'web.login'
lm.anonymous_user = ub.Anonymous
lm.session_protection = 'strong' if config.config_session == 1 else "basic"
db.CalibreDB.update_config(config)
db.CalibreDB.setup_db(config.config_calibre_dir, cli_param.settings_path)
calibre_db.init_db()
updater_thread.init_updater(config, web_server)
# Perform dry run of updater and exit afterward
if cli_param.dry_run:
updater_thread.dry_run()
sys.exit(0)
updater_thread.start()
requirements = dependency_check()
for res in requirements:
if res['found'] == "not installed":
message = ('Cannot import {name} module, it is needed to run calibre-web, '
'please install it using "pip install {name}"').format(name=res["name"])
log.info(message)
print("*** " + message + " ***")
web_server.stop(True)
sys.exit(8)
for res in requirements + dependency_check(True):
log.info('*** "{}" version does not meet the requirements. '
if not wtf_present:
log.info('*** "flask-WTF" is needed for calibre-web to run. '
'Please install it using pip: "pip install flask-WTF" ***')
print('*** "flask-WTF" is needed for calibre-web to run. '
'Please install it using pip: "pip install flask-WTF" ***')
web_server.stop(True)
sys.exit(7)
for res in dependency_check() + dependency_check(True):
log.info('*** "{}" version does not fit the requirements. '
'Should: {}, Found: {}, please consider installing required version ***'
.format(res['name'],
res['target'],
@ -169,16 +150,14 @@ def create_app():
if os.environ.get('FLASK_DEBUG'):
cache_buster.init_cache_busting(app)
log.info('Starting Calibre Web...')
Principal(app)
lm.init_app(app)
app.secret_key = os.getenv('SECRET_KEY', config_sql.get_flask_session_key(ub.session))
web_server.init_app(app, config)
if hasattr(babel, "localeselector"):
babel.init_app(app)
babel.localeselector(get_locale)
else:
babel.init_app(app, locale_selector=get_locale)
babel.init_app(app)
from . import services
@ -186,22 +165,9 @@ def create_app():
services.ldap.init_app(app, config)
if services.goodreads_support:
services.goodreads_support.connect(config.config_goodreads_api_key,
config.config_goodreads_api_secret,
config.config_use_goodreads)
config.store_calibre_uuid(calibre_db, db.Library_Id)
# Configure rate limiter
# https://limits.readthedocs.io/en/stable/storage.html
app.config.update(RATELIMIT_ENABLED=config.config_ratelimiter)
if config.config_limiter_uri != "" and not cli_param.memory_backend:
app.config.update(RATELIMIT_STORAGE_URI=config.config_limiter_uri)
if config.config_limiter_options != "":
app.config.update(RATELIMIT_STORAGE_OPTIONS=config.config_limiter_options)
try:
limiter.init_app(app)
except Exception as e:
log.error('Wrong Flask Limiter configuration, falling back to default: {}'.format(e))
app.config.update(RATELIMIT_STORAGE_URI=None)
limiter.init_app(app)
# Register scheduled tasks
from .schedule import register_scheduled_tasks, register_startup_tasks
register_scheduled_tasks(config.schedule_reconnect)


@ -49,9 +49,9 @@ sorted_modules = OrderedDict((sorted(modules.items(), key=lambda x: x[0].casefol
def collect_stats():
if constants.NIGHTLY_VERSION[0] == "$Format:%H$":
calibre_web_version = constants.STABLE_VERSION['version'].replace("b", " Beta")
calibre_web_version = constants.STABLE_VERSION['version']
else:
calibre_web_version = (constants.STABLE_VERSION['version'].replace("b", " Beta") + ' - '
calibre_web_version = (constants.STABLE_VERSION['version'] + ' - '
+ constants.NIGHTLY_VERSION[0].replace('%', '%%') + ' - '
+ constants.NIGHTLY_VERSION[1].replace('%', '%%'))
@ -81,4 +81,4 @@ def stats():
categories = calibre_db.session.query(db.Tags).count()
series = calibre_db.session.query(db.Series).count()
return render_title_template('stats.html', bookcounter=counter, authorcounter=authors, versions=collect_stats(),
categorycounter=categories, seriecounter=series, title=_("Statistics"), page="stat")
categorycounter=categories, seriecounter=series, title=_(u"Statistics"), page="stat")


@ -22,21 +22,19 @@
import os
import re
import base64
import json
import operator
import time
import sys
import string
from datetime import datetime, timedelta
from datetime import time as datetime_time
from functools import wraps
from urllib.parse import urlparse
from flask import Blueprint, flash, redirect, url_for, abort, request, make_response, send_from_directory, g, Response
from markupsafe import Markup
from flask_login import login_required, current_user, logout_user
from flask_login import login_required, current_user, logout_user, confirm_login
from flask_babel import gettext as _
from flask_babel import get_locale, format_time, format_datetime, format_timedelta
from flask_babel import get_locale, format_time, format_datetime, format_timedelta
from flask import session as flask_session
from sqlalchemy import and_
from sqlalchemy.orm.attributes import flag_modified
@ -48,35 +46,33 @@ from . import db, calibre_db, ub, web_server, config, updater_thread, gdriveutil
kobo_sync_status, schedule
from .helper import check_valid_domain, send_test_mail, reset_password, generate_password_hash, check_email, \
valid_email, check_username
from .embed_helper import get_calibre_binarypath
from .gdriveutils import is_gdrive_ready, gdrive_support
from .render_template import render_title_template, get_sidebar_config
from .services.worker import WorkerThread
from .babel import get_available_translations, get_available_locale, get_user_locale_language
from . import debug_info
log = logger.create()
feature_support = {
'ldap': bool(services.ldap),
'goodreads': bool(services.goodreads_support),
'kobo': bool(services.kobo),
'updater': constants.UPDATER_AVAILABLE,
'gmail': bool(services.gmail),
'scheduler': schedule.use_APScheduler,
'gdrive': gdrive_support
}
'ldap': bool(services.ldap),
'goodreads': bool(services.goodreads_support),
'kobo': bool(services.kobo),
'updater': constants.UPDATER_AVAILABLE,
'gmail': bool(services.gmail),
'scheduler': schedule.use_APScheduler,
'gdrive': gdrive_support
}
try:
import rarfile # pylint: disable=unused-import
feature_support['rar'] = True
except (ImportError, SyntaxError):
feature_support['rar'] = False
try:
from .oauth_bb import oauth_check, oauthblueprints
feature_support['oauth'] = True
except ImportError as err:
log.debug('Cannot import Flask-Dance, login with Oauth will not work: %s', err)
@ -84,6 +80,7 @@ except ImportError as err:
oauthblueprints = []
oauth_check = {}
admi = Blueprint('admin', __name__)
@ -103,26 +100,25 @@ def admin_required(f):
@admi.before_app_request
def before_request():
try:
if not ub.check_user_session(current_user.id,
flask_session.get('_id')) and 'opds' not in request.path \
and config.config_session == 1:
logout_user()
except AttributeError:
pass # ? fails on requesting /ajax/emailstat during restart ?
# make remember me function work
if current_user.is_authenticated:
confirm_login()
if not ub.check_user_session(current_user.id, flask_session.get('_id')) and 'opds' not in request.path:
logout_user()
g.constants = constants
g.google_site_verification = os.getenv('GOOGLE_SITE_VERIFICATION', '')
g.user = current_user
g.allow_registration = config.config_public_reg
g.allow_anonymous = config.config_anonbrowse
g.allow_upload = config.config_uploading
g.current_theme = config.config_theme
g.config_authors_max = config.config_authors_max
g.shelves_access = ub.session.query(ub.Shelf).filter(
or_(ub.Shelf.is_public == 1, ub.Shelf.user_id == current_user.id)).order_by(ub.Shelf.name).all()
if '/static/' not in request.path and not config.db_configured and \
request.endpoint not in ('admin.ajax_db_config',
'admin.simulatedbchange',
'admin.db_configuration',
'web.login',
'web.login_post',
'web.logout',
'admin.load_dialogtexts',
'admin.ajax_pathchooser'):
@ -140,39 +136,28 @@ def admin_forbidden():
@admin_required
def shutdown():
task = request.get_json().get('parameter', -1)
show_text = {}
showtext = {}
if task in (0, 1): # valid commandos received
# close all database connections
calibre_db.dispose()
ub.dispose()
if task == 0:
show_text['text'] = _('Server restarted, please reload page.')
showtext['text'] = _(u'Server restarted, please reload page')
else:
show_text['text'] = _('Performing Server shutdown, please close window.')
showtext['text'] = _(u'Performing shutdown of server, please close window')
# stop gevent/tornado server
web_server.stop(task == 0)
return json.dumps(show_text)
return json.dumps(showtext)
if task == 2:
log.warning("reconnecting to calibre database")
calibre_db.reconnect_db(config, ub.app_DB_path)
show_text['text'] = _('Success! Database Reconnected')
return json.dumps(show_text)
showtext['text'] = _(u'Reconnect successful')
return json.dumps(showtext)
show_text['text'] = _('Unknown command')
return json.dumps(show_text), 400
@admi.route("/metadata_backup", methods=["POST"])
@login_required
@admin_required
def queue_metadata_backup():
show_text = {}
log.warning("Queuing all books for metadata backup")
helper.set_all_metadata_dirty()
show_text['text'] = _('Success! Books queued for Metadata Backup, please check Tasks for result')
return json.dumps(show_text)
showtext['text'] = _(u'Unknown command')
return json.dumps(showtext), 400
# method is available without login and not protected by CSRF to make it easy reachable, is per default switched off
@ -204,32 +189,32 @@ def update_thumbnails():
def admin():
version = updater_thread.get_current_version_info()
if version is False:
commit = _('Unknown')
commit = _(u'Unknown')
else:
if 'datetime' in version:
commit = version['datetime']
tz = timedelta(seconds=time.timezone if (time.localtime().tm_isdst == 0) else time.altzone)
form_date = datetime.strptime(commit[:19], "%Y-%m-%dT%H:%M:%S")
if len(commit) > 19: # check if string has timezone
if len(commit) > 19: # check if string has timezone
if commit[19] == '+':
form_date -= timedelta(hours=int(commit[20:22]), minutes=int(commit[23:]))
elif commit[19] == '-':
form_date += timedelta(hours=int(commit[20:22]), minutes=int(commit[23:]))
commit = format_datetime(form_date - tz, format='short')
else:
commit = version['version'].replace("b", " Beta")
commit = version['version']
all_user = ub.session.query(ub.User).all()
# email_settings = mail_config.get_mail_settings()
email_settings = config.get_mail_settings()
schedule_time = format_time(datetime_time(hour=config.schedule_start_time), format="short")
t = timedelta(hours=config.schedule_duration // 60, minutes=config.schedule_duration % 60)
schedule_duration = format_timedelta(t, threshold=.99)
return render_title_template("admin.html", allUser=all_user, config=config, commit=commit,
return render_title_template("admin.html", allUser=all_user, email=email_settings, config=config, commit=commit,
feature_support=feature_support, schedule_time=schedule_time,
schedule_duration=schedule_duration,
title=_("Admin page"), page="admin")
title=_(u"Admin page"), page="admin")
@admi.route("/admin/dbconfig", methods=["GET", "POST"])
@ -249,7 +234,7 @@ def configuration():
config=config,
provider=oauthblueprints,
feature_support=feature_support,
title=_("Basic Configuration"), page="config")
title=_(u"Basic Configuration"), page="config")
@admi.route("/admin/ajaxconfig", methods=["POST"])
@ -277,9 +262,9 @@ def calibreweb_alive():
@login_required
@admin_required
def view_configuration():
read_column = calibre_db.session.query(db.CustomColumns) \
read_column = calibre_db.session.query(db.CustomColumns)\
.filter(and_(db.CustomColumns.datatype == 'bool', db.CustomColumns.mark_for_delete == 0)).all()
restrict_columns = calibre_db.session.query(db.CustomColumns) \
restrict_columns = calibre_db.session.query(db.CustomColumns)\
.filter(and_(db.CustomColumns.datatype == 'text', db.CustomColumns.mark_for_delete == 0)).all()
languages = calibre_db.speaking_language()
translations = get_available_locale()
@ -287,7 +272,7 @@ def view_configuration():
restrictColumns=restrict_columns,
languages=languages,
translations=translations,
title=_("UI Configuration"), page="uiconfig")
title=_(u"UI Configuration"), page="uiconfig")
@admi.route("/admin/usertable")
@ -298,11 +283,11 @@ def edit_user_table():
languages = calibre_db.speaking_language()
translations = get_available_locale()
all_user = ub.session.query(ub.User)
tags = calibre_db.session.query(db.Tags) \
.join(db.books_tags_link) \
.join(db.Books) \
tags = calibre_db.session.query(db.Tags)\
.join(db.books_tags_link)\
.join(db.Books)\
.filter(calibre_db.common_filters()) \
.group_by(text('books_tags_link.tag')) \
.group_by(text('books_tags_link.tag'))\
.order_by(db.Tags.name).all()
if config.config_restricted_column:
custom_values = calibre_db.session.query(db.cc_classes[config.config_restricted_column]).all()
@ -321,7 +306,7 @@ def edit_user_table():
all_roles=constants.ALL_ROLES,
kobo_support=kobo_support,
sidebar_settings=constants.sidebar_settings,
title=_("Edit Users"),
title=_(u"Edit Users"),
page="usertable")
@ -479,20 +464,20 @@ def edit_list_user(param):
elif param.endswith('role'):
value = int(vals['field_index'])
if user.name == "Guest" and value in \
[constants.ROLE_ADMIN, constants.ROLE_PASSWD, constants.ROLE_EDIT_SHELFS]:
[constants.ROLE_ADMIN, constants.ROLE_PASSWD, constants.ROLE_EDIT_SHELFS]:
raise Exception(_("Guest can't have this role"))
# check for valid value, last on checks for power of 2 value
if value > 0 and value <= constants.ROLE_VIEWER and (value & value - 1 == 0 or value == 1):
if value > 0 and value <= constants.ROLE_VIEWER and (value & value-1 == 0 or value == 1):
if vals['value'] == 'true':
user.role |= value
elif vals['value'] == 'false':
if value == constants.ROLE_ADMIN:
if not ub.session.query(ub.User). \
filter(ub.User.role.op('&')(constants.ROLE_ADMIN) == constants.ROLE_ADMIN,
ub.User.id != user.id).count():
if not ub.session.query(ub.User).\
filter(ub.User.role.op('&')(constants.ROLE_ADMIN) == constants.ROLE_ADMIN,
ub.User.id != user.id).count():
return Response(
json.dumps([{'type': "danger",
'message': _("No admin user remaining, can't remove admin role",
'message': _(u"No admin user remaining, can't remove admin role",
nick=user.name)}]), mimetype='application/json')
user.role &= ~value
else:
@ -504,7 +489,7 @@ def edit_list_user(param):
if user.name == "Guest" and value == constants.SIDEBAR_READ_AND_UNREAD:
raise Exception(_("Guest can't have this view"))
# check for valid value, last on checks for power of 2 value
if value > 0 and value <= constants.SIDEBAR_LIST and (value & value - 1 == 0 or value == 1):
if value > 0 and value <= constants.SIDEBAR_LIST and (value & value-1 == 0 or value == 1):
if vals['value'] == 'true':
user.sidebar_view |= value
elif vals['value'] == 'false':
@ -569,13 +554,13 @@ def update_view_configuration():
calibre_db.update_title_sort(config)
if not check_valid_read_column(to_save.get("config_read_column", "0")):
flash(_("Invalid Read Column"), category="error")
flash(_(u"Invalid Read Column"), category="error")
log.debug("Invalid Read column")
return view_configuration()
_config_int(to_save, "config_read_column")
if not check_valid_restricted_column(to_save.get("config_restricted_column", "0")):
flash(_("Invalid Restricted Column"), category="error")
flash(_(u"Invalid Restricted Column"), category="error")
log.debug("Invalid Restricted Column")
return view_configuration()
_config_int(to_save, "config_restricted_column")
@ -595,7 +580,7 @@ def update_view_configuration():
config.config_default_show |= constants.DETAIL_RANDOM
config.save()
flash(_("Calibre-Web configuration updated"), category="success")
flash(_(u"Calibre-Web configuration updated"), category="success")
log.debug("Calibre-Web configuration updated")
before_request()
@ -657,7 +642,7 @@ def edit_domain(allow):
@admin_required
def add_domain(allow):
domain_name = request.form.to_dict()['domainname'].replace('*', '%').replace('?', '_').lower()
check = ub.session.query(ub.Registration).filter(ub.Registration.domain == domain_name) \
check = ub.session.query(ub.Registration).filter(ub.Registration.domain == domain_name)\
.filter(ub.Registration.allow == allow).first()
if not check:
new_domain = ub.Registration(domain=domain_name, allow=allow)
@ -875,16 +860,16 @@ def delete_restriction(res_type, user_id):
@login_required
@admin_required
def list_restriction(res_type, user_id):
if res_type == 0: # Tags as template
restrict = [{'Element': x, 'type': _('Deny'), 'id': 'd' + str(i)}
if res_type == 0: # Tags as template
restrict = [{'Element': x, 'type': _('Deny'), 'id': 'd'+str(i)}
for i, x in enumerate(config.list_denied_tags()) if x != '']
allow = [{'Element': x, 'type': _('Allow'), 'id': 'a' + str(i)}
allow = [{'Element': x, 'type': _('Allow'), 'id': 'a'+str(i)}
for i, x in enumerate(config.list_allowed_tags()) if x != '']
json_dumps = restrict + allow
elif res_type == 1: # CustomC as template
restrict = [{'Element': x, 'type': _('Deny'), 'id': 'd' + str(i)}
restrict = [{'Element': x, 'type': _('Deny'), 'id': 'd'+str(i)}
for i, x in enumerate(config.list_denied_column_values()) if x != '']
allow = [{'Element': x, 'type': _('Allow'), 'id': 'a' + str(i)}
allow = [{'Element': x, 'type': _('Allow'), 'id': 'a'+str(i)}
for i, x in enumerate(config.list_allowed_column_values()) if x != '']
json_dumps = restrict + allow
elif res_type == 2: # Tags per user
@ -892,9 +877,9 @@ def list_restriction(res_type, user_id):
usr = ub.session.query(ub.User).filter(ub.User.id == user_id).first()
else:
usr = current_user
restrict = [{'Element': x, 'type': _('Deny'), 'id': 'd' + str(i)}
restrict = [{'Element': x, 'type': _('Deny'), 'id': 'd'+str(i)}
for i, x in enumerate(usr.list_denied_tags()) if x != '']
allow = [{'Element': x, 'type': _('Allow'), 'id': 'a' + str(i)}
allow = [{'Element': x, 'type': _('Allow'), 'id': 'a'+str(i)}
for i, x in enumerate(usr.list_allowed_tags()) if x != '']
json_dumps = restrict + allow
elif res_type == 3: # CustomC per user
@ -902,9 +887,9 @@ def list_restriction(res_type, user_id):
usr = ub.session.query(ub.User).filter(ub.User.id == user_id).first()
else:
usr = current_user
restrict = [{'Element': x, 'type': _('Deny'), 'id': 'd' + str(i)}
restrict = [{'Element': x, 'type': _('Deny'), 'id': 'd'+str(i)}
for i, x in enumerate(usr.list_denied_column_values()) if x != '']
allow = [{'Element': x, 'type': _('Allow'), 'id': 'a' + str(i)}
allow = [{'Element': x, 'type': _('Allow'), 'id': 'a'+str(i)}
for i, x in enumerate(usr.list_allowed_column_values()) if x != '']
json_dumps = restrict + allow
else:
@ -917,15 +902,11 @@ def list_restriction(res_type, user_id):
@admi.route("/ajax/fullsync", methods=["POST"])
@login_required
def ajax_self_fullsync():
return do_full_kobo_sync(current_user.id)
@admi.route("/ajax/fullsync/<int:userid>", methods=["POST"])
@login_required
@admin_required
def ajax_fullsync(userid):
return do_full_kobo_sync(userid)
def ajax_fullsync():
count = ub.session.query(ub.KoboSyncedBooks).filter(current_user.id == ub.KoboSyncedBooks.user_id).delete()
message = _("{} sync entries deleted").format(count)
ub.session_commit(message)
return Response(json.dumps([{"type": "success", "message": message}]), mimetype='application/json')
@admi.route("/ajax/pathchooser/")
@ -935,17 +916,10 @@ def ajax_pathchooser():
return pathchooser()
def do_full_kobo_sync(userid):
count = ub.session.query(ub.KoboSyncedBooks).filter(userid == ub.KoboSyncedBooks.user_id).delete()
message = _("{} sync entries deleted").format(count)
ub.session_commit(message)
return Response(json.dumps([{"type": "success", "message": message}]), mimetype='application/json')
def check_valid_read_column(column):
if column != "0":
if not calibre_db.session.query(db.CustomColumns).filter(db.CustomColumns.id == column) \
.filter(and_(db.CustomColumns.datatype == 'bool', db.CustomColumns.mark_for_delete == 0)).all():
.filter(and_(db.CustomColumns.datatype == 'bool', db.CustomColumns.mark_for_delete == 0)).all():
return False
return True
@ -953,7 +927,7 @@ def check_valid_read_column(column):
def check_valid_restricted_column(column):
if column != "0":
if not calibre_db.session.query(db.CustomColumns).filter(db.CustomColumns.id == column) \
.filter(and_(db.CustomColumns.datatype == 'text', db.CustomColumns.mark_for_delete == 0)).all():
.filter(and_(db.CustomColumns.datatype == 'text', db.CustomColumns.mark_for_delete == 0)).all():
return False
return True
@ -981,7 +955,7 @@ def prepare_tags(user, action, tags_name, id_list):
raise Exception(_("Tag not found"))
new_tags_list = [x.name for x in tags]
else:
tags = calibre_db.session.query(db.cc_classes[config.config_restricted_column]) \
tags = calibre_db.session.query(db.cc_classes[config.config_restricted_column])\
.filter(db.cc_classes[config.config_restricted_column].id.in_(id_list)).all()
new_tags_list = [x.value for x in tags]
saved_tags_list = user.__dict__[tags_name].split(",") if len(user.__dict__[tags_name]) else []
@ -994,19 +968,6 @@ def prepare_tags(user, action, tags_name, id_list):
return ",".join(saved_tags_list)
def get_drives(current):
drive_letters = []
for d in string.ascii_uppercase:
if os.path.exists('{}:'.format(d)) and current[0].lower() != d.lower():
drive = "{}:\\".format(d)
data = {"name": drive, "fullpath": drive}
data["sort"] = "_" + data["fullpath"].lower()
data["type"] = "dir"
data["size"] = ""
drive_letters.append(data)
return drive_letters
def pathchooser():
browse_for = "folder"
folder_only = request.args.get('folder', False) == "true"
@ -1014,45 +975,43 @@ def pathchooser():
path = os.path.normpath(request.args.get('path', ""))
if os.path.isfile(path):
old_file = path
oldfile = path
path = os.path.dirname(path)
else:
old_file = ""
oldfile = ""
absolute = False
if os.path.isdir(path):
# if os.path.isabs(path):
cwd = os.path.realpath(path)
absolute = True
# else:
# cwd = os.path.relpath(path)
else:
cwd = os.getcwd()
cwd = os.path.normpath(os.path.realpath(cwd))
parent_dir = os.path.dirname(cwd)
parentdir = os.path.dirname(cwd)
if not absolute:
if os.path.realpath(cwd) == os.path.realpath("/"):
cwd = os.path.relpath(cwd)
else:
cwd = os.path.relpath(cwd) + os.path.sep
parent_dir = os.path.relpath(parent_dir) + os.path.sep
parentdir = os.path.relpath(parentdir) + os.path.sep
files = []
if os.path.realpath(cwd) == os.path.realpath("/") \
or (sys.platform == "win32" and os.path.realpath(cwd)[1:] == os.path.realpath("/")[1:]):
# we are in root
parent_dir = ""
if sys.platform == "win32":
files = get_drives(cwd)
if os.path.realpath(cwd) == os.path.realpath("/"):
parentdir = ""
try:
folders = os.listdir(cwd)
except Exception:
folders = []
files = []
for f in folders:
try:
sanitized_f = str(Markup.escape(f))
data = {"name": sanitized_f, "fullpath": os.path.join(cwd, sanitized_f)}
data = {"name": f, "fullpath": os.path.join(cwd, f)}
data["sort"] = data["fullpath"].lower()
except Exception:
continue
@ -1082,9 +1041,9 @@ def pathchooser():
context = {
"cwd": cwd,
"files": files,
"parentdir": parent_dir,
"parentdir": parentdir,
"type": browse_for,
"oldfile": old_file,
"oldfile": oldfile,
"absolute": absolute,
}
return json.dumps(context)
@ -1103,7 +1062,7 @@ def _config_checkbox_int(to_save, x):
def _config_string(to_save, x):
return config.set_from_dictionary(to_save, x, lambda y: y.strip().strip(u'\u200B\u200C\u200D\ufeff') if y else y)
return config.set_from_dictionary(to_save, x, lambda y: y.strip() if y else y)
def _configuration_gdrive_helper(to_save):
@ -1122,10 +1081,10 @@ def _configuration_gdrive_helper(to_save):
if not gdrive_secrets:
return _configuration_result(_('client_secrets.json Is Not Configured For Web Application'))
gdriveutils.update_settings(
gdrive_secrets['client_id'],
gdrive_secrets['client_secret'],
gdrive_secrets['redirect_uris'][0]
)
gdrive_secrets['client_id'],
gdrive_secrets['client_secret'],
gdrive_secrets['redirect_uris'][0]
)
# always show Google Drive settings, but in case of error deny support
new_gdrive_value = (not gdrive_error) and ("config_use_google_drive" in to_save)
@ -1142,12 +1101,12 @@ def _configuration_oauth_helper(to_save):
reboot_required = False
for element in oauthblueprints:
if to_save["config_" + str(element['id']) + "_oauth_client_id"] != element['oauth_client_id'] \
or to_save["config_" + str(element['id']) + "_oauth_client_secret"] != element['oauth_client_secret']:
or to_save["config_" + str(element['id']) + "_oauth_client_secret"] != element['oauth_client_secret']:
reboot_required = True
element['oauth_client_id'] = to_save["config_" + str(element['id']) + "_oauth_client_id"]
element['oauth_client_secret'] = to_save["config_" + str(element['id']) + "_oauth_client_secret"]
if to_save["config_" + str(element['id']) + "_oauth_client_id"] \
and to_save["config_" + str(element['id']) + "_oauth_client_secret"]:
and to_save["config_" + str(element['id']) + "_oauth_client_secret"]:
active_oauths += 1
element["active"] = 1
else:
@ -1177,6 +1136,7 @@ def _configuration_logfile_helper(to_save):
def _configuration_ldap_helper(to_save):
reboot_required = False
reboot_required |= _config_string(to_save, "config_ldap_provider_url")
reboot_required |= _config_int(to_save, "config_ldap_port")
reboot_required |= _config_int(to_save, "config_ldap_authentication")
reboot_required |= _config_string(to_save, "config_ldap_dn")
@ -1191,26 +1151,21 @@ def _configuration_ldap_helper(to_save):
reboot_required |= _config_string(to_save, "config_ldap_cert_path")
reboot_required |= _config_string(to_save, "config_ldap_key_path")
_config_string(to_save, "config_ldap_group_name")
address = urlparse(to_save.get("config_ldap_provider_url", ""))
to_save["config_ldap_provider_url"] = (address.hostname or address.path).strip("/")
reboot_required |= _config_string(to_save, "config_ldap_provider_url")
if to_save.get("config_ldap_serv_password_e", "") != "":
if to_save.get("config_ldap_serv_password", "") != "":
reboot_required |= 1
config.set_from_dictionary(to_save, "config_ldap_serv_password_e")
config.set_from_dictionary(to_save, "config_ldap_serv_password", base64.b64encode, encode='UTF-8')
config.save()
if not config.config_ldap_provider_url \
or not config.config_ldap_port \
or not config.config_ldap_dn \
or not config.config_ldap_user_object:
or not config.config_ldap_user_object:
return reboot_required, _configuration_result(_('Please Enter a LDAP Provider, '
'Port, DN and User Object Identifier'))
if config.config_ldap_authentication > constants.LDAP_AUTH_ANONYMOUS:
if config.config_ldap_authentication > constants.LDAP_AUTH_UNAUTHENTICATE:
if not config.config_ldap_serv_username or not bool(config.config_ldap_serv_password_e):
if not config.config_ldap_serv_username or not bool(config.config_ldap_serv_password):
return reboot_required, _configuration_result(_('Please Enter a LDAP Service Account and Password'))
else:
if not config.config_ldap_serv_username:
@ -1274,16 +1229,16 @@ def new_user():
content.default_language = config.config_default_language
return render_title_template("user_edit.html", new_user=1, content=content,
config=config, translations=translations,
languages=languages, title=_("Add New User"), page="newuser",
languages=languages, title=_(u"Add new user"), page="newuser",
kobo_support=kobo_support, registered_oauth=oauth_check)
@admi.route("/admin/mailsettings", methods=["GET"])
@admi.route("/admin/mailsettings")
@login_required
@admin_required
def edit_mailsettings():
content = config.get_mail_settings()
return render_title_template("email_edit.html", content=content, title=_("Edit Email Server Settings"),
return render_title_template("email_edit.html", content=content, title=_(u"Edit E-mail Server Settings"),
page="mailset", feature_support=feature_support)
@ -1302,7 +1257,7 @@ def update_mailsettings():
elif to_save.get("gmail"):
try:
config.mail_gmail_token = services.gmail.setup_gmail(config.mail_gmail_token)
flash(_("Success! Gmail Account Verified."), category="success")
flash(_(u"Gmail Account Verification Successful"), category="success")
except Exception as ex:
flash(str(ex), category="error")
log.error(ex)
@ -1311,9 +1266,8 @@ def update_mailsettings():
else:
_config_int(to_save, "mail_port")
_config_int(to_save, "mail_use_ssl")
if to_save.get("mail_password_e", ""):
_config_string(to_save, "mail_password_e")
_config_int(to_save, "mail_size", lambda y: int(y) * 1024 * 1024)
_config_string(to_save, "mail_password")
_config_int(to_save, "mail_size", lambda y: int(y)*1024*1024)
config.mail_server = to_save.get('mail_server', "").strip()
config.mail_from = to_save.get('mail_from', "").strip()
config.mail_login = to_save.get('mail_login', "").strip()
@ -1322,24 +1276,24 @@ def update_mailsettings():
except (OperationalError, InvalidRequestError) as e:
ub.session.rollback()
log.error_or_exception("Settings Database error: {}".format(e))
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
flash(_(u"Database error: %(error)s.", error=e.orig), category="error")
return edit_mailsettings()
except Exception as e:
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
flash(_(u"Database error: %(error)s.", error=e.orig), category="error")
return edit_mailsettings()
if to_save.get("test"):
if current_user.email:
result = send_test_mail(current_user.email, current_user.name)
if result is None:
flash(_("Test e-mail queued for sending to %(email)s, please check Tasks for result",
flash(_(u"Test e-mail queued for sending to %(email)s, please check Tasks for result",
email=current_user.email), category="info")
else:
flash(_("There was an error sending the Test e-mail: %(res)s", res=result), category="error")
flash(_(u"There was an error sending the Test e-mail: %(res)s", res=result), category="error")
else:
flash(_("Please configure your e-mail address first..."), category="error")
flash(_(u"Please configure your e-mail address first..."), category="error")
else:
flash(_("Email Server Settings updated"), category="success")
flash(_(u"E-mail server settings updated"), category="success")
return edit_mailsettings()
@ -1353,16 +1307,16 @@ def edit_scheduledtasks():
duration_field = list()
for n in range(24):
time_field.append((n, format_time(datetime_time(hour=n), format="short", )))
time_field.append((n, format_time(datetime_time(hour=n), format="short",)))
for n in range(5, 65, 5):
t = timedelta(hours=n // 60, minutes=n % 60)
duration_field.append((n, format_timedelta(t, threshold=.97)))
duration_field.append((n, format_timedelta(t, threshold=.9)))
return render_title_template("schedule_edit.html",
config=content,
starttime=time_field,
duration=duration_field,
title=_("Edit Scheduled Tasks Settings"))
title=_(u"Edit Scheduled Tasks Settings"))
@admi.route("/admin/scheduledtasks", methods=["POST"])
@ -1372,24 +1326,23 @@ def update_scheduledtasks():
error = False
to_save = request.form.to_dict()
if 0 <= int(to_save.get("schedule_start_time")) <= 23:
_config_int( to_save, "schedule_start_time")
_config_int(to_save, "schedule_start_time")
else:
flash(_("Invalid start time for task specified"), category="error")
flash(_(u"Invalid start time for task specified"), category="error")
error = True
if 0 < int(to_save.get("schedule_duration")) <= 60:
_config_int(to_save, "schedule_duration")
else:
flash(_("Invalid duration for task specified"), category="error")
flash(_(u"Invalid duration for task specified"), category="error")
error = True
_config_checkbox(to_save, "schedule_generate_book_covers")
_config_checkbox(to_save, "schedule_generate_series_covers")
_config_checkbox(to_save, "schedule_metadata_backup")
_config_checkbox(to_save, "schedule_reconnect")
if not error:
try:
config.save()
flash(_("Scheduled tasks settings updated"), category="success")
flash(_(u"Scheduled tasks settings updated"), category="success")
# Cancel any running tasks
schedule.end_scheduled_tasks()
@ -1399,7 +1352,7 @@ def update_scheduledtasks():
except IntegrityError:
ub.session.rollback()
log.error("An unknown error occurred while saving scheduled tasks settings")
flash(_("Oops! An unknown error occurred. Please try again later."), category="error")
flash(_(u"An unknown error occurred. Please try again later."), category="error")
except OperationalError:
ub.session.rollback()
log.error("Settings DB is not Writeable")
@ -1414,7 +1367,7 @@ def update_scheduledtasks():
def edit_user(user_id):
content = ub.session.query(ub.User).filter(ub.User.id == int(user_id)).first() # type: ub.User
if not content or (not config.config_anonbrowse and content.name == "Guest"):
flash(_("User not found"), category="error")
flash(_(u"User not found"), category="error")
return redirect(url_for('admin.admin'))
languages = calibre_db.speaking_language(return_all_languages=True)
translations = get_available_locale()
@ -1433,7 +1386,7 @@ def edit_user(user_id):
registered_oauth=oauth_check,
mail_configured=config.get_mail_server_configured(),
kobo_support=kobo_support,
title=_("Edit User %(nick)s", nick=content.name),
title=_(u"Edit User %(nick)s", nick=content.name),
page="edituser")
@ -1444,14 +1397,14 @@ def reset_user_password(user_id):
if current_user is not None and current_user.is_authenticated:
ret, message = reset_password(user_id)
if ret == 1:
log.debug("Password for user %s reset", message)
flash(_("Success! Password for user %(user)s reset", user=message), category="success")
log.debug(u"Password for user %s reset", message)
flash(_(u"Password for user %(user)s reset", user=message), category="success")
elif ret == 0:
log.error("An unknown error occurred. Please try again later.")
flash(_("Oops! An unknown error occurred. Please try again later."), category="error")
log.error(u"An unknown error occurred. Please try again later.")
flash(_(u"An unknown error occurred. Please try again later."), category="error")
else:
log.error("Please configure the SMTP mail settings.")
flash(_("Oops! Please configure the SMTP mail settings."), category="error")
log.error(u"Please configure the SMTP mail settings first...")
flash(_(u"Please configure the SMTP mail settings first..."), category="error")
return redirect(url_for('admin.admin'))
@ -1462,7 +1415,7 @@ def view_logfile():
logfiles = {0: logger.get_logfile(config.config_logfile),
1: logger.get_accesslogfile(config.config_access_logfile)}
return render_title_template("logviewer.html",
title=_("Logfile viewer"),
title=_(u"Logfile viewer"),
accesslog_enable=config.config_access_log,
log_enable=bool(config.config_logfile != logger.LOG_TO_STDOUT),
logfiles=logfiles,
@ -1512,7 +1465,7 @@ def download_debug():
@admin_required
def get_update_status():
if feature_support['updater']:
log.info("Update status requested")
log.info(u"Update status requested")
return updater_thread.get_available_updates(request.method)
else:
return ''
@ -1605,7 +1558,7 @@ def ldap_import_create_user(user, user_data):
ub.session.add(content)
try:
ub.session.commit()
return 1, None # increase no of users
return 1, None # increase no of users
except Exception as ex:
log.warning("Failed to create LDAP user: %s - %s", user, ex)
ub.session.rollback()
@ -1631,10 +1584,7 @@ def import_ldap_users():
imported = 0
for username in new_users:
if isinstance(username, bytes):
user = username.decode('utf-8')
else:
user = username
user = username.decode('utf-8')
if '=' in user:
# if member object field is empty take user object as filter
if config.config_ldap_member_user_object:
@ -1710,7 +1660,7 @@ def _db_configuration_update_helper():
except (OperationalError, InvalidRequestError) as e:
ub.session.rollback()
log.error_or_exception("Settings Database error: {}".format(e))
_db_configuration_result(_("Oops! Database Error: %(error)s.", error=e.orig), gdrive_error)
_db_configuration_result(_(u"Database error: %(error)s.", error=e.orig), gdrive_error)
try:
metadata_db = os.path.join(to_save['config_calibre_dir'], "metadata.db")
if config.config_use_google_drive and is_gdrive_ready() and not os.path.exists(metadata_db):
@ -1720,7 +1670,7 @@ def _db_configuration_update_helper():
return _db_configuration_result('{}'.format(ex), gdrive_error)
if db_change or not db_valid or not config.db_configured \
or config.config_calibre_dir != to_save["config_calibre_dir"]:
or config.config_calibre_dir != to_save["config_calibre_dir"]:
if not os.path.exists(metadata_db) or not to_save['config_calibre_dir']:
return _db_configuration_result(_('DB Location is not Valid, Please Enter Correct Path'), gdrive_error)
else:
@ -1742,10 +1692,7 @@ def _db_configuration_update_helper():
_config_string(to_save, "config_calibre_dir")
calibre_db.update_config(config)
if not os.access(os.path.join(config.config_calibre_dir, "metadata.db"), os.W_OK):
flash(_("DB is not Writeable"), category="warning")
_config_string(to_save, "config_calibre_split_dir")
config.config_calibre_split = to_save.get('config_calibre_split', 0) == "on"
calibre_db.update_config(config)
flash(_(u"DB is not Writeable"), category="warning")
config.save()
return _db_configuration_result(None, gdrive_error)
@ -1766,7 +1713,6 @@ def _configuration_update_helper():
_config_checkbox_int(to_save, "config_uploading")
_config_checkbox_int(to_save, "config_unicode_filename")
_config_checkbox_int(to_save, "config_embed_metadata")
# Reboot on config_anonbrowse with enabled LDAP, as decorators are changed in this case
reboot_required |= (_config_checkbox_int(to_save, "config_anonbrowse")
and config.config_login_type == constants.LOGIN_LDAP)
@ -1783,14 +1729,8 @@ def _configuration_update_helper():
constants.EXTENSIONS_UPLOAD = config.config_upload_formats.split(',')
_config_string(to_save, "config_calibre")
_config_string(to_save, "config_binariesdir")
_config_string(to_save, "config_converterpath")
_config_string(to_save, "config_kepubifypath")
if "config_binariesdir" in to_save:
calibre_status = helper.check_calibre(config.config_binariesdir)
if calibre_status:
return _configuration_result(calibre_status)
to_save["config_converterpath"] = get_calibre_binarypath("ebook-convert")
_config_string(to_save, "config_converterpath")
reboot_required |= _config_int(to_save, "config_login_type")
@ -1809,8 +1749,10 @@ def _configuration_update_helper():
# Goodreads configuration
_config_checkbox(to_save, "config_use_goodreads")
_config_string(to_save, "config_goodreads_api_key")
_config_string(to_save, "config_goodreads_api_secret")
if services.goodreads_support:
services.goodreads_support.connect(config.config_goodreads_api_key,
config.config_goodreads_api_secret,
config.config_use_goodreads)
_config_int(to_save, "config_updatechannel")
@ -1823,28 +1765,10 @@ def _configuration_update_helper():
if config.config_login_type == constants.LOGIN_OAUTH:
reboot_required |= _configuration_oauth_helper(to_save)
# logfile configuration
reboot, message = _configuration_logfile_helper(to_save)
if message:
return message
reboot_required |= reboot
# security configuration
_config_checkbox(to_save, "config_password_policy")
_config_checkbox(to_save, "config_password_number")
_config_checkbox(to_save, "config_password_lower")
_config_checkbox(to_save, "config_password_upper")
_config_checkbox(to_save, "config_password_character")
_config_checkbox(to_save, "config_password_special")
if 0 < int(to_save.get("config_password_min_length", "0")) < 41:
_config_int(to_save, "config_password_min_length")
else:
return _configuration_result(_('Password length has to be between 1 and 40'))
reboot_required |= _config_int(to_save, "config_session")
reboot_required |= _config_checkbox(to_save, "config_ratelimiter")
reboot_required |= _config_string(to_save, "config_limiter_uri")
reboot_required |= _config_string(to_save, "config_limiter_options")
# Rarfile Content configuration
_config_string(to_save, "config_rarfile_location")
if "config_rarfile_location" in to_save:
@ -1854,7 +1778,7 @@ def _configuration_update_helper():
except (OperationalError, InvalidRequestError) as e:
ub.session.rollback()
log.error_or_exception("Settings Database error: {}".format(e))
_configuration_result(_("Oops! Database Error: %(error)s.", error=e.orig))
_configuration_result(_(u"Database error: %(error)s.", error=e.orig))
config.save()
if reboot_required:
@ -1870,7 +1794,7 @@ def _configuration_result(error_flash=None, reboot=False):
config.load()
resp['result'] = [{'type': "danger", 'message': error_flash}]
else:
resp['result'] = [{'type': "success", 'message': _("Calibre-Web configuration updated")}]
resp['result'] = [{'type': "success", 'message': _(u"Calibre-Web configuration updated")}]
resp['reboot'] = reboot
resp['config_upload'] = config.config_upload_formats
return Response(json.dumps(resp), mimetype='application/json')
@ -1901,7 +1825,7 @@ def _db_configuration_result(error_flash=None, gdrive_error=None):
gdriveError=gdrive_error,
gdrivefolders=gdrivefolders,
feature_support=feature_support,
title=_("Database Configuration"), page="dbconfig")
title=_(u"Database Configuration"), page="dbconfig")
def _handle_new_user(to_save, content, languages, translations, kobo_support):
@ -1913,11 +1837,11 @@ def _handle_new_user(to_save, content, languages, translations, kobo_support):
content.sidebar_view |= constants.DETAIL_RANDOM
content.role = constants.selected_roles(to_save)
content.password = generate_password_hash(to_save["password"])
try:
if not to_save["name"] or not to_save["email"] or not to_save["password"]:
log.info("Missing entries on new user")
raise Exception(_("Oops! Please complete all fields."))
content.password = generate_password_hash(helper.valid_password(to_save.get("password", "")))
raise Exception(_(u"Please fill out all fields!"))
content.email = check_email(to_save["email"])
# Query username, if not existing, change
content.name = check_username(to_save["name"])
@ -1925,13 +1849,13 @@ def _handle_new_user(to_save, content, languages, translations, kobo_support):
content.kindle_mail = valid_email(to_save["kindle_mail"])
if config.config_public_reg and not check_valid_domain(content.email):
log.info("E-mail: {} for new user is not from valid domain".format(content.email))
raise Exception(_("E-mail is not from valid domain"))
raise Exception(_(u"E-mail is not from valid domain"))
except Exception as ex:
flash(str(ex), category="error")
return render_title_template("user_edit.html", new_user=1, content=content,
config=config,
translations=translations,
languages=languages, title=_("Add new user"), page="newuser",
languages=languages, title=_(u"Add new user"), page="newuser",
kobo_support=kobo_support, registered_oauth=oauth_check)
try:
content.allowed_tags = config.config_allowed_tags
@ -1942,17 +1866,17 @@ def _handle_new_user(to_save, content, languages, translations, kobo_support):
content.kobo_only_shelves_sync = to_save.get("kobo_only_shelves_sync", 0) == "on"
ub.session.add(content)
ub.session.commit()
flash(_("User '%(user)s' created", user=content.name), category="success")
flash(_(u"User '%(user)s' created", user=content.name), category="success")
log.debug("User {} created".format(content.name))
return redirect(url_for('admin.admin'))
except IntegrityError:
ub.session.rollback()
log.error("Found an existing account for {} or {}".format(content.name, content.email))
flash(_("Oops! An account already exists for this Email. or name."), category="error")
flash(_("Found an existing account for this e-mail address or name."), category="error")
except OperationalError as e:
ub.session.rollback()
log.error_or_exception("Settings Database error: {}".format(e))
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
flash(_(u"Database error: %(error)s.", error=e.orig), category="error")
def _delete_user(content):
@ -1980,10 +1904,10 @@ def _delete_user(content):
log.info("User {} deleted".format(content.name))
return _("User '%(nick)s' deleted", nick=content.name)
else:
# log.warning(_("Can't delete Guest User"))
log.warning(_("Can't delete Guest User"))
raise Exception(_("Can't delete Guest User"))
else:
# log.warning("No admin user remaining, can't delete user")
log.warning("No admin user remaining, can't delete user")
raise Exception(_("No admin user remaining, can't delete user"))
@ -2001,6 +1925,14 @@ def _handle_edit_user(to_save, content, languages, translations, kobo_support):
log.warning("No admin user remaining, can't remove admin role from {}".format(content.name))
flash(_("No admin user remaining, can't remove admin role"), category="error")
return redirect(url_for('admin.admin'))
if to_save.get("password"):
content.password = generate_password_hash(to_save["password"])
anonymous = content.is_anonymous
content.role = constants.selected_roles(to_save)
if anonymous:
content.role |= constants.ROLE_ANONYMOUS
else:
content.role &= ~constants.ROLE_ANONYMOUS
val = [int(k[5:]) for k in to_save if k.startswith('show_')]
sidebar, __ = get_sidebar_config()
@ -2028,20 +1960,8 @@ def _handle_edit_user(to_save, content, languages, translations, kobo_support):
if to_save.get("locale"):
content.locale = to_save["locale"]
try:
anonymous = content.is_anonymous
content.role = constants.selected_roles(to_save)
if anonymous:
content.role |= constants.ROLE_ANONYMOUS
else:
content.role &= ~constants.ROLE_ANONYMOUS
if to_save.get("password", ""):
content.password = generate_password_hash(helper.valid_password(to_save.get("password", "")))
new_email = valid_email(to_save.get("email", content.email))
if not new_email:
raise Exception(_("Email can't be empty and has to be a valid Email"))
if new_email != content.email:
content.email = check_email(new_email)
if to_save.get("email", content.email) != content.email:
content.email = check_email(to_save["email"])
# Query username, if not existing, change
if to_save.get("name", content.name) != content.name:
if to_save.get("name") == "Guest":
@ -2061,19 +1981,19 @@ def _handle_edit_user(to_save, content, languages, translations, kobo_support):
content=content,
config=config,
registered_oauth=oauth_check,
title=_("Edit User %(nick)s", nick=content.name),
title=_(u"Edit User %(nick)s", nick=content.name),
page="edituser")
try:
ub.session_commit()
flash(_("User '%(nick)s' updated", nick=content.name), category="success")
flash(_(u"User '%(nick)s' updated", nick=content.name), category="success")
except IntegrityError as ex:
ub.session.rollback()
log.error("An unknown error occurred while changing user: {}".format(str(ex)))
flash(_("Oops! An unknown error occurred. Please try again later."), category="error")
flash(_(u"An unknown error occurred. Please try again later."), category="error")
except OperationalError as e:
ub.session.rollback()
log.error_or_exception("Settings Database error: {}".format(e))
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
flash(_(u"Database error: %(error)s.", error=e.orig), category="error")
return ""


@ -1,8 +1,7 @@
from babel import negotiate_locale
from flask_babel import Babel, Locale
from babel.core import UnknownLocaleError
from flask import request
from flask_login import current_user
from flask import request, g
from . import logger
@ -10,12 +9,14 @@ log = logger.create()
babel = Babel()
@babel.localeselector
def get_locale():
# if a user is logged in, use the locale from the user settings
if current_user is not None and hasattr(current_user, "locale"):
# if the account is the guest account bypass the config lang settings
if current_user.name != 'Guest':
return current_user.locale
user = getattr(g, 'user', None)
if user is not None and hasattr(user, "locale"):
if user.name != 'Guest': # if the account is the guest account bypass the config lang settings
return user.locale
preferred = list()
if request.accept_languages:


@ -1,53 +0,0 @@
# -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from . import logger
from lxml.etree import ParserError
try:
# at least bleach 6.0 is needed -> incompatible change from list arguments to set arguments
from bleach import clean_text as clean_html
BLEACH = True
except ImportError:
try:
BLEACH = False
from nh3 import clean as clean_html
except ImportError:
try:
BLEACH = False
from lxml.html.clean import clean_html
except ImportError:
clean_html = None
log = logger.create()
def clean_string(unsafe_text, book_id=0):
try:
if BLEACH:
safe_text = clean_html(unsafe_text, tags=set(), attributes=set())
else:
safe_text = clean_html(unsafe_text)
except ParserError as e:
log.error("Comments of book {} are corrupted: {}".format(book_id, e))
safe_text = ""
except TypeError as e:
log.error("Comments can't be parsed, maybe 'lxml' is too new, try installing 'bleach': {}".format(e))
safe_text = ""
return safe_text
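A rough usage sketch for the helper above; the sample markup is invented, and the exact output depends on which sanitizer (bleach, nh3 or lxml) could be imported:

unsafe = '<p onclick="alert(1)">A <b>bold</b> claim<script>evil()</script></p>'
safe = clean_string(unsafe, book_id=42)
# with bleach every tag and attribute is stripped (tags=set(), attributes=set());
# with nh3/lxml the markup is cleaned instead, but no script or event handler
# survives into the stored comment either way
print(safe)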


@ -29,8 +29,8 @@ from .constants import DEFAULT_SETTINGS_FILE, DEFAULT_GDRIVE_FILE
def version_info():
if _NIGHTLY_VERSION[1].startswith('$Format'):
return "Calibre-Web version: %s - unknown git-clone" % _STABLE_VERSION['version'].replace("b", " Beta")
return "Calibre-Web version: %s -%s" % (_STABLE_VERSION['version'].replace("b", " Beta"), _NIGHTLY_VERSION[1])
return "Calibre-Web version: %s - unknown git-clone" % _STABLE_VERSION['version']
return "Calibre-Web version: %s -%s" % (_STABLE_VERSION['version'], _NIGHTLY_VERSION[1])
class CliParameter(object):
@ -48,11 +48,9 @@ class CliParameter(object):
'works only in combination with keyfile')
parser.add_argument('-k', metavar='path', help='path and name to SSL keyfile, e.g. /opt/test.key, '
'works only in combination with certfile')
parser.add_argument('-o', metavar='path', help='path and name of Calibre-Web logfile')
parser.add_argument('-v', '--version', action='version', help='Shows version number and exits Calibre-Web',
version=version_info())
parser.add_argument('-i', metavar='ip-address', help='Server IP-Address to listen')
parser.add_argument('-m', action='store_true', help='Use memory backend as limiter backend, use this parameter in case of a misconfigured backend')
parser.add_argument('-s', metavar='user:pass',
help='Sets specific username to new password and exits Calibre-Web')
parser.add_argument('-f', action='store_true', help='Flag is deprecated and will be removed in next version')
@ -62,7 +60,6 @@ class CliParameter(object):
parser.add_argument('-r', action='store_true', help='Enable public database reconnect route under /reconnect')
args = parser.parse_args()
self.logpath = args.o or ""
self.settings_path = args.p or os.path.join(_CONFIG_DIR, DEFAULT_SETTINGS_FILE)
self.gd_path = args.g or os.path.join(_CONFIG_DIR, DEFAULT_GDRIVE_FILE)
@ -99,8 +96,6 @@ class CliParameter(object):
if args.k == "":
self.keyfilepath = ""
# overwrite limiter backend
self.memory_backend = args.m or None
# dry run updater
self.dry_run = args.d or None
# enable reconnect endpoint for docker database reconnect


@ -36,12 +36,6 @@ try:
from comicapi import __version__ as comic_version
except ImportError:
comic_version = ''
try:
from comicapi.comicarchive import load_archive_plugins
import comicapi.utils
comicapi.utils.add_rar_paths()
except ImportError:
load_archive_plugins = None
except (ImportError, LookupError) as e:
log.debug('Cannot import comicapi, extracting comic metadata will not work: %s', e)
import zipfile
@ -52,12 +46,6 @@ except (ImportError, LookupError) as e:
except (ImportError, SyntaxError) as e:
log.debug('Cannot import rarfile, extracting cover files from rar files will not work: %s', e)
use_rarfile = False
try:
import py7zr
use_7zip = True
except (ImportError, SyntaxError) as e:
log.debug('Cannot import py7zr, extracting cover files from CB7 files will not work: %s', e)
use_7zip = False
use_comic_meta = False
@ -90,40 +78,23 @@ def _extract_cover_from_archive(original_file_extension, tmp_file_name, rar_exec
if len(ext) > 1:
extension = ext[1].lower()
if extension in cover.COVER_EXTENSIONS:
cover_data = cf.read([name])
cover_data = cf.read(name)
break
except Exception as ex:
log.error('Rarfile failed with error: {}'.format(ex))
elif original_file_extension.upper() == '.CB7' and use_7zip:
cf = py7zr.SevenZipFile(tmp_file_name)
for name in cf.getnames():
ext = os.path.splitext(name)
if len(ext) > 1:
extension = ext[1].lower()
if extension in cover.COVER_EXTENSIONS:
try:
cover_data = cf.read([name])[name].read()
except (py7zr.Bad7zFile, OSError) as ex:
log.error('7Zip file failed with error: {}'.format(ex))
break
log.debug('Rarfile failed with error: {}'.format(ex))
return cover_data, extension
def _extract_cover(tmp_file_name, original_file_extension, rar_executable):
cover_data = extension = None
if use_comic_meta:
try:
archive = ComicArchive(tmp_file_name, rar_exe_path=rar_executable)
except TypeError:
archive = ComicArchive(tmp_file_name)
name_list = archive.getPageNameList if hasattr(archive, "getPageNameList") else archive.get_page_name_list
for index, name in enumerate(name_list()):
archive = ComicArchive(tmp_file_name, rar_exe_path=rar_executable)
for index, name in enumerate(archive.getPageNameList()):
ext = os.path.splitext(name)
if len(ext) > 1:
extension = ext[1].lower()
if extension in cover.COVER_EXTENSIONS:
get_page = archive.getPage if hasattr(archive, "getPageNameList") else archive.get_page
cover_data = get_page(index)
cover_data = archive.getPage(index)
break
else:
cover_data, extension = _extract_cover_from_archive(original_file_extension, tmp_file_name, rar_executable)
@ -132,26 +103,17 @@ def _extract_cover(tmp_file_name, original_file_extension, rar_executable):
def get_comic_info(tmp_file_path, original_file_name, original_file_extension, rar_executable):
if use_comic_meta:
try:
archive = ComicArchive(tmp_file_path, rar_exe_path=rar_executable)
except TypeError:
load_archive_plugins(force=True, rar=rar_executable)
archive = ComicArchive(tmp_file_path)
if hasattr(archive, "seemsToBeAComicArchive"):
seems_archive = archive.seemsToBeAComicArchive
else:
seems_archive = archive.seems_to_be_a_comic_archive
if seems_archive():
has_metadata = archive.hasMetadata if hasattr(archive, "hasMetadata") else archive.has_metadata
if has_metadata(MetaDataStyle.CIX):
archive = ComicArchive(tmp_file_path, rar_exe_path=rar_executable)
if archive.seemsToBeAComicArchive():
if archive.hasMetadata(MetaDataStyle.CIX):
style = MetaDataStyle.CIX
elif has_metadata(MetaDataStyle.CBI):
elif archive.hasMetadata(MetaDataStyle.CBI):
style = MetaDataStyle.CBI
else:
style = None
read_metadata = archive.readMetadata if hasattr(archive, "readMetadata") else archive.read_metadata
loaded_metadata = read_metadata(style)
# if style is not None:
loaded_metadata = archive.readMetadata(style)
lang = loaded_metadata.language or ""
loaded_metadata.language = isoLanguages.get_lang3(lang)
@ -176,7 +138,7 @@ def get_comic_info(tmp_file_path, original_file_name, original_file_extension, r
file_path=tmp_file_path,
extension=original_file_extension,
title=original_file_name,
author='Unknown',
author=u'Unknown',
cover=_extract_cover(tmp_file_path, original_file_extension, rar_executable),
description="",
tags="",


@ -23,10 +23,6 @@ import json
from sqlalchemy import Column, String, Integer, SmallInteger, Boolean, BLOB, JSON
from sqlalchemy.exc import OperationalError
from sqlalchemy.sql.expression import text
from sqlalchemy import exists
from cryptography.fernet import Fernet
import cryptography.exceptions
from base64 import urlsafe_b64decode
try:
# Compatibility with sqlalchemy 2.0
from sqlalchemy.orm import declarative_base
@ -34,7 +30,6 @@ except ImportError:
from sqlalchemy.ext.declarative import declarative_base
from . import constants, logger
from .subproc_wrapper import process_wait
log = logger.create()
@ -61,8 +56,7 @@ class _Settings(_Base):
mail_port = Column(Integer, default=25)
mail_use_ssl = Column(SmallInteger, default=0)
mail_login = Column(String, default='mail@example.com')
mail_password_e = Column(String)
mail_password = Column(String)
mail_password = Column(String, default='mypassword')
mail_from = Column(String, default='automailer <mail@example.com>')
mail_size = Column(Integer, default=25*1024*1024)
mail_server_type = Column(SmallInteger, default=0)
@ -70,25 +64,24 @@ class _Settings(_Base):
config_calibre_dir = Column(String)
config_calibre_uuid = Column(String)
config_calibre_split = Column(Boolean, default=False)
config_calibre_split_dir = Column(String)
config_port = Column(Integer, default=constants.DEFAULT_PORT)
config_external_port = Column(Integer, default=constants.DEFAULT_PORT)
config_certfile = Column(String)
config_keyfile = Column(String)
config_trustedhosts = Column(String, default='')
config_calibre_web_title = Column(String, default='Calibre-Web')
config_calibre_web_title = Column(String, default=u'Calibre-Web')
config_books_per_page = Column(Integer, default=60)
config_random_books = Column(Integer, default=4)
config_authors_max = Column(Integer, default=0)
config_read_column = Column(Integer, default=0)
config_title_regex = Column(String, default=r'^(A|The|An|Der|Die|Das|Den|Ein|Eine|Einen|Dem|Des|Einem|Eines|Le|La|Les|L\'|Un|Une)\s+')
config_title_regex = Column(String, default=r'^(A|The|An|Der|Die|Das|Den|Ein|Eine|Einen|Dem|Des|Einem|Eines)\s+')
# config_mature_content_tags = Column(String, default='')
config_theme = Column(Integer, default=0)
config_log_level = Column(SmallInteger, default=logger.DEFAULT_LOG_LEVEL)
config_logfile = Column(String, default=logger.DEFAULT_LOG_FILE)
config_logfile = Column(String)
config_access_log = Column(SmallInteger, default=0)
config_access_logfile = Column(String, default=logger.DEFAULT_ACCESS_LOG)
config_access_logfile = Column(String)
config_uploading = Column(SmallInteger, default=0)
config_anonbrowse = Column(SmallInteger, default=0)
@ -114,6 +107,7 @@ class _Settings(_Base):
config_use_goodreads = Column(Boolean, default=False)
config_goodreads_api_key = Column(String)
config_goodreads_api_secret = Column(String)
config_register_email = Column(Boolean, default=False)
config_login_type = Column(Integer, default=0)
@ -123,8 +117,7 @@ class _Settings(_Base):
config_ldap_port = Column(SmallInteger, default=389)
config_ldap_authentication = Column(SmallInteger, default=constants.LDAP_AUTH_SIMPLE)
config_ldap_serv_username = Column(String, default='cn=admin,dc=example,dc=org')
config_ldap_serv_password_e = Column(String)
config_ldap_serv_password = Column(String)
config_ldap_serv_password = Column(String, default="")
config_ldap_encryption = Column(SmallInteger, default=0)
config_ldap_cacert_path = Column(String, default="")
config_ldap_cert_path = Column(String, default="")
@ -139,12 +132,10 @@ class _Settings(_Base):
config_kepubifypath = Column(String, default=None)
config_converterpath = Column(String, default=None)
config_binariesdir = Column(String, default=None)
config_calibre = Column(String)
config_rarfile_location = Column(String, default=None)
config_upload_formats = Column(String, default=','.join(constants.EXTENSIONS_UPLOAD))
config_unicode_filename = Column(Boolean, default=False)
config_embed_metadata = Column(Boolean, default=True)
config_updatechannel = Column(Integer, default=constants.UPDATE_STABLE)
@ -156,45 +147,29 @@ class _Settings(_Base):
schedule_generate_book_covers = Column(Boolean, default=False)
schedule_generate_series_covers = Column(Boolean, default=False)
schedule_reconnect = Column(Boolean, default=False)
schedule_metadata_backup = Column(Boolean, default=False)
config_password_policy = Column(Boolean, default=True)
config_password_min_length = Column(Integer, default=8)
config_password_number = Column(Boolean, default=True)
config_password_lower = Column(Boolean, default=True)
config_password_upper = Column(Boolean, default=True)
config_password_character = Column(Boolean, default=True)
config_password_special = Column(Boolean, default=True)
config_session = Column(Integer, default=1)
config_ratelimiter = Column(Boolean, default=True)
config_limiter_uri = Column(String, default="")
config_limiter_options = Column(String, default="")
def __repr__(self):
return self.__class__.__name__
# Class holds all application specific settings in calibre-web
class ConfigSQL(object):
class _ConfigSQL(object):
# pylint: disable=no-member
def __init__(self):
self.__dict__["dirty"] = list()
pass
def init_config(self, session, secret_key, cli):
def init_config(self, session, cli):
self._session = session
self._settings = None
self.db_configured = None
self.config_calibre_dir = None
self._fernet = Fernet(secret_key)
self.cli = cli
self.load()
self.cli = cli
change = False
if self.config_binariesdir == None: # pylint: disable=access-member-before-definition
if self.config_converterpath == None: # pylint: disable=access-member-before-definition
change = True
self.config_binariesdir = autodetect_calibre_binaries()
self.config_converterpath = autodetect_converter_binary(self.config_binariesdir)
self.config_converterpath = autodetect_calibre_binary()
if self.config_kepubifypath == None: # pylint: disable=access-member-before-definition
change = True
@ -318,10 +293,10 @@ class ConfigSQL(object):
setattr(self, field, new_value)
return True
def to_dict(self):
def toDict(self):
storage = {}
for k, v in self.__dict__.items():
if k[0] != '_' and not k.endswith("_e") and not k == "cli":
if k[0] != '_' and not k.endswith("password") and not k.endswith("secret") and not k == "cli":
storage[k] = v
return storage
@ -335,13 +310,7 @@ class ConfigSQL(object):
column = s.__class__.__dict__.get(k)
if column.default is not None:
v = column.default.arg
if k.endswith("_e") and v is not None:
try:
setattr(self, k, self._fernet.decrypt(v).decode())
except cryptography.fernet.InvalidToken:
setattr(self, k, "")
else:
setattr(self, k, v)
setattr(self, k, v)
have_metadata_db = bool(self.config_calibre_dir)
if have_metadata_db:
@ -349,37 +318,30 @@ class ConfigSQL(object):
have_metadata_db = os.path.isfile(db_file)
self.db_configured = have_metadata_db
constants.EXTENSIONS_UPLOAD = [x.lstrip().rstrip().lower() for x in self.config_upload_formats.split(',')]
from . import cli_param
if os.environ.get('FLASK_DEBUG'):
logfile = logger.setup(logger.LOG_TO_STDOUT, logger.logging.DEBUG)
else:
# pylint: disable=access-member-before-definition
logfile = logger.setup(cli_param.logpath or self.config_logfile, self.config_log_level)
if logfile != os.path.abspath(self.config_logfile):
if logfile != os.path.abspath(cli_param.logpath):
log.warning("Log path %s not valid, falling back to default", self.config_logfile)
logfile = logger.setup(self.config_logfile, self.config_log_level)
if logfile != self.config_logfile:
log.warning("Log path %s not valid, falling back to default", self.config_logfile)
self.config_logfile = logfile
s.config_logfile = logfile
self._session.merge(s)
try:
self._session.commit()
except OperationalError as e:
log.error('Database error: %s', e)
self._session.rollback()
self.__dict__["dirty"] = list()
def save(self):
"""Apply all configuration values to the underlying storage."""
s = self._read_from_storage() # type: _Settings
for k in self.dirty:
for k, v in self.__dict__.items():
if k[0] == '_':
continue
if hasattr(s, k):
if k.endswith("_e"):
setattr(s, k, self._fernet.encrypt(self.__dict__[k].encode()))
else:
setattr(s, k, self.__dict__[k])
setattr(s, k, v)
log.debug("_ConfigSQL updating storage")
self._session.merge(s)
@ -395,11 +357,9 @@ class ConfigSQL(object):
log.error(error)
log.warning("invalidating configuration")
self.db_configured = False
# self.config_calibre_dir = None
self.save()
def get_book_path(self):
return self.config_calibre_split_dir if self.config_calibre_split_dir else self.config_calibre_dir
def store_calibre_uuid(self, calibre_db, Library_table):
try:
calibre_uuid = calibre_db.session.query(Library_table).one_or_none()
@ -409,34 +369,8 @@ class ConfigSQL(object):
except AttributeError:
pass
def __setattr__(self, attr_name, attr_value):
super().__setattr__(attr_name, attr_value)
self.__dict__["dirty"].append(attr_name)
def _encrypt_fields(session, secret_key):
try:
session.query(exists().where(_Settings.mail_password_e)).scalar()
except OperationalError:
with session.bind.connect() as conn:
conn.execute(text("ALTER TABLE settings ADD column 'mail_password_e' String"))
conn.execute(text("ALTER TABLE settings ADD column 'config_ldap_serv_password_e' String"))
session.commit()
crypter = Fernet(secret_key)
settings = session.query(_Settings.mail_password, _Settings.config_ldap_serv_password).first()
if settings.mail_password:
session.query(_Settings).update(
{_Settings.mail_password_e: crypter.encrypt(settings.mail_password.encode())})
if settings.config_ldap_serv_password:
session.query(_Settings).update(
{_Settings.config_ldap_serv_password_e:
crypter.encrypt(settings.config_ldap_serv_password.encode())})
session.commit()
def _migrate_table(session, orm_class, secret_key=None):
if secret_key:
_encrypt_fields(session, secret_key)
def _migrate_table(session, orm_class):
changed = False
for column_name, column in orm_class.__dict__.items():
@ -474,35 +408,17 @@ def _migrate_table(session, orm_class, secret_key=None):
session.rollback()
def autodetect_calibre_binaries():
def autodetect_calibre_binary():
if sys.platform == "win32":
calibre_path = ["C:\\program files\\calibre\\",
"C:\\program files(x86)\\calibre\\",
"C:\\program files(x86)\\calibre2\\",
"C:\\program files\\calibre2\\"]
calibre_path = ["C:\\program files\\calibre\\ebook-convert.exe",
"C:\\program files(x86)\\calibre\\ebook-convert.exe",
"C:\\program files(x86)\\calibre2\\ebook-convert.exe",
"C:\\program files\\calibre2\\ebook-convert.exe"]
else:
calibre_path = ["/opt/calibre/"]
calibre_path = ["/opt/calibre/ebook-convert"]
for element in calibre_path:
supported_binary_paths = [os.path.join(element, binary)
for binary in constants.SUPPORTED_CALIBRE_BINARIES.values()]
if all(os.path.isfile(binary_path) and os.access(binary_path, os.X_OK)
for binary_path in supported_binary_paths):
values = [process_wait([binary_path, "--version"],
pattern=r'\(calibre (.*)\)') for binary_path in supported_binary_paths]
if all(values):
version = values[0].group(1)
log.debug("calibre version %s", version)
return element
return ""
def autodetect_converter_binary(calibre_path):
if sys.platform == "win32":
converter_path = os.path.join(calibre_path, "ebook-convert.exe")
else:
converter_path = os.path.join(calibre_path, "ebook-convert")
if calibre_path and os.path.isfile(converter_path) and os.access(converter_path, os.X_OK):
return converter_path
if os.path.isfile(element) and os.access(element, os.X_OK):
return element
return ""
@ -530,18 +446,22 @@ def autodetect_kepubify_binary():
return ""
def _migrate_database(session, secret_key):
def _migrate_database(session):
# make sure the table is created, if it does not exist
_Base.metadata.create_all(session.bind)
_migrate_table(session, _Settings, secret_key)
_migrate_table(session, _Settings)
_migrate_table(session, _Flask_Settings)
def load_configuration(session, secret_key):
_migrate_database(session, secret_key)
def load_configuration(conf, session, cli):
_migrate_database(session)
if not session.query(_Settings).count():
session.add(_Settings())
session.commit()
# conf = _ConfigSQL()
conf.init_config(session, cli)
# return conf
def get_flask_session_key(_session):
@ -551,25 +471,3 @@ def get_flask_session_key(_session):
_session.add(flask_settings)
_session.commit()
return flask_settings.flask_session_key
def get_encryption_key(key_path):
key_file = os.path.join(key_path, ".key")
generate = True
error = ""
if os.path.exists(key_file) and os.path.getsize(key_file) > 32:
with open(key_file, "rb") as f:
key = f.read()
try:
urlsafe_b64decode(key)
generate = False
except ValueError:
pass
if generate:
key = Fernet.generate_key()
try:
with open(key_file, "wb") as f:
f.write(key)
except PermissionError as e:
error = e
return key, error
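For context on the *_e columns and the .key file handled above, a minimal round-trip sketch of the Fernet scheme (the sample password is made up; Fernet comes from the cryptography package already imported in this file):

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # what get_encryption_key() persists in the ".key" file
crypter = Fernet(key)
token = crypter.encrypt("secret-smtp-password".encode())    # value stored in mail_password_e
print(crypter.decrypt(token).decode())                       # -> secret-smtp-password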


@ -34,8 +34,6 @@ UPDATER_AVAILABLE = True
# Base dir is parent of current file, necessary if called from different folder
BASE_DIR = os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), os.pardir))
# if executable file the files should be placed in the parent dir (parallel to the exe file)
STATIC_DIR = os.path.join(BASE_DIR, 'cps', 'static')
TEMPLATES_DIR = os.path.join(BASE_DIR, 'cps', 'templates')
TRANSLATIONS_DIR = os.path.join(BASE_DIR, 'cps', 'translations')
@ -51,9 +49,6 @@ if HOME_CONFIG:
CONFIG_DIR = os.environ.get('CALIBRE_DBPATH', home_dir)
else:
CONFIG_DIR = os.environ.get('CALIBRE_DBPATH', BASE_DIR)
if getattr(sys, 'frozen', False):
CONFIG_DIR = os.path.abspath(os.path.join(CONFIG_DIR, os.pardir))
DEFAULT_SETTINGS_FILE = "app.db"
DEFAULT_GDRIVE_FILE = "gdrive.db"
@ -149,18 +144,13 @@ del env_CALIBRE_PORT
EXTENSIONS_AUDIO = {'mp3', 'mp4', 'ogg', 'opus', 'wav', 'flac', 'm4a', 'm4b'}
EXTENSIONS_CONVERT_FROM = ['pdf', 'epub', 'mobi', 'azw3', 'docx', 'rtf', 'fb2', 'lit', 'lrf',
'txt', 'htmlz', 'rtf', 'odt', 'cbz', 'cbr', 'prc']
'txt', 'htmlz', 'rtf', 'odt', 'cbz', 'cbr']
EXTENSIONS_CONVERT_TO = ['pdf', 'epub', 'mobi', 'azw3', 'docx', 'rtf', 'fb2',
'lit', 'lrf', 'txt', 'htmlz', 'rtf', 'odt']
EXTENSIONS_UPLOAD = {'txt', 'pdf', 'epub', 'kepub', 'mobi', 'azw', 'azw3', 'cbr', 'cbz', 'cbt', 'cb7', 'djvu', 'djv',
EXTENSIONS_UPLOAD = {'txt', 'pdf', 'epub', 'kepub', 'mobi', 'azw', 'azw3', 'cbr', 'cbz', 'cbt', 'djvu',
'prc', 'doc', 'docx', 'fb2', 'html', 'rtf', 'lit', 'odt', 'mp3', 'mp4', 'ogg',
'opus', 'wav', 'flac', 'm4a', 'm4b'}
_extension = ""
if sys.platform == "win32":
_extension = ".exe"
SUPPORTED_CALIBRE_BINARIES = {binary:binary + _extension for binary in ["ebook-convert", "calibredb"]}
def has_flag(value, bit_flag):
return bit_flag == (bit_flag & (value or 0))
@ -173,12 +163,13 @@ def selected_roles(dictionary):
BookMeta = namedtuple('BookMeta', 'file_path, extension, title, author, cover, description, tags, series, '
'series_id, languages, publisher, pubdate, identifiers')
# python build process likes to have x.y.zbw -> b for beta and w a counting number
STABLE_VERSION = {'version': '0.6.22b'}
STABLE_VERSION = {'version': '0.6.19'}
NIGHTLY_VERSION = dict()
NIGHTLY_VERSION[0] = '$Format:%H$'
NIGHTLY_VERSION[1] = '$Format:%cI$'
# NIGHTLY_VERSION[0] = 'bb7d2c6273ae4560e83950d36d64533343623a57'
# NIGHTLY_VERSION[1] = '2018-09-09T10:13:08+02:00'
# CACHE
CACHE_TYPE_THUMBNAILS = 'thumbnails'
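The SUPPORTED_CALIBRE_BINARIES mapping above simply appends ".exe" on Windows, so a configured binary directory can be joined with a platform-specific file name roughly as in this sketch (the helper name is invented for illustration):

import os, sys

def calibre_binary_path(calibre_dir, name="ebook-convert"):
    # mirrors how the autodetection joins the binary directory with the executable name
    suffix = ".exe" if sys.platform == "win32" else ""
    return os.path.join(calibre_dir, name + suffix)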

cps/db.py

@ -111,77 +111,66 @@ class Identifiers(Base):
def format_type(self):
format_type = self.type.lower()
if format_type == 'amazon':
return "Amazon"
return u"Amazon"
elif format_type.startswith("amazon_"):
return "Amazon.{0}".format(format_type[7:])
return u"Amazon.{0}".format(format_type[7:])
elif format_type == "isbn":
return "ISBN"
return u"ISBN"
elif format_type == "doi":
return "DOI"
return u"DOI"
elif format_type == "douban":
return "Douban"
return u"Douban"
elif format_type == "goodreads":
return "Goodreads"
return u"Goodreads"
elif format_type == "babelio":
return "Babelio"
return u"Babelio"
elif format_type == "google":
return "Google Books"
return u"Google Books"
elif format_type == "kobo":
return "Kobo"
elif format_type == "barnesnoble":
return "Barnes & Noble"
return u"Kobo"
elif format_type == "litres":
return "ЛитРес"
return u"ЛитРес"
elif format_type == "issn":
return "ISSN"
return u"ISSN"
elif format_type == "isfdb":
return "ISFDB"
return u"ISFDB"
if format_type == "lubimyczytac":
return "Lubimyczytac"
if format_type == "databazeknih":
return "Databáze knih"
return u"Lubimyczytac"
else:
return self.type
def __repr__(self):
format_type = self.type.lower()
if format_type == "amazon" or format_type == "asin":
return "https://amazon.com/dp/{0}".format(self.val)
return u"https://amazon.com/dp/{0}".format(self.val)
elif format_type.startswith('amazon_'):
return "https://amazon.{0}/dp/{1}".format(format_type[7:], self.val)
return u"https://amazon.{0}/dp/{1}".format(format_type[7:], self.val)
elif format_type == "isbn":
return "https://www.worldcat.org/isbn/{0}".format(self.val)
return u"https://www.worldcat.org/isbn/{0}".format(self.val)
elif format_type == "doi":
return "https://dx.doi.org/{0}".format(self.val)
return u"https://dx.doi.org/{0}".format(self.val)
elif format_type == "goodreads":
return "https://www.goodreads.com/book/show/{0}".format(self.val)
return u"https://www.goodreads.com/book/show/{0}".format(self.val)
elif format_type == "babelio":
return "https://www.babelio.com/livres/titre/{0}".format(self.val)
return u"https://www.babelio.com/livres/titre/{0}".format(self.val)
elif format_type == "douban":
return "https://book.douban.com/subject/{0}".format(self.val)
return u"https://book.douban.com/subject/{0}".format(self.val)
elif format_type == "google":
return "https://books.google.com/books?id={0}".format(self.val)
return u"https://books.google.com/books?id={0}".format(self.val)
elif format_type == "kobo":
return "https://www.kobo.com/ebook/{0}".format(self.val)
elif format_type == "barnesnoble":
return "https://www.barnesandnoble.com/w/{0}".format(self.val)
return u"https://www.kobo.com/ebook/{0}".format(self.val)
elif format_type == "lubimyczytac":
return "https://lubimyczytac.pl/ksiazka/{0}/ksiazka".format(self.val)
return u"https://lubimyczytac.pl/ksiazka/{0}/ksiazka".format(self.val)
elif format_type == "litres":
return "https://www.litres.ru/{0}".format(self.val)
return u"https://www.litres.ru/{0}".format(self.val)
elif format_type == "issn":
return "https://portal.issn.org/resource/ISSN/{0}".format(self.val)
return u"https://portal.issn.org/resource/ISSN/{0}".format(self.val)
elif format_type == "isfdb":
return "http://www.isfdb.org/cgi-bin/pl.cgi?{0}".format(self.val)
elif format_type == "databazeknih":
return "https://www.databazeknih.cz/knihy/{0}".format(self.val)
return u"http://www.isfdb.org/cgi-bin/pl.cgi?{0}".format(self.val)
elif self.val.lower().startswith("javascript:"):
return quote(self.val)
elif self.val.lower().startswith("data:"):
link , __, __ = str.partition(self.val, ",")
return link
else:
return "{0}".format(self.val)
return u"{0}".format(self.val)
class Comments(Base):
@ -199,7 +188,7 @@ class Comments(Base):
return self.text
def __repr__(self):
return "<Comments({0})>".format(self.text)
return u"<Comments({0})>".format(self.text)
class Tags(Base):
@ -214,11 +203,8 @@ class Tags(Base):
def get(self):
return self.name
def __eq__(self, other):
return self.name == other
def __repr__(self):
return "<Tags('{0})>".format(self.name)
return u"<Tags('{0})>".format(self.name)
class Authors(Base):
@ -229,7 +215,7 @@ class Authors(Base):
sort = Column(String(collation='NOCASE'))
link = Column(String, nullable=False, default="")
def __init__(self, name, sort, link=""):
def __init__(self, name, sort, link):
self.name = name
self.sort = sort
self.link = link
@ -237,11 +223,8 @@ class Authors(Base):
def get(self):
return self.name
def __eq__(self, other):
return self.name == other
def __repr__(self):
return "<Authors('{0},{1}{2}')>".format(self.name, self.sort, self.link)
return u"<Authors('{0},{1}{2}')>".format(self.name, self.sort, self.link)
class Series(Base):
@ -258,11 +241,8 @@ class Series(Base):
def get(self):
return self.name
def __eq__(self, other):
return self.name == other
def __repr__(self):
return "<Series('{0},{1}')>".format(self.name, self.sort)
return u"<Series('{0},{1}')>".format(self.name, self.sort)
class Ratings(Base):
@ -277,11 +257,8 @@ class Ratings(Base):
def get(self):
return self.rating
def __eq__(self, other):
return self.rating == other
def __repr__(self):
return "<Ratings('{0}')>".format(self.rating)
return u"<Ratings('{0}')>".format(self.rating)
class Languages(Base):
@ -294,16 +271,13 @@ class Languages(Base):
self.lang_code = lang_code
def get(self):
if hasattr(self, "language_name"):
if self.language_name:
return self.language_name
else:
return self.lang_code
def __eq__(self, other):
return self.lang_code == other
def __repr__(self):
return "<Languages('{0}')>".format(self.lang_code)
return u"<Languages('{0}')>".format(self.lang_code)
class Publishers(Base):
@ -320,11 +294,8 @@ class Publishers(Base):
def get(self):
return self.name
def __eq__(self, other):
return self.name == other
def __repr__(self):
return "<Publishers('{0},{1}')>".format(self.name, self.sort)
return u"<Publishers('{0},{1}')>".format(self.name, self.sort)
class Data(Base):
@ -348,16 +319,7 @@ class Data(Base):
return self.name
def __repr__(self):
return "<Data('{0},{1}{2}{3}')>".format(self.book, self.format, self.uncompressed_size, self.name)
class Metadata_Dirtied(Base):
__tablename__ = 'metadata_dirtied'
id = Column(Integer, primary_key=True, autoincrement=True)
book = Column(Integer, ForeignKey('books.id'), nullable=False, unique=True)
def __init__(self, book):
self.book = book
return u"<Data('{0},{1}{2}{3}')>".format(self.book, self.format, self.uncompressed_size, self.name)
class Books(Base):
@ -402,7 +364,7 @@ class Books(Base):
self.has_cover = (has_cover != None)
def __repr__(self):
return "<Books('{0},{1}{2}{3}{4}{5}{6}{7}{8}')>".format(self.title, self.sort, self.author_sort,
return u"<Books('{0},{1}{2}{3}{4}{5}{6}{7}{8}')>".format(self.title, self.sort, self.author_sort,
self.timestamp, self.pubdate, self.series_index,
self.last_modified, self.path, self.has_cover)
@ -428,33 +390,6 @@ class CustomColumns(Base):
display_dict = json.loads(self.display)
return display_dict
def to_json(self, value, extra, sequence):
content = dict()
content['table'] = "custom_column_" + str(self.id)
content['column'] = "value"
content['datatype'] = self.datatype
content['is_multiple'] = None if not self.is_multiple else "|"
content['kind'] = "field"
content['name'] = self.name
content['search_terms'] = ['#' + self.label]
content['label'] = self.label
content['colnum'] = self.id
content['display'] = self.get_display_dict()
content['is_custom'] = True
content['is_category'] = self.datatype in ['text', 'rating', 'enumeration', 'series']
content['link_column'] = "value"
content['category_sort'] = "value"
content['is_csp'] = False
content['is_editable'] = self.editable
content['rec_index'] = sequence + 22 # toDo why ??
if isinstance(value, datetime):
content['#value#'] = {"__class__": "datetime.datetime", "__value__": value.strftime("%Y-%m-%dT%H:%M:%S+00:00")}
else:
content['#value#'] = value
content['#extra#'] = extra
content['is_multiple2'] = {} if not self.is_multiple else {"cache_to_list": "|", "ui_to_list": ",", "list_to_ui": ", "}
return json.dumps(content, ensure_ascii=False)
class AlchemyEncoder(json.JSONEncoder):
@ -667,7 +602,7 @@ class CalibreDB:
cls.session_factory = scoped_session(sessionmaker(autocommit=False,
autoflush=True,
bind=cls.engine, future=True))
bind=cls.engine))
for inst in cls.instances:
inst.init_session()
@ -706,18 +641,6 @@ class CalibreDB:
def get_book_format(self, book_id, file_format):
return self.session.query(Data).filter(Data.book == book_id).filter(Data.format == file_format).first()
def set_metadata_dirty(self, book_id):
if not self.session.query(Metadata_Dirtied).filter(Metadata_Dirtied.book == book_id).one_or_none():
self.session.add(Metadata_Dirtied(book_id))
def delete_dirty_metadata(self, book_id):
try:
self.session.query(Metadata_Dirtied).filter(Metadata_Dirtied.book == book_id).delete()
self.session.commit()
except (OperationalError) as e:
self.session.rollback()
log.error("Database error: {}".format(e))
# Language and content filters for displaying in the UI
def common_filters(self, allow_show_archived=False, return_all_languages=False):
if not allow_show_archived:
@ -843,7 +766,8 @@ class CalibreDB:
entries = list()
pagination = list()
try:
pagination = Pagination(page, pagesize, query.count())
pagination = Pagination(page, pagesize,
len(query.all()))
entries = query.order_by(*order).offset(off).limit(pagesize).all()
except Exception as ex:
log.error_or_exception(ex)
@ -853,6 +777,8 @@ class CalibreDB:
# Orders all Authors in the list according to authors sort
def order_authors(self, entries, list_return=False, combined=False):
# entries_copy = copy.deepcopy(entries)
# entries_copy =[]
for entry in entries:
if combined:
sort_authors = entry.Books.author_sort.split('&')
@ -1017,12 +943,7 @@ class CalibreDB:
title = title[len(prep):] + ', ' + prep
return title.strip()
try:
# sqlalchemy <1.4.24
conn = conn or self.session.connection().connection.driver_connection
except AttributeError:
# sqlalchemy >1.4.24 and sqlalchemy 2.0
conn = conn or self.session.connection().connection.connection
conn = conn or self.session.connection().connection.connection
try:
conn.create_function("title_sort", 1, _title_sort)
except sqliteOperationalError:
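The try/except around the raw connection above papers over differing SQLAlchemy attribute names when registering the custom title_sort() SQL function; the pattern in isolation looks like this sketch (session stands for the calibre library session, attribute names as probed in the diff):

def raw_sqlite_connection(session):
    fairy = session.connection().connection   # SQLAlchemy connection wrapper
    try:
        return fairy.driver_connection         # attribute name on some releases
    except AttributeError:
        return fairy.connection                # fallback name on the others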


@ -65,7 +65,7 @@ def send_debug():
file_list.remove(element)
memory_zip = BytesIO()
with zipfile.ZipFile(memory_zip, 'w', compression=zipfile.ZIP_DEFLATED) as zf:
zf.writestr('settings.txt', json.dumps(config.to_dict(), sort_keys=True, indent=2))
zf.writestr('settings.txt', json.dumps(config.toDict(), sort_keys=True, indent=2))
zf.writestr('libs.txt', json.dumps(collect_stats(), sort_keys=True, indent=2, cls=lazyEncoder))
for fp in file_list:
zf.write(fp, os.path.basename(fp))


@ -61,7 +61,7 @@ def dependency_check(optional=False):
deps = load_dependencies(optional)
for dep in deps:
try:
dep_version_int = [int(x) if x.isnumeric() else 0 for x in dep[0].split('.')]
dep_version_int = [int(x) for x in dep[0].split('.')]
low_check = [int(x) for x in dep[3].split('.')]
high_check = [int(x) for x in dep[5].split('.')]
except AttributeError:
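A worked example of the more tolerant version parsing introduced above: non-numeric parts such as a pre-release suffix fall back to 0 instead of raising an error.

version = "2.0.3b1"   # invented sample version string
parsed = [int(x) if x.isnumeric() else 0 for x in version.split('.')]
print(parsed)          # [2, 0, 0]; the old list comprehension would raise ValueError on "3b1"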

cps/editbooks.py Normal file → Executable file

@ -25,43 +25,29 @@ from datetime import datetime
import json
from shutil import copyfile
from uuid import uuid4
from markupsafe import escape, Markup # dependency of flask
from markupsafe import escape # dependency of flask
from functools import wraps
# from lxml.etree import ParserError
#try:
# # at least bleach 6.0 is needed -> incompatible change from list arguments to set arguments
# from bleach import clean_text as clean_html
# BLEACH = True
#except ImportError:
# try:
# BLEACH = False
# from nh3 import clean as clean_html
# except ImportError:
# try:
# BLEACH = False
# from lxml.html.clean import clean_html
# except ImportError:
# clean_html = None
try:
from lxml.html.clean import clean_html
except ImportError:
clean_html = None
from flask import Blueprint, request, flash, redirect, url_for, abort, Response
from flask import Blueprint, request, flash, redirect, url_for, abort, Markup, Response
from flask_babel import gettext as _
from flask_babel import lazy_gettext as N_
from flask_babel import get_locale
from flask_login import current_user, login_required
from sqlalchemy.exc import OperationalError, IntegrityError, InterfaceError
from sqlalchemy.exc import OperationalError, IntegrityError
from sqlalchemy.orm.exc import StaleDataError
from sqlalchemy.sql.expression import func
from . import constants, logger, isoLanguages, gdriveutils, uploader, helper, kobo_sync_status
from .clean_html import clean_string
from . import config, ub, db, calibre_db
from .services.worker import WorkerThread
from .tasks.upload import TaskUpload
from .render_template import render_title_template
from .usermanagement import login_required_if_no_ano
from .kobo_sync_status import change_archived_books
from .redirect import get_redirect_location
editbook = Blueprint('edit-book', __name__)
@ -98,7 +84,7 @@ def delete_book_from_details(book_id):
@editbook.route("/delete/<int:book_id>/<string:book_format>", methods=["POST"])
@login_required
def delete_book_ajax(book_id, book_format):
return delete_book_from_table(book_id, book_format, False, request.form.to_dict().get('location', ""))
return delete_book_from_table(book_id, book_format, False)
@editbook.route("/admin/book/<int:book_id>", methods=['GET'])
@ -121,7 +107,7 @@ def edit_book(book_id):
book = calibre_db.get_filtered_book(book_id, allow_show_archived=True)
# Book not found
if not book:
flash(_("Oops! Selected book is unavailable. File does not exist or is not accessible"),
flash(_(u"Oops! Selected book title is unavailable. File does not exist or is not accessible"),
category="error")
return redirect(url_for("web.index"))
@ -139,7 +125,7 @@ def edit_book(book_id):
edited_books_id = book.id
modify_date = True
title_author_error = helper.update_dir_structure(edited_books_id,
config.get_book_path(),
config.config_calibre_dir,
input_authors[0],
renamed_author=renamed)
if title_author_error:
@ -165,7 +151,7 @@ def edit_book(book_id):
if to_save.get("cover_url", None):
if not current_user.role_upload():
edit_error = True
flash(_("User has no rights to upload cover"), category="error")
flash(_(u"User has no rights to upload cover"), category="error")
if to_save["cover_url"].endswith('/static/generic_cover.jpg'):
book.has_cover = 0
else:
@ -217,7 +203,6 @@ def edit_book(book_id):
if modify_date:
book.last_modified = datetime.utcnow()
kobo_sync_status.remove_synced_book(edited_books_id, all=True)
calibre_db.set_metadata_dirty(book.id)
calibre_db.session.merge(book)
calibre_db.session.commit()
@ -237,10 +222,10 @@ def edit_book(book_id):
calibre_db.session.rollback()
flash(str(e), category="error")
return redirect(url_for('web.show_book', book_id=book.id))
except (OperationalError, IntegrityError, StaleDataError, InterfaceError) as e:
except (OperationalError, IntegrityError, StaleDataError) as e:
log.error_or_exception("Database error: {}".format(e))
calibre_db.session.rollback()
flash(_("Oops! Database Error: %(error)s.", error=e.orig if hasattr(e, "orig") else e), category="error")
flash(_(u"Database error: %(error)s.", error=e.orig), category="error")
return redirect(url_for('web.show_book', book_id=book.id))
except Exception as ex:
log.error_or_exception(ex)
@ -284,7 +269,7 @@ def upload():
meta.extension.lower())
else:
error = helper.update_dir_structure(book_id,
config.get_book_path(),
config.config_calibre_dir,
input_authors[0],
meta.file_path,
title_dir + meta.extension.lower(),
@ -292,8 +277,6 @@ def upload():
move_coverfile(meta, db_book)
if modify_date:
calibre_db.set_metadata_dirty(book_id)
# save data to database, reread data
calibre_db.session.commit()
@ -302,7 +285,7 @@ def upload():
if error:
flash(error, category="error")
link = '<a href="{}">{}</a>'.format(url_for('web.show_book', book_id=book_id), escape(title))
upload_text = N_("File %(file)s uploaded", file=link)
upload_text = N_(u"File %(file)s uploaded", file=link)
WorkerThread.add(current_user.name, TaskUpload(upload_text, escape(title)))
helper.add_book_to_thumbnail_cache(book_id)
@ -316,8 +299,7 @@ def upload():
except (OperationalError, IntegrityError, StaleDataError) as e:
calibre_db.session.rollback()
log.error_or_exception("Database error: {}".format(e))
flash(_("Oops! Database Error: %(error)s.", error=e.orig if hasattr(e, "orig") else e),
category="error")
flash(_(u"Database error: %(error)s.", error=e.orig), category="error")
return Response(json.dumps({"location": url_for("web.index")}), mimetype='application/json')
@ -330,19 +312,19 @@ def convert_bookformat(book_id):
book_format_to = request.form.get('book_format_to', None)
if (book_format_from is None) or (book_format_to is None):
flash(_("Source or destination format for conversion missing"), category="error")
flash(_(u"Source or destination format for conversion missing"), category="error")
return redirect(url_for('edit-book.show_edit_book', book_id=book_id))
log.info('converting: book id: %s from: %s to: %s', book_id, book_format_from, book_format_to)
rtn = helper.convert_book_format(book_id, config.get_book_path(), book_format_from.upper(),
rtn = helper.convert_book_format(book_id, config.config_calibre_dir, book_format_from.upper(),
book_format_to.upper(), current_user.name)
if rtn is None:
flash(_("Book successfully queued for converting to %(book_format)s",
flash(_(u"Book successfully queued for converting to %(book_format)s",
book_format=book_format_to),
category="success")
else:
flash(_("There was an error converting this book: %(res)s", res=rtn), category="error")
flash(_(u"There was an error converting this book: %(res)s", res=rtn), category="error")
return redirect(url_for('edit-book.show_edit_book', book_id=book_id))
@ -404,7 +386,7 @@ def edit_list_book(param):
elif param == 'title':
sort_param = book.sort
if handle_title_on_edit(book, vals.get('value', "")):
rename_error = helper.update_dir_structure(book.id, config.get_book_path())
rename_error = helper.update_dir_structure(book.id, config.config_calibre_dir)
if not rename_error:
ret = Response(json.dumps({'success': True, 'newValue': book.title}),
mimetype='application/json')
@ -422,7 +404,7 @@ def edit_list_book(param):
mimetype='application/json')
elif param == 'authors':
input_authors, __, renamed = handle_author_on_edit(book, vals['value'], vals.get('checkA', None) == "true")
rename_error = helper.update_dir_structure(book.id, config.get_book_path(), input_authors[0],
rename_error = helper.update_dir_structure(book.id, config.config_calibre_dir, input_authors[0],
renamed_author=renamed)
if not rename_error:
ret = Response(json.dumps({
@ -466,7 +448,7 @@ def edit_list_book(param):
calibre_db.session.rollback()
log.error_or_exception("Database error: {}".format(e))
ret = Response(json.dumps({'success': False,
'msg': 'Database error: {}'.format(e.orig if hasattr(e, "orig") else e)}),
'msg': 'Database error: {}'.format(e.orig)}),
mimetype='application/json')
return ret
@ -484,7 +466,7 @@ def get_sorted_entry(field, bookid):
if field == 'sort':
return json.dumps({'sort': book.title})
if field == 'author_sort':
return json.dumps({'authors': " & ".join([a.name for a in calibre_db.order_authors([book])])})
return json.dumps({'author_sort': book.author})
return ""
@ -526,10 +508,10 @@ def merge_list_book():
for element in from_book.data:
if element.format not in to_file:
# create new data entry with: book_id, book_format, uncompressed_size, name
filepath_new = os.path.normpath(os.path.join(config.get_book_path(),
filepath_new = os.path.normpath(os.path.join(config.config_calibre_dir,
to_book.path,
to_name + "." + element.format.lower()))
filepath_old = os.path.normpath(os.path.join(config.get_book_path(),
filepath_old = os.path.normpath(os.path.join(config.config_calibre_dir,
from_book.path,
element.name + "." + element.format.lower()))
copyfile(filepath_old, filepath_new)
@ -569,16 +551,15 @@ def table_xchange_author_title():
if edited_books_id:
# toDo: Handle error
edit_error = helper.update_dir_structure(edited_books_id, config.get_book_path(), input_authors[0],
edit_error = helper.update_dir_structure(edited_books_id, config.config_calibre_dir, input_authors[0],
renamed_author=renamed)
if modify_date:
book.last_modified = datetime.utcnow()
calibre_db.set_metadata_dirty(book.id)
try:
calibre_db.session.commit()
except (OperationalError, IntegrityError, StaleDataError) as e:
calibre_db.session.rollback()
log.error_or_exception("Database error: {}".format(e))
log.error_or_exception("Database error: %s", e)
return json.dumps({'success': False})
if config.config_use_google_drive:
@ -588,9 +569,9 @@ def table_xchange_author_title():
def merge_metadata(to_save, meta):
if to_save.get('author_name', "") == _('Unknown'):
if to_save.get('author_name', "") == _(u'Unknown'):
to_save['author_name'] = ''
if to_save.get('book_title', "") == _('Unknown'):
if to_save.get('book_title', "") == _(u'Unknown'):
to_save['book_title'] = ''
for s_field, m_field in [
('tags', 'tags'), ('author_name', 'author'), ('series', 'series'),
@ -612,8 +593,6 @@ def identifier_list(to_save, book):
val_key = id_val_prefix + type_key[len(id_type_prefix):]
if val_key not in to_save.keys():
continue
if to_save[val_key].startswith("data:"):
to_save[val_key], __, __ = str.partition(to_save[val_key], ",")
result.append(db.Identifiers(to_save[val_key], type_value, book.id))
return result
@ -628,7 +607,7 @@ def prepare_authors(authr):
# we have all author names now
if input_authors == ['']:
input_authors = [_('Unknown')] # prevent empty Author
input_authors = [_(u'Unknown')] # prevent empty Author
renamed = list()
for in_aut in input_authors:
@ -645,11 +624,11 @@ def prepare_authors(authr):
def prepare_authors_on_upload(title, authr):
if title != _('Unknown') and authr != _('Unknown'):
if title != _(u'Unknown') and authr != _(u'Unknown'):
entry = calibre_db.check_exists_book(authr, title)
if entry:
log.info("Uploaded book probably exists in library")
flash(_("Uploaded book probably exists in the library, consider to change before upload new: ")
flash(_(u"Uploaded book probably exists in the library, consider to change before upload new: ")
+ Markup(render_title_template('book_exists_flash.html', entry=entry)), category="warning")
input_authors, renamed = prepare_authors(authr)
@ -704,7 +683,7 @@ def create_book_on_upload(modify_date, meta):
modify_date |= edit_book_languages(meta.languages, db_book, upload_mode=True, invalid=invalid)
if invalid:
for lang in invalid:
flash(_("'%(langname)s' is not a valid language", langname=lang), category="warning")
flash(_(u"'%(langname)s' is not a valid language", langname=lang), category="warning")
# handle tags
modify_date |= edit_book_tags(meta.tags, db_book)
@ -754,7 +733,7 @@ def file_handling_on_upload(requested_file):
meta = uploader.upload(requested_file, config.config_rarfile_location)
except (IOError, OSError):
log.error("File %s could not saved to temp dir", requested_file.filename)
flash(_("File %(filename)s could not saved to temp dir",
flash(_(u"File %(filename)s could not saved to temp dir",
filename=requested_file.filename), category="error")
return None, Response(json.dumps({"location": url_for("web.index")}), mimetype='application/json')
return meta, None
@ -766,7 +745,7 @@ def move_coverfile(meta, db_book):
cover_file = meta.cover
else:
cover_file = os.path.join(constants.STATIC_DIR, 'generic_cover.jpg')
new_cover_path = os.path.join(config.get_book_path(), db_book.path)
new_cover_path = os.path.join(config.config_calibre_dir, db_book.path)
try:
os.makedirs(new_cover_path, exist_ok=True)
copyfile(cover_file, os.path.join(new_cover_path, "cover.jpg"))
@ -774,7 +753,7 @@ def move_coverfile(meta, db_book):
os.unlink(meta.cover)
except OSError as e:
log.error("Failed to move cover file %s: %s", new_cover_path, e)
flash(_("Failed to Move Cover File %(file)s: %(error)s", file=new_cover_path,
flash(_(u"Failed to Move Cover File %(file)s: %(error)s", file=new_cover_path,
error=e),
category="error")
@ -788,7 +767,7 @@ def delete_whole_book(book_id, book):
# check if only this book links to:
# author, language, series, tags, custom columns
modify_database_object([''], book.authors, db.Authors, calibre_db.session, 'author')
modify_database_object([u''], book.authors, db.Authors, calibre_db.session, 'author')
modify_database_object([u''], book.tags, db.Tags, calibre_db.session, 'tags')
modify_database_object([u''], book.series, db.Series, calibre_db.session, 'series')
modify_database_object([u''], book.languages, db.Languages, calibre_db.session, 'languages')
@ -825,7 +804,7 @@ def delete_whole_book(book_id, book):
calibre_db.session.query(db.Books).filter(db.Books.id == book_id).delete()
def render_delete_book_result(book_format, json_response, warning, book_id, location=""):
def render_delete_book_result(book_format, json_response, warning, book_id):
if book_format:
if json_response:
return json.dumps([warning, {"location": url_for("edit-book.show_edit_book", book_id=book_id),
@ -837,22 +816,22 @@ def render_delete_book_result(book_format, json_response, warning, book_id, loca
return redirect(url_for('edit-book.show_edit_book', book_id=book_id))
else:
if json_response:
return json.dumps([warning, {"location": get_redirect_location(location, "web.index"),
return json.dumps([warning, {"location": url_for('web.index'),
"type": "success",
"format": book_format,
"message": _('Book Successfully Deleted')}])
else:
flash(_('Book Successfully Deleted'), category="success")
return redirect(get_redirect_location(location, "web.index"))
return redirect(url_for('web.index'))
def delete_book_from_table(book_id, book_format, json_response, location=""):
def delete_book_from_table(book_id, book_format, json_response):
warning = {}
if current_user.role_delete_books():
book = calibre_db.get_book(book_id)
if book:
try:
result, error = helper.delete_book(book, config.get_book_path(), book_format=book_format.upper())
result, error = helper.delete_book(book, config.config_calibre_dir, book_format=book_format.upper())
if not result:
if json_response:
return json.dumps([{"location": url_for("edit-book.show_edit_book", book_id=book_id),
@ -893,7 +872,7 @@ def delete_book_from_table(book_id, book_format, json_response, location=""):
else:
# book not found
log.error('Book with id "%s" could not be deleted: not found', book_id)
return render_delete_book_result(book_format, json_response, warning, book_id, location)
return render_delete_book_result(book_format, json_response, warning, book_id)
message = _("You are missing permissions to delete books")
if json_response:
return json.dumps({"location": url_for("edit-book.show_edit_book", book_id=book_id),
@ -909,7 +888,7 @@ def render_edit_book(book_id):
cc = calibre_db.session.query(db.CustomColumns).filter(db.CustomColumns.datatype.notin_(db.cc_exceptions)).all()
book = calibre_db.get_filtered_book(book_id, allow_show_archived=True)
if not book:
flash(_("Oops! Selected book is unavailable. File does not exist or is not accessible"),
flash(_(u"Oops! Selected book title is unavailable. File does not exist or is not accessible"),
category="error")
return redirect(url_for("web.index"))
@ -944,7 +923,7 @@ def render_edit_book(book_id):
if kepub_possible:
allowed_conversion_formats.append('kepub')
return render_title_template('book_edit.html', book=book, authors=author_names, cc=cc,
title=_("edit metadata"), page="editbook",
title=_(u"edit metadata"), page="editbook",
conversion_formats=allowed_conversion_formats,
config=config,
source_formats=valid_source_formats)
@ -1005,18 +984,7 @@ def edit_book_series_index(series_index, book):
def edit_book_comments(comments, book):
modify_date = False
if comments:
comments = clean_string(comments, book.id)
#try:
# if BLEACH:
# comments = clean_html(comments, tags=set(), attributes=set())
# else:
# comments = clean_html(comments)
#except ParserError as e:
# log.error("Comments of book {} are corrupted: {}".format(book.id, e))
# comments = ""
#except TypeError as e:
# log.error("Comments can't be parsed, maybe 'lxml' is too new, try installing 'bleach': {}".format(e))
# comments = ""
comments = clean_html(comments)
if len(book.comments):
if book.comments[0].text != comments:
book.comments[0].text = comments
@ -1040,7 +1008,7 @@ def edit_book_languages(languages, book, upload_mode=False, invalid=None):
if isinstance(invalid, list):
invalid.append(lang)
else:
raise ValueError(_("'%(langname)s' is not a valid language", langname=lang))
raise ValueError(_(u"'%(langname)s' is not a valid language", langname=lang))
# ToDo: Not working correct
if upload_mode and len(input_l) == 1:
# If the language of the file is excluded from the users view, it's not imported, to allow the user to view
@ -1074,19 +1042,7 @@ def edit_cc_data_value(book_id, book, c, to_save, cc_db_value, cc_string):
elif c.datatype == 'comments':
to_save[cc_string] = Markup(to_save[cc_string]).unescape()
if to_save[cc_string]:
to_save[cc_string] = clean_string(to_save[cc_string], book_id)
#try:
# if BLEACH:
# to_save[cc_string] = clean_html(to_save[cc_string], tags=set(), attributes=set())
# else:
# to_save[cc_string] = clean_html(to_save[cc_string])
#except ParserError as e:
# log.error("Customs Comments of book {} are corrupted: {}".format(book_id, e))
# to_save[cc_string] = ""
#except TypeError as e:
# to_save[cc_string] = ""
# log.error("Customs Comments can't be parsed, maybe 'lxml' is too new, "
# "try installing 'bleach': {}".format(e))
to_save[cc_string] = clean_html(to_save[cc_string])
elif c.datatype == 'datetime':
try:
to_save[cc_string] = datetime.strptime(to_save[cc_string], "%Y-%m-%d")
@ -1163,10 +1119,9 @@ def edit_cc_data(book_id, book, to_save, cc):
cc_db_value = None
if to_save[cc_string].strip():
if c.datatype in ['int', 'bool', 'float', "datetime", "comments"]:
change, to_save = edit_cc_data_value(book_id, book, c, to_save, cc_db_value, cc_string)
changed, to_save = edit_cc_data_value(book_id, book, c, to_save, cc_db_value, cc_string)
else:
change, to_save = edit_cc_data_string(book, c, to_save, cc_db_value, cc_string)
changed |= change
changed, to_save = edit_cc_data_string(book, c, to_save, cc_db_value, cc_string)
else:
if cc_db_value is not None:
# remove old cc_val
@ -1195,7 +1150,7 @@ def upload_single_file(file_request, book, book_id):
# check for empty request
if requested_file.filename != '':
if not current_user.role_upload():
flash(_("User has no rights to upload additional file formats"), category="error")
flash(_(u"User has no rights to upload additional file formats"), category="error")
return False
if '.' in requested_file.filename:
file_ext = requested_file.filename.rsplit('.', 1)[-1].lower()
@ -1208,7 +1163,7 @@ def upload_single_file(file_request, book, book_id):
return False
file_name = book.path.rsplit('/', 1)[-1]
filepath = os.path.normpath(os.path.join(config.get_book_path(), book.path))
filepath = os.path.normpath(os.path.join(config.config_calibre_dir, book.path))
saved_filename = os.path.join(filepath, file_name + '.' + file_ext)
# check if file path exists, otherwise create it, copy file to calibre path and delete temp file
@ -1216,12 +1171,12 @@ def upload_single_file(file_request, book, book_id):
try:
os.makedirs(filepath)
except OSError:
flash(_("Failed to create path %(path)s (Permission denied).", path=filepath), category="error")
flash(_(u"Failed to create path %(path)s (Permission denied).", path=filepath), category="error")
return False
try:
requested_file.save(saved_filename)
except OSError:
flash(_("Failed to store file %(file)s.", file=saved_filename), category="error")
flash(_(u"Failed to store file %(file)s.", file=saved_filename), category="error")
return False
file_size = os.path.getsize(saved_filename)
@ -1239,18 +1194,17 @@ def upload_single_file(file_request, book, book_id):
except (OperationalError, IntegrityError, StaleDataError) as e:
calibre_db.session.rollback()
log.error_or_exception("Database error: {}".format(e))
flash(_("Oops! Database Error: %(error)s.", error=e.orig if hasattr(e, "orig") else e),
category="error")
flash(_(u"Database error: %(error)s.", error=e.orig), category="error")
return False # return redirect(url_for('web.show_book', book_id=book.id))
# Queue uploader info
link = '<a href="{}">{}</a>'.format(url_for('web.show_book', book_id=book.id), escape(book.title))
upload_text = N_("File format %(ext)s added to %(book)s", ext=file_ext.upper(), book=link)
upload_text = N_(u"File format %(ext)s added to %(book)s", ext=file_ext.upper(), book=link)
WorkerThread.add(current_user.name, TaskUpload(upload_text, escape(book.title)))
return uploader.process(
saved_filename, *os.path.splitext(requested_file.filename),
rar_executable=config.config_rarfile_location)
rarExecutable=config.config_rarfile_location)
return None
@ -1260,7 +1214,7 @@ def upload_cover(cover_request, book):
# check for empty request
if requested_file.filename != '':
if not current_user.role_upload():
flash(_("User has no rights to upload cover"), category="error")
flash(_(u"User has no rights to upload cover"), category="error")
return False
ret, message = helper.save_cover(requested_file, book.path)
if ret is True:
@ -1284,18 +1238,18 @@ def handle_title_on_edit(book, book_title):
def handle_author_on_edit(book, author_name, update_stored=True):
change = False
# handle author(s)
input_authors, renamed = prepare_authors(author_name)
# change |= modify_database_object(input_authors, book.authors, db.Authors, calibre_db.session, 'author')
change = modify_database_object(input_authors, book.authors, db.Authors, calibre_db.session, 'author')
# Search for each author if author is in database, if not, author name and sorted author name is generated new
# everything then is assembled for sorted author field in database
sort_authors_list = list()
for inp in input_authors:
stored_author = calibre_db.session.query(db.Authors).filter(db.Authors.name == inp).first()
if not stored_author:
stored_author = helper.get_sorted_author(inp.replace('|', ','))
stored_author = helper.get_sorted_author(inp)
else:
stored_author = stored_author.sort
sort_authors_list.append(helper.get_sorted_author(stored_author))
@ -1303,9 +1257,6 @@ def handle_author_on_edit(book, author_name, update_stored=True):
if book.author_sort != sort_authors and update_stored:
book.author_sort = sort_authors
change = True
change |= modify_database_object(input_authors, book.authors, db.Authors, calibre_db.session, 'author')
return input_authors, change, renamed
@ -1313,15 +1264,14 @@ def search_objects_remove(db_book_object, db_type, input_elements):
del_elements = []
for c_elements in db_book_object:
found = False
#if db_type == 'languages':
# type_elements = c_elements.lang_code
if db_type == 'custom':
if db_type == 'languages':
type_elements = c_elements.lang_code
elif db_type == 'custom':
type_elements = c_elements.value
else:
# type_elements = c_elements.name
type_elements = c_elements
type_elements = c_elements.name
for inp_element in input_elements:
if type_elements == inp_element:
if inp_element.lower() == type_elements.lower():
found = True
break
# if the element was not found in the new list, add it to remove list
@ -1335,11 +1285,13 @@ def search_objects_add(db_book_object, db_type, input_elements):
for inp_element in input_elements:
found = False
for c_elements in db_book_object:
if db_type == 'custom':
if db_type == 'languages':
type_elements = c_elements.lang_code
elif db_type == 'custom':
type_elements = c_elements.value
else:
type_elements = c_elements
if type_elements == inp_element:
type_elements = c_elements.name
if inp_element == type_elements:
found = True
break
if not found:
@ -1355,7 +1307,6 @@ def remove_objects(db_book_object, db_session, del_elements):
changed = True
if len(del_element.books) == 0:
db_session.delete(del_element)
db_session.flush()
return changed
@ -1369,34 +1320,27 @@ def add_objects(db_book_object, db_object, db_session, db_type, add_elements):
db_filter = db_object.name
for add_element in add_elements:
# check if an element with that name exists
changed = True
# db_session.query(db.Tags).filter((func.lower(db.Tags.name).ilike("GênOt"))).all()
db_element = db_session.query(db_object).filter((func.lower(db_filter).ilike(add_element))).first()
# db_element = db_session.query(db_object).filter(func.lower(db_filter) == add_element.lower()).first()
db_element = db_session.query(db_object).filter(db_filter == add_element).first()
# if no element is found add it
if db_type == 'author':
new_element = db_object(add_element, helper.get_sorted_author(add_element.replace('|', ',')), "")
elif db_type == 'series':
new_element = db_object(add_element, add_element)
elif db_type == 'custom':
new_element = db_object(value=add_element)
elif db_type == 'publisher':
new_element = db_object(add_element, None)
else: # db_type should be tag or language
new_element = db_object(add_element)
if db_element is None:
if db_type == 'author':
new_element = db_object(add_element, helper.get_sorted_author(add_element.replace('|', ',')))
elif db_type == 'series':
new_element = db_object(add_element, add_element)
elif db_type == 'custom':
new_element = db_object(value=add_element)
elif db_type == 'publisher':
new_element = db_object(add_element, None)
else: # db_type should be tag or language
new_element = db_object(add_element)
changed = True
db_session.add(new_element)
db_book_object.append(new_element)
else:
db_no_case = db_session.query(db_object).filter(db_filter == add_element).first()
if db_no_case:
# check for new case of element
db_element = create_objects_for_addition(db_element, add_element, db_type)
else:
db_element = create_objects_for_addition(db_element, add_element, db_type)
db_element = create_objects_for_addition(db_element, add_element, db_type)
# add element to book
changed = True
db_book_object.append(db_element)
return changed
@ -1431,24 +1375,13 @@ def modify_database_object(input_elements, db_book_object, db_object, db_session
if not isinstance(input_elements, list):
raise TypeError(str(input_elements) + " should be passed as a list")
input_elements = [x for x in input_elements if x != '']
changed = False
# If elements are renamed (upper lower case), rename it
for rec_a, rec_b in zip(db_book_object, input_elements):
if db_type == "custom":
if rec_a.value.casefold() == rec_b.casefold() and rec_a.value != rec_b:
create_objects_for_addition(rec_a, rec_b, db_type)
else:
if rec_a.get().casefold() == rec_b.casefold() and rec_a.get() != rec_b:
create_objects_for_addition(rec_a, rec_b, db_type)
# we have all input element (authors, series, tags) names now
# we have all input element (authors, series, tags) names now
# 1. search for elements to remove
del_elements = search_objects_remove(db_book_object, db_type, input_elements)
# 2. search for elements that need to be added
add_elements = search_objects_add(db_book_object, db_type, input_elements)
# if there are elements to remove, we remove them now
changed |= remove_objects(db_book_object, db_session, del_elements)
changed = remove_objects(db_book_object, db_session, del_elements)
# if there are elements to add, we add them now!
if len(add_elements) > 0:
changed |= add_objects(db_book_object, db_object, db_session, db_type, add_elements)
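
Note on the add_objects() change above: on master, an existing tag/author/series row is looked up case-insensitively (func.lower(...).ilike(...)) before a new row is created, so e.g. "fantasy" reuses an existing "Fantasy" tag instead of duplicating it. A minimal, self-contained sketch of that lookup pattern; the table, names and data are illustrative and not taken from the diff:

try:
    from sqlalchemy.orm import declarative_base
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import create_engine, Column, Integer, String, func
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Tags(Base):
    __tablename__ = 'tags'
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Tags(name='Fantasy'))
session.commit()

add_element = 'fantasy'
# case-insensitive match, mirroring the master-side filter in add_objects()
db_element = session.query(Tags).filter(func.lower(Tags.name).ilike(add_element)).first()
print(db_element.name if db_element else None)  # prints "Fantasy": the existing row is reused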


@ -1,63 +0,0 @@
# -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2024 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from uuid import uuid4
import os
from .file_helper import get_temp_dir
from .subproc_wrapper import process_open
from . import logger, config
from .constants import SUPPORTED_CALIBRE_BINARIES
log = logger.create()
def do_calibre_export(book_id, book_format):
try:
quotes = [3, 5, 7, 9]
tmp_dir = get_temp_dir()
calibredb_binarypath = get_calibre_binarypath("calibredb")
temp_file_name = str(uuid4())
my_env = os.environ.copy()
if config.config_calibre_split:
my_env['CALIBRE_OVERRIDE_DATABASE_PATH'] = os.path.join(config.config_calibre_dir, "metadata.db")
library_path = config.config_calibre_split_dir
else:
library_path = config.config_calibre_dir
opf_command = [calibredb_binarypath, 'export', '--dont-write-opf', '--with-library', library_path,
'--to-dir', tmp_dir, '--formats', book_format, "--template", "{}".format(temp_file_name),
str(book_id)]
p = process_open(opf_command, quotes, my_env)
_, err = p.communicate()
if err:
log.error('Metadata embedder encountered an error: %s', err)
return tmp_dir, temp_file_name
except OSError as ex:
# ToDo real error handling
log.error_or_exception(ex)
return None, None
def get_calibre_binarypath(binary):
binariesdir = config.config_binariesdir
if binariesdir:
try:
return os.path.join(binariesdir, SUPPORTED_CALIBRE_BINARIES[binary])
except KeyError as ex:
log.error("Binary not supported by Calibre-Web: %s", SUPPORTED_CALIBRE_BINARIES[binary])
pass
return ""
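
The export helper shown above shells out to Calibre's calibredb binary to export a single book format into a temp directory. A rough standalone sketch of the same export call, assuming calibredb is on PATH and that the library path and book id are placeholders:

import os
import subprocess
import tempfile
from uuid import uuid4

def export_format(book_id, book_format, library_path, calibredb='calibredb'):
    # mirrors the calibredb invocation built in do_calibre_export() above
    tmp_dir = tempfile.mkdtemp(prefix='calibre_web_')
    template = str(uuid4())  # file name (without extension) calibredb will write
    cmd = [calibredb, 'export', '--dont-write-opf', '--with-library', library_path,
           '--to-dir', tmp_dir, '--formats', book_format, '--template', template,
           str(book_id)]
    subprocess.run(cmd, check=True, env=os.environ.copy())
    return os.path.join(tmp_dir, template + '.' + book_format.lower())

# e.g. export_format(42, 'epub', '/path/to/calibre/library')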


@ -21,47 +21,25 @@ import zipfile
from lxml import etree
from . import isoLanguages, cover
from . import config, logger
from .helper import split_authors
from .epub_helper import get_content_opf, default_ns
from .constants import BookMeta
log = logger.create()
def _extract_cover(zip_file, cover_file, cover_path, tmp_file_name):
if cover_file is None:
return None
cf = extension = None
zip_cover_path = os.path.join(cover_path, cover_file).replace('\\', '/')
prefix = os.path.splitext(tmp_file_name)[0]
tmp_cover_name = prefix + '.' + os.path.basename(zip_cover_path)
ext = os.path.splitext(tmp_cover_name)
if len(ext) > 1:
extension = ext[1].lower()
if extension in cover.COVER_EXTENSIONS:
cf = zip_file.read(zip_cover_path)
return cover.cover_processing(tmp_file_name, cf, extension)
def get_epub_layout(book, book_data):
file_path = os.path.normpath(os.path.join(config.get_book_path(),
book.path, book_data.name + "." + book_data.format.lower()))
try:
tree, __ = get_content_opf(file_path, default_ns)
p = tree.xpath('/pkg:package/pkg:metadata', namespaces=default_ns)[0]
layout = p.xpath('pkg:meta[@property="rendition:layout"]/text()', namespaces=default_ns)
except (etree.XMLSyntaxError, KeyError, IndexError, OSError) as e:
log.error("Could not parse epub metadata of book {} during kobo sync: {}".format(book.id, e))
layout = []
if len(layout) == 0:
return None
else:
return layout[0]
cf = extension = None
zip_cover_path = os.path.join(cover_path, cover_file).replace('\\', '/')
prefix = os.path.splitext(tmp_file_name)[0]
tmp_cover_name = prefix + '.' + os.path.basename(zip_cover_path)
ext = os.path.splitext(tmp_cover_name)
if len(ext) > 1:
extension = ext[1].lower()
if extension in cover.COVER_EXTENSIONS:
cf = zip_file.read(zip_cover_path)
return cover.cover_processing(tmp_file_name, cf, extension)
def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
@ -71,7 +49,13 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
'dc': 'http://purl.org/dc/elements/1.1/'
}
tree, cf_name = get_content_opf(tmp_file_path, ns)
epub_zip = zipfile.ZipFile(tmp_file_path)
txt = epub_zip.read('META-INF/container.xml')
tree = etree.fromstring(txt)
cf_name = tree.xpath('n:rootfiles/n:rootfile/@full-path', namespaces=ns)[0]
cf = epub_zip.read(cf_name)
tree = etree.fromstring(cf)
cover_path = os.path.dirname(cf_name)
@ -89,20 +73,20 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
elif s == 'date':
epub_metadata[s] = tmp[0][:10]
else:
epub_metadata[s] = tmp[0].strip()
epub_metadata[s] = tmp[0]
else:
epub_metadata[s] = 'Unknown'
if epub_metadata['subject'] == 'Unknown':
epub_metadata['subject'] = ''
if epub_metadata['publisher'] == 'Unknown':
if epub_metadata['publisher'] == u'Unknown':
epub_metadata['publisher'] = ''
if epub_metadata['date'] == 'Unknown':
if epub_metadata['date'] == u'Unknown':
epub_metadata['date'] = ''
if epub_metadata['description'] == 'Unknown':
if epub_metadata['description'] == u'Unknown':
description = tree.xpath("//*[local-name() = 'description']/text()")
if len(description) > 0:
epub_metadata['description'] = description
@ -114,19 +98,15 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
epub_metadata = parse_epub_series(ns, tree, epub_metadata)
epub_zip = zipfile.ZipFile(tmp_file_path)
cover_file = parse_epub_cover(ns, tree, epub_zip, cover_path, tmp_file_path)
identifiers = []
for node in p.xpath('dc:identifier', namespaces=ns):
try:
identifier_name = node.attrib.values()[-1]
except IndexError:
continue
identifier_value = node.text
if identifier_name in ('uuid', 'calibre') or identifier_value is None:
continue
identifiers.append([identifier_name, identifier_value])
identifier_name=node.attrib.values()[-1];
identifier_value=node.text;
if identifier_name in ('uuid','calibre'):
continue;
identifiers.append( [identifier_name, identifier_value] )
if not epub_metadata['title']:
title = original_file_name
@ -151,40 +131,40 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
def parse_epub_cover(ns, tree, epub_zip, cover_path, tmp_file_path):
cover_section = tree.xpath("/pkg:package/pkg:manifest/pkg:item[@id='cover-image']/@href", namespaces=ns)
cover_file = None
# if len(cover_section) > 0:
for cs in cover_section:
cover_file = _extract_cover(epub_zip, cs, cover_path, tmp_file_path)
if cover_file:
return cover_file
meta_cover = tree.xpath("/pkg:package/pkg:metadata/pkg:meta[@name='cover']/@content", namespaces=ns)
if len(meta_cover) > 0:
cover_section = tree.xpath(
"/pkg:package/pkg:manifest/pkg:item[@id='"+meta_cover[0]+"']/@href", namespaces=ns)
if not cover_section:
cover_section = tree.xpath(
"/pkg:package/pkg:manifest/pkg:item[@properties='" + meta_cover[0] + "']/@href", namespaces=ns)
else:
cover_section = tree.xpath("/pkg:package/pkg:guide/pkg:reference/@href", namespaces=ns)
cover_file = None
for cs in cover_section:
if cs.endswith('.xhtml') or cs.endswith('.html'):
markup = epub_zip.read(os.path.join(cover_path, cs))
markup_tree = etree.fromstring(markup)
# no matter xhtml or html with no namespace
img_src = markup_tree.xpath("//*[local-name() = 'img']/@src")
# Alternative image source
if not len(img_src):
img_src = markup_tree.xpath("//attribute::*[contains(local-name(), 'href')]")
if len(img_src):
# img_src maybe start with "../"" so fullpath join then relpath to cwd
filename = os.path.relpath(os.path.join(os.path.dirname(os.path.join(cover_path, cover_section[0])),
img_src[0]))
cover_file = _extract_cover(epub_zip, filename, "", tmp_file_path)
else:
cover_file = _extract_cover(epub_zip, cs, cover_path, tmp_file_path)
if cover_file:
break
if not cover_file:
meta_cover = tree.xpath("/pkg:package/pkg:metadata/pkg:meta[@name='cover']/@content", namespaces=ns)
if len(meta_cover) > 0:
cover_section = tree.xpath(
"/pkg:package/pkg:manifest/pkg:item[@id='"+meta_cover[0]+"']/@href", namespaces=ns)
if not cover_section:
cover_section = tree.xpath(
"/pkg:package/pkg:manifest/pkg:item[@properties='" + meta_cover[0] + "']/@href", namespaces=ns)
else:
cover_section = tree.xpath("/pkg:package/pkg:guide/pkg:reference/@href", namespaces=ns)
for cs in cover_section:
filetype = cs.rsplit('.', 1)[-1]
if filetype == "xhtml" or filetype == "html": # if cover is (x)html format
markup = epub_zip.read(os.path.join(cover_path, cs))
markup_tree = etree.fromstring(markup)
# no matter xhtml or html with no namespace
img_src = markup_tree.xpath("//*[local-name() = 'img']/@src")
# Alternative image source
if not len(img_src):
img_src = markup_tree.xpath("//attribute::*[contains(local-name(), 'href')]")
if len(img_src):
# img_src maybe start with "../"" so fullpath join then relpath to cwd
filename = os.path.relpath(os.path.join(os.path.dirname(os.path.join(cover_path, cover_section[0])),
img_src[0]))
cover_file = _extract_cover(epub_zip, filename, "", tmp_file_path)
else:
cover_file = _extract_cover(epub_zip, cs, cover_path, tmp_file_path)
if cover_file: break
return cover_file
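
For reference, the cover lookup above works purely on the OPF manifest/metadata. A self-contained illustration of the same xpaths against a minimal inline OPF (the document content is made up for the example):

from lxml import etree

ns = {'pkg': 'http://www.idpf.org/2007/opf'}
opf = b"""<package xmlns="http://www.idpf.org/2007/opf">
  <metadata><meta name="cover" content="cover-img"/></metadata>
  <manifest><item id="cover-img" href="images/cover.jpg"/></manifest>
</package>"""
tree = etree.fromstring(opf)
# 1. preferred: a manifest item explicitly flagged as cover-image
cover = tree.xpath("/pkg:package/pkg:manifest/pkg:item[@id='cover-image']/@href", namespaces=ns)
if not cover:
    # 2. fall back to the <meta name="cover"> pointer, as parse_epub_cover() does
    meta = tree.xpath("/pkg:package/pkg:metadata/pkg:meta[@name='cover']/@content", namespaces=ns)
    if meta:
        cover = tree.xpath("/pkg:package/pkg:manifest/pkg:item[@id='%s']/@href" % meta[0], namespaces=ns)
print(cover)  # ['images/cover.jpg']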


@ -1,166 +0,0 @@
# -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018 lemmsh, Kennyl, Kyosfonica, matthazinski
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import zipfile
from lxml import etree
from . import isoLanguages
default_ns = {
'n': 'urn:oasis:names:tc:opendocument:xmlns:container',
'pkg': 'http://www.idpf.org/2007/opf',
}
OPF_NAMESPACE = "http://www.idpf.org/2007/opf"
PURL_NAMESPACE = "http://purl.org/dc/elements/1.1/"
OPF = "{%s}" % OPF_NAMESPACE
PURL = "{%s}" % PURL_NAMESPACE
etree.register_namespace("opf", OPF_NAMESPACE)
etree.register_namespace("dc", PURL_NAMESPACE)
OPF_NS = {None: OPF_NAMESPACE} # the default namespace (no prefix)
NSMAP = {'dc': PURL_NAMESPACE, 'opf': OPF_NAMESPACE}
def updateEpub(src, dest, filename, data, ):
# create a temp copy of the archive without filename
with zipfile.ZipFile(src, 'r') as zin:
with zipfile.ZipFile(dest, 'w') as zout:
zout.comment = zin.comment # preserve the comment
for item in zin.infolist():
if item.filename != filename:
zout.writestr(item, zin.read(item.filename))
# now add filename with its new data
with zipfile.ZipFile(dest, mode='a', compression=zipfile.ZIP_DEFLATED) as zf:
zf.writestr(filename, data)
def get_content_opf(file_path, ns=default_ns):
epubZip = zipfile.ZipFile(file_path)
txt = epubZip.read('META-INF/container.xml')
tree = etree.fromstring(txt)
cf_name = tree.xpath('n:rootfiles/n:rootfile/@full-path', namespaces=ns)[0]
cf = epubZip.read(cf_name)
return etree.fromstring(cf), cf_name
def create_new_metadata_backup(book, custom_columns, export_language, translated_cover_name, lang_type=3):
# generate root package element
package = etree.Element(OPF + "package", nsmap=OPF_NS)
package.set("unique-identifier", "uuid_id")
package.set("version", "2.0")
# generate metadata element and all sub elements of it
metadata = etree.SubElement(package, "metadata", nsmap=NSMAP)
identifier = etree.SubElement(metadata, PURL + "identifier", id="calibre_id", nsmap=NSMAP)
identifier.set(OPF + "scheme", "calibre")
identifier.text = str(book.id)
identifier2 = etree.SubElement(metadata, PURL + "identifier", id="uuid_id", nsmap=NSMAP)
identifier2.set(OPF + "scheme", "uuid")
identifier2.text = book.uuid
for i in book.identifiers:
identifier = etree.SubElement(metadata, PURL + "identifier", nsmap=NSMAP)
identifier.set(OPF + "scheme", i.format_type())
identifier.text = str(i.val)
title = etree.SubElement(metadata, PURL + "title", nsmap=NSMAP)
title.text = book.title
for author in book.authors:
creator = etree.SubElement(metadata, PURL + "creator", nsmap=NSMAP)
creator.text = str(author.name)
creator.set(OPF + "file-as", book.author_sort) # ToDo Check
creator.set(OPF + "role", "aut")
contributor = etree.SubElement(metadata, PURL + "contributor", nsmap=NSMAP)
contributor.text = "calibre (5.7.2) [https://calibre-ebook.com]"
contributor.set(OPF + "file-as", "calibre") # ToDo Check
contributor.set(OPF + "role", "bkp")
date = etree.SubElement(metadata, PURL + "date", nsmap=NSMAP)
date.text = '{d.year:04}-{d.month:02}-{d.day:02}T{d.hour:02}:{d.minute:02}:{d.second:02}'.format(d=book.pubdate)
if book.comments and book.comments[0].text:
for b in book.comments:
description = etree.SubElement(metadata, PURL + "description", nsmap=NSMAP)
description.text = b.text
for b in book.publishers:
publisher = etree.SubElement(metadata, PURL + "publisher", nsmap=NSMAP)
publisher.text = str(b.name)
if not book.languages:
language = etree.SubElement(metadata, PURL + "language", nsmap=NSMAP)
language.text = export_language
else:
for b in book.languages:
language = etree.SubElement(metadata, PURL + "language", nsmap=NSMAP)
language.text = str(b.lang_code) if lang_type == 3 else isoLanguages.get(part3=b.lang_code).part1
for b in book.tags:
subject = etree.SubElement(metadata, PURL + "subject", nsmap=NSMAP)
subject.text = str(b.name)
etree.SubElement(metadata, "meta", name="calibre:author_link_map",
content="{" + ", ".join(['"' + str(a.name) + '": ""' for a in book.authors]) + "}",
nsmap=NSMAP)
for b in book.series:
etree.SubElement(metadata, "meta", name="calibre:series",
content=str(str(b.name)),
nsmap=NSMAP)
if book.series:
etree.SubElement(metadata, "meta", name="calibre:series_index",
content=str(book.series_index),
nsmap=NSMAP)
if len(book.ratings) and book.ratings[0].rating > 0:
etree.SubElement(metadata, "meta", name="calibre:rating",
content=str(book.ratings[0].rating),
nsmap=NSMAP)
etree.SubElement(metadata, "meta", name="calibre:timestamp",
content='{d.year:04}-{d.month:02}-{d.day:02}T{d.hour:02}:{d.minute:02}:{d.second:02}'.format(
d=book.timestamp),
nsmap=NSMAP)
etree.SubElement(metadata, "meta", name="calibre:title_sort",
content=book.sort,
nsmap=NSMAP)
sequence = 0
for cc in custom_columns:
value = None
extra = None
cc_entry = getattr(book, "custom_column_" + str(cc.id))
if cc_entry.__len__():
value = [c.value for c in cc_entry] if cc.is_multiple else cc_entry[0].value
extra = cc_entry[0].extra if hasattr(cc_entry[0], "extra") else None
etree.SubElement(metadata, "meta", name="calibre:user_metadata:#{}".format(cc.label),
content=cc.to_json(value, extra, sequence),
nsmap=NSMAP)
sequence += 1
# generate guide element and all sub elements of it
# Title is translated from default export language
guide = etree.SubElement(package, "guide")
etree.SubElement(guide, "reference", type="cover", title=translated_cover_name, href="cover.jpg")
return package
def replace_metadata(tree, package):
rep_element = tree.xpath('/pkg:package/pkg:metadata', namespaces=default_ns)[0]
new_element = package.xpath('//metadata', namespaces=default_ns)[0]
tree.replace(rep_element, new_element)
return etree.tostring(tree,
xml_declaration=True,
encoding='utf-8',
pretty_print=True).decode('utf-8')
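
Rough usage sketch for the OPF helpers above: read the content.opf out of an epub, then write a copy of the epub with the re-serialized OPF back into it. The epub file names and the import path are assumptions for illustration:

from lxml import etree
from epub_helper import get_content_opf, updateEpub  # import path is an assumption

tree, opf_name = get_content_opf('book.epub')            # parsed OPF plus its path inside the zip
titles = tree.xpath("//*[local-name() = 'title']/text()")
print(opf_name, titles)

new_opf = etree.tostring(tree, xml_declaration=True, encoding='utf-8', pretty_print=True)
updateEpub('book.epub', 'book_out.epub', opf_name, new_opf)  # copy of the epub with opf_name replaced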


@ -38,19 +38,19 @@ def get_fb2_info(tmp_file_path, original_file_extension):
if len(last_name):
last_name = last_name[0]
else:
last_name = ''
last_name = u''
middle_name = element.xpath('fb:middle-name/text()', namespaces=ns)
if len(middle_name):
middle_name = middle_name[0]
else:
middle_name = ''
middle_name = u''
first_name = element.xpath('fb:first-name/text()', namespaces=ns)
if len(first_name):
first_name = first_name[0]
else:
first_name = ''
return (first_name + ' '
+ middle_name + ' '
first_name = u''
return (first_name + u' '
+ middle_name + u' '
+ last_name)
author = str(", ".join(map(get_author, authors)))
@ -59,12 +59,12 @@ def get_fb2_info(tmp_file_path, original_file_extension):
if len(title):
title = str(title[0])
else:
title = ''
title = u''
description = tree.xpath('/fb:FictionBook/fb:description/fb:publish-info/fb:book-name/text()', namespaces=ns)
if len(description):
description = str(description[0])
else:
description = ''
description = u''
return BookMeta(
file_path=tmp_file_path,


@ -1,32 +0,0 @@
# -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2023 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from tempfile import gettempdir
import os
import shutil
def get_temp_dir():
tmp_dir = os.path.join(gettempdir(), 'calibre_web')
if not os.path.isdir(tmp_dir):
os.mkdir(tmp_dir)
return tmp_dir
def del_temp_dir():
tmp_dir = os.path.join(gettempdir(), 'calibre_web')
shutil.rmtree(tmp_dir)
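
Tiny usage sketch for the temp-dir helpers above (the import path is an assumption, the scratch file name is illustrative):

import os
from file_helper import get_temp_dir, del_temp_dir  # import path is an assumption

work_dir = get_temp_dir()                 # <system tempdir>/calibre_web, created on first use
with open(os.path.join(work_dir, 'upload.tmp'), 'wb') as fh:
    fh.write(b'scratch data')
del_temp_dir()                            # removes the whole calibre_web temp dir again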


@ -23,6 +23,7 @@
import os
import hashlib
import json
import tempfile
from uuid import uuid4
from time import time
from shutil import move, copyfile
@ -33,7 +34,6 @@ from flask_login import login_required
from . import logger, gdriveutils, config, ub, calibre_db, csrf
from .admin import admin_required
from .file_helper import get_temp_dir
gdrive = Blueprint('gdrive', __name__, url_prefix='/gdrive')
log = logger.create()
@ -55,7 +55,7 @@ def authenticate_google_drive():
try:
authUrl = gdriveutils.Gauth.Instance().auth.GetAuthUrl()
except gdriveutils.InvalidConfigError:
flash(_('Google Drive setup not completed, try to deactivate and activate Google Drive again'),
flash(_(u'Google Drive setup not completed, try to deactivate and activate Google Drive again'),
category="error")
return redirect(url_for('web.index'))
return redirect(authUrl)
@ -91,9 +91,9 @@ def watch_gdrive():
config.save()
except HttpError as e:
reason=json.loads(e.content)['error']['errors'][0]
if reason['reason'] == 'push.webhookUrlUnauthorized':
flash(_('Callback domain is not verified, '
'please follow steps to verify domain in google developer console'), category="error")
if reason['reason'] == u'push.webhookUrlUnauthorized':
flash(_(u'Callback domain is not verified, '
u'please follow steps to verify domain in google developer console'), category="error")
else:
flash(reason['message'], category="error")
@ -139,7 +139,9 @@ try:
dbpath = os.path.join(config.config_calibre_dir, "metadata.db").encode()
if not response['deleted'] and response['file']['title'] == 'metadata.db' \
and response['file']['md5Checksum'] != hashlib.md5(dbpath): # nosec
tmp_dir = get_temp_dir()
tmp_dir = os.path.join(tempfile.gettempdir(), 'calibre_web')
if not os.path.isdir(tmp_dir):
os.mkdir(tmp_dir)
log.info('Database file updated')
copyfile(dbpath, os.path.join(tmp_dir, "metadata.db_" + str(current_milli_time())))


@ -34,6 +34,7 @@ except ImportError:
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.exc import OperationalError, InvalidRequestError, IntegrityError
from sqlalchemy.orm.exc import StaleDataError
from sqlalchemy.sql.expression import text
try:
from httplib2 import __version__ as httplib2_version
@ -146,7 +147,7 @@ engine = create_engine('sqlite:///{0}'.format(cli_param.gd_path), echo=False)
Base = declarative_base()
# Open session for database connection
Session = sessionmaker(autoflush=False)
Session = sessionmaker()
Session.configure(bind=engine)
session = scoped_session(Session)
@ -173,12 +174,30 @@ class PermissionAdded(Base):
return str(self.gdrive_id)
def migrate():
if not engine.dialect.has_table(engine.connect(), "permissions_added"):
PermissionAdded.__table__.create(bind = engine)
for sql in session.execute(text("select sql from sqlite_master where type='table'")):
if 'CREATE TABLE gdrive_ids' in sql[0]:
currUniqueConstraint = 'UNIQUE (gdrive_id)'
if currUniqueConstraint in sql[0]:
sql=sql[0].replace(currUniqueConstraint, 'UNIQUE (gdrive_id, path)')
sql=sql.replace(GdriveId.__tablename__, GdriveId.__tablename__ + '2')
session.execute(sql)
session.execute("INSERT INTO gdrive_ids2 (id, gdrive_id, path) SELECT id, "
"gdrive_id, path FROM gdrive_ids;")
session.commit()
session.execute('DROP TABLE %s' % 'gdrive_ids')
session.execute('ALTER TABLE gdrive_ids2 RENAME to gdrive_ids')
break
if not os.path.exists(cli_param.gd_path):
try:
Base.metadata.create_all(engine)
except Exception as ex:
log.error("Error connect to database: {} - {}".format(cli_param.gd_path, ex))
raise
migrate()
def getDrive(drive=None, gauth=None):
@ -325,7 +344,7 @@ def getFileFromEbooksFolder(path, fileName):
def moveGdriveFileRemote(origin_file_id, new_title):
origin_file_id['title'] = new_title
origin_file_id['title']= new_title
origin_file_id.Upload()
@ -403,7 +422,7 @@ def copyToDrive(drive, uploadFile, createRoot, replaceFiles,
driveFile.Upload()
def uploadFileToEbooksFolder(destFile, f, string=False):
def uploadFileToEbooksFolder(destFile, f):
drive = getDrive(Gdrive.Instance().drive)
parent = getEbooksFolder(drive)
splitDir = destFile.split('/')
@ -416,10 +435,7 @@ def uploadFileToEbooksFolder(destFile, f, string=False):
else:
driveFile = drive.CreateFile({'title': x,
'parents': [{"kind": "drive#fileLink", 'id': parent['id']}], })
if not string:
driveFile.SetContentFile(f)
else:
driveFile.SetContentString(f)
driveFile.SetContentFile(f)
driveFile.Upload()
else:
existing_Folder = drive.ListFile({'q': "title = '%s' and '%s' in parents and trashed = false" %
@ -540,7 +556,7 @@ def updateGdriveCalibreFromLocal():
# update gdrive.db on edit of books title
def updateDatabaseOnEdit(ID,newPath):
sqlCheckPath = newPath if newPath[-1] == '/' else newPath + '/'
sqlCheckPath = newPath if newPath[-1] == '/' else newPath + u'/'
storedPathName = session.query(GdriveId).filter(GdriveId.gdrive_id == ID).first()
if storedPathName:
storedPathName.path = sqlCheckPath
@ -562,7 +578,6 @@ def deleteDatabaseEntry(ID):
# Gets cover file from gdrive
# ToDo: Check is this right everyone get read permissions on cover files?
def get_cover_via_gdrive(cover_path):
df = getFileFromEbooksFolder(cover_path, 'cover.jpg')
if df:
@ -585,29 +600,6 @@ def get_cover_via_gdrive(cover_path):
else:
return None
# Gets cover file from gdrive
def get_metadata_backup_via_gdrive(metadata_path):
df = getFileFromEbooksFolder(metadata_path, 'metadata.opf')
if df:
if not session.query(PermissionAdded).filter(PermissionAdded.gdrive_id == df['id']).first():
df.GetPermissions()
df.InsertPermission({
'type': 'anyone',
'value': 'anyone',
'role': 'writer', # ToDo needs write access
'withLink': True})
permissionAdded = PermissionAdded()
permissionAdded.gdrive_id = df['id']
session.add(permissionAdded)
try:
session.commit()
except OperationalError as ex:
log.error_or_exception('Database error: {}'.format(ex))
session.rollback()
return df.metadata.get('webContentLink')
else:
return None
# Creates chunks for downloading big files
def partial(total_byte_len, part_size_limit):
s = []

cps/helper.py Normal file → Executable file

@ -18,22 +18,20 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import random
import io
import sys
import mimetypes
import re
import regex
import shutil
import socket
from datetime import datetime, timedelta
from tempfile import gettempdir
import requests
import unidecode
from uuid import uuid4
from flask import send_from_directory, make_response, redirect, abort, url_for
from flask_babel import gettext as _
from flask_babel import lazy_gettext as N_
from flask_babel import get_locale
from flask_login import current_user
from sqlalchemy.sql.expression import true, false, and_, or_, text, func
from sqlalchemy.exc import InvalidRequestError, OperationalError
@ -55,16 +53,11 @@ from . import calibre_db, cli_param
from .tasks.convert import TaskConvert
from . import logger, config, db, ub, fs
from . import gdriveutils as gd
from .constants import (STATIC_DIR as _STATIC_DIR, CACHE_TYPE_THUMBNAILS, THUMBNAIL_TYPE_COVER, THUMBNAIL_TYPE_SERIES,
SUPPORTED_CALIBRE_BINARIES)
from .constants import STATIC_DIR as _STATIC_DIR, CACHE_TYPE_THUMBNAILS, THUMBNAIL_TYPE_COVER, THUMBNAIL_TYPE_SERIES
from .subproc_wrapper import process_wait
from .services.worker import WorkerThread
from .tasks.mail import TaskEmail
from .tasks.thumbnail import TaskClearCoverThumbnailCache, TaskGenerateCoverThumbnails
from .tasks.metadata_backup import TaskBackupMetadata
from .file_helper import get_temp_dir
from .epub_helper import get_content_opf, create_new_metadata_backup, updateEpub, replace_metadata
from .embed_helper import do_calibre_export
log = logger.create()
@ -83,29 +76,29 @@ def convert_book_format(book_id, calibre_path, old_book_format, new_book_format,
book = calibre_db.get_book(book_id)
data = calibre_db.get_book_format(book.id, old_book_format)
if not data:
error_message = _("%(format)s format not found for book id: %(book)d", format=old_book_format, book=book_id)
error_message = _(u"%(format)s format not found for book id: %(book)d", format=old_book_format, book=book_id)
log.error("convert_book_format: %s", error_message)
return error_message
file_path = os.path.join(calibre_path, book.path, data.name)
if config.config_use_google_drive:
if not gd.getFileFromEbooksFolder(book.path, data.name + "." + old_book_format.lower()):
error_message = _("%(format)s not found on Google Drive: %(fn)s",
error_message = _(u"%(format)s not found on Google Drive: %(fn)s",
format=old_book_format, fn=data.name + "." + old_book_format.lower())
return error_message
else:
if not os.path.exists(file_path + "." + old_book_format.lower()):
error_message = _("%(format)s not found: %(fn)s",
error_message = _(u"%(format)s not found: %(fn)s",
format=old_book_format, fn=data.name + "." + old_book_format.lower())
return error_message
# read settings and append converter task to queue
if ereader_mail:
settings = config.get_mail_settings()
settings['subject'] = _('Send to eReader') # pretranslate Subject for Email
settings['body'] = _('This Email has been sent via Calibre-Web.')
settings['subject'] = _('Send to E-Reader') # pretranslate Subject for e-mail
settings['body'] = _(u'This e-mail has been sent via Calibre-Web.')
else:
settings = dict()
link = '<a href="{}">{}</a>'.format(url_for('web.show_book', book_id=book.id), escape(book.title)) # prevent xss
txt = "{} -> {}: {}".format(
txt = u"{} -> {}: {}".format(
old_book_format.upper(),
new_book_format.upper(),
link)
@ -117,30 +110,30 @@ def convert_book_format(book_id, calibre_path, old_book_format, new_book_format,
# Texts are not lazy translated as they are supposed to get send out as is
def send_test_mail(ereader_mail, user_name):
WorkerThread.add(user_name, TaskEmail(_('Calibre-Web Test Email'), None, None,
config.get_mail_settings(), ereader_mail, N_("Test Email"),
_('This Email has been sent via Calibre-Web.')))
WorkerThread.add(user_name, TaskEmail(_(u'Calibre-Web test e-mail'), None, None,
config.get_mail_settings(), ereader_mail, N_(u"Test e-mail"),
_(u'This e-mail has been sent via Calibre-Web.')))
return
# Send registration email or password reset email, depending on parameter resend (False means welcome email)
def send_registration_mail(e_mail, user_name, default_password, resend=False):
txt = "Hi %s!\r\n" % user_name
txt = "Hello %s!\r\n" % user_name
if not resend:
txt += "Your account at Calibre-Web has been created.\r\n"
txt += "Please log in using the following information:\r\n"
txt += "Username: %s\r\n" % user_name
txt += "Your new account at Calibre-Web has been created. Thanks for joining us!\r\n"
txt += "Please log in to your account using the following information:\r\n"
txt += "User name: %s\r\n" % user_name
txt += "Password: %s\r\n" % default_password
txt += "Don't forget to change your password after your first login.\r\n"
txt += "Regards,\r\n\r\n"
txt += "Calibre-Web"
txt += "Don't forget to change your password after first login.\r\n"
txt += "Sincerely\r\n\r\n"
txt += "Your Calibre-Web team"
WorkerThread.add(None, TaskEmail(
subject=_('Get Started with Calibre-Web'),
subject=_(u'Get Started with Calibre-Web'),
filepath=None,
attachment=None,
settings=config.get_mail_settings(),
recipient=e_mail,
task_message=N_("Registration Email for user: %(name)s", name=user_name),
task_message=N_(u"Registration e-mail for user: %(name)s", name=user_name),
text=txt
))
return
@ -151,13 +144,13 @@ def check_send_to_ereader_with_converter(formats):
if 'MOBI' in formats and 'EPUB' not in formats:
book_formats.append({'format': 'Epub',
'convert': 1,
'text': _('Convert %(orig)s to %(format)s and send to eReader',
'text': _('Convert %(orig)s to %(format)s and send to E-Reader',
orig='Mobi',
format='Epub')})
if 'AZW3' in formats and 'EPUB' not in formats:
book_formats.append({'format': 'Epub',
'convert': 2,
'text': _('Convert %(orig)s to %(format)s and send to eReader',
'text': _('Convert %(orig)s to %(format)s and send to E-Reader',
orig='Azw3',
format='Epub')})
return book_formats
@ -165,7 +158,7 @@ def check_send_to_ereader_with_converter(formats):
def check_send_to_ereader(entry):
"""
returns all available book formats for sending to eReader
returns all available book formats for sending to E-Reader
"""
formats = list()
book_formats = list()
@ -176,27 +169,31 @@ def check_send_to_ereader(entry):
if 'EPUB' in formats:
book_formats.append({'format': 'Epub',
'convert': 0,
'text': _('Send %(format)s to eReader', format='Epub')})
'text': _('Send %(format)s to E-Reader', format='Epub')})
if 'MOBI' in formats:
book_formats.append({'format': 'Mobi',
'convert': 0,
'text': _('Send %(format)s to E-Reader', format='Mobi')})
if 'PDF' in formats:
book_formats.append({'format': 'Pdf',
'convert': 0,
'text': _('Send %(format)s to eReader', format='Pdf')})
'text': _('Send %(format)s to E-Reader', format='Pdf')})
if 'AZW' in formats:
book_formats.append({'format': 'Azw',
'convert': 0,
'text': _('Send %(format)s to eReader', format='Azw')})
'text': _('Send %(format)s to E-Reader', format='Azw')})
if config.config_converterpath:
book_formats.extend(check_send_to_ereader_with_converter(formats))
return book_formats
else:
log.error('Cannot find book entry %d', entry.id)
log.error(u'Cannot find book entry %d', entry.id)
return None
# Check if a reader is existing for any of the book formats, if not, return empty list, otherwise return
# list with supported formats
def check_read_formats(entry):
extensions_reader = {'TXT', 'PDF', 'EPUB', 'CBZ', 'CBT', 'CBR', 'DJVU', 'DJV'}
extensions_reader = {'TXT', 'PDF', 'EPUB', 'CBZ', 'CBT', 'CBR', 'DJVU'}
book_formats = list()
if len(entry.data):
for ele in iter(entry.data):
@ -206,30 +203,30 @@ def check_read_formats(entry):
# Files are processed in the following order/priority:
# 1: If epub file is existing, it's directly send to eReader email,
# 2: If mobi file is existing, it's converted and send to eReader email,
# 3: If Pdf file is existing, it's directly send to eReader email
# 1: If Mobi file is existing, it's directly send to E-Reader email,
# 2: If Epub file is existing, it's converted and send to E-Reader email,
# 3: If Pdf file is existing, it's directly send to E-Reader email
def send_mail(book_id, book_format, convert, ereader_mail, calibrepath, user_id):
"""Send email with attachments"""
book = calibre_db.get_book(book_id)
if convert == 1:
# returns None if success, otherwise errormessage
return convert_book_format(book_id, calibrepath, 'mobi', book_format.lower(), user_id, ereader_mail)
return convert_book_format(book_id, calibrepath, u'epub', book_format.lower(), user_id, ereader_mail)
if convert == 2:
# returns None if success, otherwise errormessage
return convert_book_format(book_id, calibrepath, 'azw3', book_format.lower(), user_id, ereader_mail)
return convert_book_format(book_id, calibrepath, u'azw3', book_format.lower(), user_id, ereader_mail)
for entry in iter(book.data):
if entry.format.upper() == book_format.upper():
converted_file_name = entry.name + '.' + book_format.lower()
link = '<a href="{}">{}</a>'.format(url_for('web.show_book', book_id=book_id), escape(book.title))
email_text = N_("%(book)s send to eReader", book=link)
WorkerThread.add(user_id, TaskEmail(_("Send to eReader"), book.path, converted_file_name,
email_text = N_(u"%(book)s send to E-Reader", book=link)
WorkerThread.add(user_id, TaskEmail(_(u"Send to E-Reader"), book.path, converted_file_name,
config.get_mail_settings(), ereader_mail,
email_text, _('This Email has been sent via Calibre-Web.'),book.id))
email_text, _(u'This e-mail has been sent via Calibre-Web.')))
return
return _("The requested file could not be read. Maybe wrong permissions?")
return _(u"The requested file could not be read. Maybe wrong permissions?")
def get_valid_filename(value, replace_whitespace=True, chars=128):
@ -237,16 +234,16 @@ def get_valid_filename(value, replace_whitespace=True, chars=128):
Returns the given string converted to a string that can be used for a clean
filename. Limits num characters to 128 max.
"""
if value[-1:] == '.':
value = value[:-1]+'_'
if value[-1:] == u'.':
value = value[:-1]+u'_'
value = value.replace("/", "_").replace(":", "_").strip('\0')
if config.config_unicode_filename:
value = (unidecode.unidecode(value))
if replace_whitespace:
# *+:\"/<>? are replaced by _
value = re.sub(r'[*+:\\\"/<>?]+', '_', value, flags=re.U)
value = re.sub(r'[*+:\\\"/<>?]+', u'_', value, flags=re.U)
# pipe has to be replaced with comma
value = re.sub(r'[|]+', ',', value, flags=re.U)
value = re.sub(r'[|]+', u',', value, flags=re.U)
value = value.encode('utf-8')[:chars].decode('utf-8', errors='ignore').strip()
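
The sanitization in get_valid_filename() boils down to: drop a trailing dot, replace the characters *+:\"/<>? with underscores, turn "|" into a comma, optionally transliterate to ASCII, and clamp the result to a UTF-8 byte budget. A standalone sketch of that core logic (without the unidecode/config handling), with an illustrative input:

import re

def sanitize(value, chars=128):
    if value.endswith('.'):
        value = value[:-1] + '_'
    value = value.replace('/', '_').replace(':', '_').strip('\0')
    value = re.sub(r'[*+:\\\"/<>?]+', '_', value, flags=re.U)   # forbidden characters -> _
    value = re.sub(r'[|]+', ',', value, flags=re.U)             # pipe -> comma
    return value.encode('utf-8')[:chars].decode('utf-8', errors='ignore').strip()

print(sanitize('Title: A|B?'))   # Title_ A,B_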
@ -343,7 +340,7 @@ def edit_book_read_status(book_id, read_status=None):
return "Custom Column No.{} does not exist in calibre database".format(config.config_read_column)
except (OperationalError, InvalidRequestError) as ex:
calibre_db.session.rollback()
log.error("Read status could not set: {}".format(ex))
log.error(u"Read status could not set: {}".format(ex))
return _("Read status could not set: {}".format(ex.orig))
return ""
@ -418,8 +415,8 @@ def clean_author_database(renamed_author, calibre_path="", local_book=None, gdri
g_file = gd.getFileFromEbooksFolder(all_new_path,
file_format.name + '.' + file_format.format.lower())
if g_file:
gd.moveGdriveFileRemote(g_file, all_new_name + '.' + file_format.format.lower())
gd.updateDatabaseOnEdit(g_file['id'], all_new_name + '.' + file_format.format.lower())
gd.moveGdriveFileRemote(g_file, all_new_name + u'.' + file_format.format.lower())
gd.updateDatabaseOnEdit(g_file['id'], all_new_name + u'.' + file_format.format.lower())
else:
log.error("File {} not found on gdrive"
.format(all_new_path, file_format.name + '.' + file_format.format.lower()))
@ -512,25 +509,25 @@ def update_dir_structure_gdrive(book_id, first_author, renamed_author):
authordir = book.path.split('/')[0]
titledir = book.path.split('/')[1]
new_authordir = rename_all_authors(first_author, renamed_author, gdrive=True)
new_titledir = get_valid_filename(book.title, chars=96) + " (" + str(book_id) + ")"
new_titledir = get_valid_filename(book.title, chars=96) + u" (" + str(book_id) + u")"
if titledir != new_titledir:
g_file = gd.getFileFromEbooksFolder(os.path.dirname(book.path), titledir)
if g_file:
gd.moveGdriveFileRemote(g_file, new_titledir)
book.path = book.path.split('/')[0] + '/' + new_titledir
book.path = book.path.split('/')[0] + u'/' + new_titledir
gd.updateDatabaseOnEdit(g_file['id'], book.path) # only child folder affected
else:
return _('File %(file)s not found on Google Drive', file=book.path) # file not found
return _(u'File %(file)s not found on Google Drive', file=book.path) # file not found
if authordir != new_authordir and authordir not in renamed_author:
g_file = gd.getFileFromEbooksFolder(os.path.dirname(book.path), new_titledir)
if g_file:
gd.moveGdriveFolderRemote(g_file, new_authordir)
book.path = new_authordir + '/' + book.path.split('/')[1]
book.path = new_authordir + u'/' + book.path.split('/')[1]
gd.updateDatabaseOnEdit(g_file['id'], book.path)
else:
return _('File %(file)s not found on Google Drive', file=authordir) # file not found
return _(u'File %(file)s not found on Google Drive', file=authordir) # file not found
# change location in database to new author/title path
book.path = os.path.join(new_authordir, new_titledir).replace('\\', '/')
@ -602,7 +599,7 @@ def delete_book_gdrive(book, book_format):
gd.deleteDatabaseEntry(g_file['id'])
g_file.Trash()
else:
error = _('Book path %(path)s not found on Google Drive', path=book.path) # file not found
error = _(u'Book path %(path)s not found on Google Drive', path=book.path) # file not found
return error is None, error
@ -614,7 +611,7 @@ def reset_password(user_id):
if not config.get_mail_server_configured():
return 2, None
try:
password = generate_random_password(config.config_password_min_length)
password = generate_random_password()
existing_user.password = generate_password_hash(password)
ub.session.commit()
send_registration_mail(existing_user.email, existing_user.name, password, True)
@ -623,35 +620,11 @@ def reset_password(user_id):
ub.session.rollback()
return 0, None
def generate_random_password(min_length):
min_length = max(8, min_length) - 4
random_source = "abcdefghijklmnopqrstuvwxyz01234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$%&*()?"
# select 1 lowercase
s = "abcdefghijklmnopqrstuvwxyz"
password = [s[c % len(s)] for c in os.urandom(1)]
# select 1 uppercase
s = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
password.extend([s[c % len(s)] for c in os.urandom(1)])
# select 1 digit
s = "01234567890"
password.extend([s[c % len(s)] for c in os.urandom(1)])
# select 1 special symbol
s = "!@#$%&*()?"
password.extend([s[c % len(s)] for c in os.urandom(1)])
# generate other characters
password.extend([random_source[c % len(random_source)] for c in os.urandom(min_length)])
# password_list = list(password)
# shuffle all characters
random.SystemRandom().shuffle(password)
return ''.join(password)
'''def generate_random_password(min_length):
def generate_random_password():
s = "abcdefghijklmnopqrstuvwxyz01234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$%&*()?"
passlen = min_length
return "".join(s[c % len(s)] for c in os.urandom(passlen))'''
passlen = 8
return "".join(s[c % len(s)] for c in os.urandom(passlen))
def uniq(inpt):
@ -666,49 +639,28 @@ def uniq(inpt):
def check_email(email):
email = valid_email(email)
if ub.session.query(ub.User).filter(func.lower(ub.User.email) == email.lower()).first():
log.error("Found an existing account for this Email address")
raise Exception(_("Found an existing account for this Email address"))
log.error(u"Found an existing account for this e-mail address")
raise Exception(_(u"Found an existing account for this e-mail address"))
return email
def check_username(username):
username = username.strip()
if ub.session.query(ub.User).filter(func.lower(ub.User.name) == username.lower()).scalar():
log.error("This username is already taken")
raise Exception(_("This username is already taken"))
log.error(u"This username is already taken")
raise Exception(_(u"This username is already taken"))
return username
def valid_email(email):
email = email.strip()
# if email is not deleted
if email:
# Regex according to https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input/email#validation
if not re.search(r"^[\w.!#$%&'*+\\/=?^_`{|}~-]+@[\w](?:[\w-]{0,61}[\w])?(?:\.[\w](?:[\w-]{0,61}[\w])?)*$",
email):
log.error("Invalid Email address format")
raise Exception(_("Invalid Email address format"))
# Regex according to https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input/email#validation
if not re.search(r"^[\w.!#$%&'*+\\/=?^_`{|}~-]+@[\w](?:[\w-]{0,61}[\w])?(?:\.[\w](?:[\w-]{0,61}[\w])?)*$",
email):
log.error(u"Invalid e-mail address format")
raise Exception(_(u"Invalid e-mail address format"))
return email
def valid_password(check_password):
if config.config_password_policy:
verify = ""
if config.config_password_min_length > 0:
verify += r"^(?=.{" + str(config.config_password_min_length) + ",}$)"
if config.config_password_number:
verify += r"(?=.*?\d)"
if config.config_password_lower:
verify += r"(?=.*?[\p{Ll}])"
if config.config_password_upper:
verify += r"(?=.*?[\p{Lu}])"
if config.config_password_character:
verify += r"(?=.*?[\p{Letter}])"
if config.config_password_special:
verify += r"(?=.*?[^\p{Letter}\s0-9])"
match = regex.match(verify, check_password)
if not match:
raise Exception(_("Password doesn't comply with password validation rules"))
return check_password
# ################################# External interface #################################
@ -731,35 +683,35 @@ def update_dir_structure(book_id,
def delete_book(book, calibrepath, book_format):
if not book_format:
clear_cover_thumbnail_cache(book.id) ## here it breaks
calibre_db.delete_dirty_metadata(book.id)
clear_cover_thumbnail_cache(book.id) ## here it breaks
if config.config_use_google_drive:
return delete_book_gdrive(book, book_format)
else:
return delete_book_file(book, calibrepath, book_format)
def get_cover_on_failure():
try:
return send_from_directory(_STATIC_DIR, "generic_cover.jpg")
except PermissionError:
log.error("No permission to access generic_cover.jpg file.")
abort(403)
def get_cover_on_failure(use_generic_cover):
if use_generic_cover:
try:
return send_from_directory(_STATIC_DIR, "generic_cover.jpg")
except PermissionError:
log.error("No permission to access generic_cover.jpg file.")
abort(403)
abort(404)
def get_book_cover(book_id, resolution=None):
book = calibre_db.get_filtered_book(book_id, allow_show_archived=True)
return get_book_cover_internal(book, resolution=resolution)
return get_book_cover_internal(book, use_generic_cover_on_failure=True, resolution=resolution)
# Called only by kobo sync -> cover not found should be answered with 404 and not with default cover
def get_book_cover_with_uuid(book_uuid, resolution=None):
book = calibre_db.get_book_by_uuid(book_uuid)
if not book:
return # allows kobo.HandleCoverImageRequest to proxy request
return get_book_cover_internal(book, resolution=resolution)
return get_book_cover_internal(book, use_generic_cover_on_failure=False, resolution=resolution)
def get_book_cover_internal(book, resolution=None):
def get_book_cover_internal(book, use_generic_cover_on_failure, resolution=None):
if book and book.has_cover:
# Send the book cover thumbnail if it exists in cache
@ -775,26 +727,26 @@ def get_book_cover_internal(book, resolution=None):
if config.config_use_google_drive:
try:
if not gd.is_gdrive_ready():
return get_cover_on_failure()
return get_cover_on_failure(use_generic_cover_on_failure)
path = gd.get_cover_via_gdrive(book.path)
if path:
return redirect(path)
else:
log.error('{}/cover.jpg not found on Google Drive'.format(book.path))
return get_cover_on_failure()
return get_cover_on_failure(use_generic_cover_on_failure)
except Exception as ex:
log.error_or_exception(ex)
return get_cover_on_failure()
return get_cover_on_failure(use_generic_cover_on_failure)
# Send the book cover from the Calibre directory
else:
cover_file_path = os.path.join(config.get_book_path(), book.path)
cover_file_path = os.path.join(config.config_calibre_dir, book.path)
if os.path.isfile(os.path.join(cover_file_path, "cover.jpg")):
return send_from_directory(cover_file_path, "cover.jpg")
else:
return get_cover_on_failure()
return get_cover_on_failure(use_generic_cover_on_failure)
else:
return get_cover_on_failure()
return get_cover_on_failure(use_generic_cover_on_failure)
def get_book_cover_thumbnail(book, resolution):
@ -817,7 +769,7 @@ def get_series_thumbnail_on_failure(series_id, resolution):
.filter(db.Books.has_cover == 1) \
.first()
return get_book_cover_internal(book, resolution=resolution)
return get_book_cover_internal(book, use_generic_cover_on_failure=True, resolution=resolution)
def get_series_cover_thumbnail(series_id, resolution=None):
@ -881,10 +833,10 @@ def save_cover_from_filestorage(filepath, saved_filename, img):
try:
os.makedirs(filepath)
except OSError:
log.error("Failed to create path for cover")
return False, _("Failed to create path for cover")
log.error(u"Failed to create path for cover")
return False, _(u"Failed to create path for cover")
try:
# upload of jpg file without wand
# upload of jgp file without wand
if isinstance(img, requests.Response):
with open(os.path.join(filepath, saved_filename), 'wb') as f:
f.write(img.content)
@ -897,8 +849,8 @@ def save_cover_from_filestorage(filepath, saved_filename, img):
# upload of jpg/png... from hdd
img.save(os.path.join(filepath, saved_filename))
except (IOError, OSError):
log.error("Cover-file is not a valid image file, or could not be stored")
return False, _("Cover-file is not a valid image file, or could not be stored")
log.error(u"Cover-file is not a valid image file, or could not be stored")
return False, _(u"Cover-file is not a valid image file, or could not be stored")
return True, None
@ -928,7 +880,10 @@ def save_cover(img, book_path):
return False, _("Only jpg/jpeg files are supported as coverfile")
if config.config_use_google_drive:
tmp_dir = get_temp_dir()
tmp_dir = os.path.join(gettempdir(), 'calibre_web')
if not os.path.isdir(tmp_dir):
os.mkdir(tmp_dir)
ret, message = save_cover_from_filestorage(tmp_dir, "uploaded_cover.jpg", img)
if ret is True:
gd.uploadFileToEbooksFolder(os.path.join(book_path, 'cover.jpg').replace("\\", "/"),
@ -938,72 +893,33 @@ def save_cover(img, book_path):
else:
return False, message
else:
return save_cover_from_filestorage(os.path.join(config.get_book_path(), book_path), "cover.jpg", img)
return save_cover_from_filestorage(os.path.join(config.config_calibre_dir, book_path), "cover.jpg", img)
def do_download_file(book, book_format, client, data, headers):
book_name = data.name
if config.config_use_google_drive:
# startTime = time.time()
df = gd.getFileFromEbooksFolder(book.path, book_name + "." + book_format)
df = gd.getFileFromEbooksFolder(book.path, data.name + "." + book_format)
# log.debug('%s', time.time() - startTime)
if df:
if config.config_embed_metadata and (
(book_format == "kepub" and config.config_kepubifypath ) or
(book_format != "kepub" and config.config_binariesdir)):
output_path = os.path.join(config.config_calibre_dir, book.path)
if not os.path.exists(output_path):
os.makedirs(output_path)
output = os.path.join(config.config_calibre_dir, book.path, book_name + "." + book_format)
gd.downloadFile(book.path, book_name + "." + book_format, output)
if book_format == "kepub" and config.config_kepubifypath:
filename, download_name = do_kepubify_metadata_replace(book, output)
elif book_format != "kepub" and config.config_binariesdir:
filename, download_name = do_calibre_export(book.id, book_format)
else:
return gd.do_gdrive_download(df, headers)
return gd.do_gdrive_download(df, headers)
else:
abort(404)
else:
filename = os.path.join(config.get_book_path(), book.path)
if not os.path.isfile(os.path.join(filename, book_name + "." + book_format)):
filename = os.path.join(config.config_calibre_dir, book.path)
if not os.path.isfile(os.path.join(filename, data.name + "." + book_format)):
# ToDo: improve error handling
log.error('File not found: %s', os.path.join(filename, book_name + "." + book_format))
log.error('File not found: %s', os.path.join(filename, data.name + "." + book_format))
if client == "kobo" and book_format == "kepub":
headers["Content-Disposition"] = headers["Content-Disposition"].replace(".kepub", ".kepub.epub")
if book_format == "kepub" and config.config_kepubifypath and config.config_embed_metadata:
filename, download_name = do_kepubify_metadata_replace(book, os.path.join(filename,
book_name + "." + book_format))
elif book_format != "kepub" and config.config_binariesdir and config.config_embed_metadata:
filename, download_name = do_calibre_export(book.id, book_format)
else:
download_name = book_name
response = make_response(send_from_directory(filename, download_name + "." + book_format))
# ToDo Check headers parameter
for element in headers:
response.headers[element[0]] = element[1]
log.info('Downloading file: {}'.format(os.path.join(filename, book_name + "." + book_format)))
return response
def do_kepubify_metadata_replace(book, file_path):
custom_columns = (calibre_db.session.query(db.CustomColumns)
.filter(db.CustomColumns.mark_for_delete == 0)
.filter(db.CustomColumns.datatype.notin_(db.cc_exceptions))
.order_by(db.CustomColumns.label).all())
tree, cf_name = get_content_opf(file_path)
package = create_new_metadata_backup(book, custom_columns, current_user.locale, _("Cover"), lang_type=2)
content = replace_metadata(tree, package)
tmp_dir = get_temp_dir()
temp_file_name = str(uuid4())
# open zipfile and replace metadata block in content.opf
updateEpub(file_path, os.path.join(tmp_dir, temp_file_name + ".kepub"), cf_name, content)
return tmp_dir, temp_file_name
response = make_response(send_from_directory(filename, data.name + "." + book_format))
# ToDo Check headers parameter
for element in headers:
response.headers[element[0]] = element[1]
log.info('Downloading file: {}'.format(os.path.join(filename, data.name + "." + book_format)))
return response
##################################
@ -1013,61 +929,20 @@ def check_unrar(unrar_location):
return
if not os.path.exists(unrar_location):
return _('UnRar binary file not found')
return _('Unrar binary file not found')
try:
unrar_location = [unrar_location]
value = process_wait(unrar_location, pattern='UNRAR (.*) freeware')
if value:
version = value.group(1)
log.debug("UnRar version %s", version)
log.debug("unrar version %s", version)
except (OSError, UnicodeDecodeError) as err:
log.error_or_exception(err)
return _('Error executing UnRar')
def check_calibre(calibre_location):
if not calibre_location:
return
if not os.path.exists(calibre_location):
return _('Could not find the specified directory')
if not os.path.isdir(calibre_location):
return _('Please specify a directory, not a file')
try:
supported_binary_paths = [os.path.join(calibre_location, binary)
for binary in SUPPORTED_CALIBRE_BINARIES.values()]
binaries_available = [os.path.isfile(binary_path) for binary_path in supported_binary_paths]
binaries_executable = [os.access(binary_path, os.X_OK) for binary_path in supported_binary_paths]
if all(binaries_available) and all(binaries_executable):
values = [process_wait([binary_path, "--version"], pattern=r'\(calibre (.*)\)')
for binary_path in supported_binary_paths]
if all(values):
version = values[0].group(1)
log.debug("calibre version %s", version)
else:
return _('Calibre binaries not viable')
else:
ret_val = []
missing_binaries=[path for path, available in
zip(SUPPORTED_CALIBRE_BINARIES.values(), binaries_available) if not available]
missing_perms=[path for path, available in
zip(SUPPORTED_CALIBRE_BINARIES.values(), binaries_executable) if not available]
if missing_binaries:
ret_val.append(_('Missing calibre binaries: %(missing)s', missing=", ".join(missing_binaries)))
if missing_perms:
ret_val.append(_('Missing executable permissions: %(missing)s', missing=", ".join(missing_perms)))
return ", ".join(ret_val)
except (OSError, UnicodeDecodeError) as err:
log.error_or_exception(err)
return _('Error executing Calibre')
def json_serial(obj):
"""JSON serializer for objects not serializable by default json code"""
@ -1092,38 +967,43 @@ def tags_filters():
# checks if domain is in database (including wildcards)
# example SELECT * FROM @TABLE WHERE 'abcdefg' LIKE Name;
# example SELECT * FROM @TABLE WHERE 'abcdefg' LIKE Name;
# from https://code.luasoftware.com/tutorials/flask/execute-raw-sql-in-flask-sqlalchemy/
# in all calls the email address is checked for validity
def check_valid_domain(domain_text):
sql = "SELECT * FROM registration WHERE (:domain LIKE domain and allow = 1);"
if not len(ub.session.query(ub.Registration).from_statement(text(sql)).params(domain=domain_text).all()):
result = ub.session.query(ub.Registration).from_statement(text(sql)).params(domain=domain_text).all()
if not len(result):
return False
sql = "SELECT * FROM registration WHERE (:domain LIKE domain and allow = 0);"
return not len(ub.session.query(ub.Registration).from_statement(text(sql)).params(domain=domain_text).all())
result = ub.session.query(ub.Registration).from_statement(text(sql)).params(domain=domain_text).all()
return not len(result)
def get_download_link(book_id, book_format, client):
book_format = book_format.split(".")[0]
book = calibre_db.get_filtered_book(book_id, allow_show_archived=True)
data1= ""
if book:
data1 = calibre_db.get_book_format(book.id, book_format.upper())
if data1:
# collect downloaded books only for registered user and not for anonymous user
if current_user.is_authenticated:
ub.update_download(book_id, int(current_user.id))
file_name = book.title
if len(book.authors) > 0:
file_name = file_name + ' - ' + book.authors[0].name
file_name = get_valid_filename(file_name, replace_whitespace=False)
headers = Headers()
headers["Content-Type"] = mimetypes.types_map.get('.' + book_format, "application/octet-stream")
headers["Content-Disposition"] = "attachment; filename=%s.%s; filename*=UTF-8''%s.%s" % (
quote(file_name), book_format, quote(file_name), book_format)
return do_download_file(book, book_format, client, data1, headers)
else:
log.error("Book id {} not found for downloading".format(book_id))
abort(404)
abort(404)
if data1:
# collect downloaded books only for registered user and not for anonymous user
if current_user.is_authenticated:
ub.update_download(book_id, int(current_user.id))
file_name = book.title
if len(book.authors) > 0:
file_name = file_name + ' - ' + book.authors[0].name
file_name = get_valid_filename(file_name, replace_whitespace=False)
headers = Headers()
headers["Content-Type"] = mimetypes.types_map.get('.' + book_format, "application/octet-stream")
headers["Content-Disposition"] = "attachment; filename=%s.%s; filename*=UTF-8''%s.%s" % (
quote(file_name.encode('utf-8')), book_format, quote(file_name.encode('utf-8')), book_format)
return do_download_file(book, book_format, client, data1, headers)
else:
abort(404)
def clear_cover_thumbnail_cache(book_id):
@ -1149,11 +1029,3 @@ def add_book_to_thumbnail_cache(book_id):
def update_thumbnail_cache():
if config.schedule_generate_book_covers:
WorkerThread.add(None, TaskGenerateCoverThumbnails())
def set_all_metadata_dirty():
WorkerThread.add(None, TaskBackupMetadata(export_language=get_locale(),
translated_title=_("Cover"),
set_dirty=True,
task_message=N_("Queue all books for metadata backup")),
hidden=False)


@ -49,24 +49,15 @@ except ImportError:
def get_language_names(locale):
names = _LANGUAGE_NAMES.get(str(locale))
if names is None:
names = _LANGUAGE_NAMES.get(locale.language)
return names
return _LANGUAGE_NAMES.get(str(locale))
def get_language_name(locale, lang_code):
UNKNOWN_TRANSLATION = "Unknown"
names = get_language_names(locale)
if names is None:
log.error(f"Missing language names for locale: {str(locale)}/{locale.language}")
return UNKNOWN_TRANSLATION
name = names.get(lang_code, UNKNOWN_TRANSLATION)
if name == UNKNOWN_TRANSLATION:
log.error("Missing translation for language name: {}".format(lang_code))
return name
try:
return get_language_names(locale)[lang_code]
except KeyError:
log.error('Missing translation for language name: {}'.format(lang_code))
return "Unknown"
def get_language_codes(locale, language_names, remainder=None):

File diff suppressed because it is too large


@ -124,7 +124,7 @@ def formatseriesindex_filter(series_index):
return int(series_index)
else:
return series_index
except (ValueError, TypeError):
except ValueError:
return series_index
return 0


@ -21,7 +21,6 @@ import base64
import datetime
import os
import uuid
import zipfile
from time import gmtime, strftime
import json
from urllib.parse import unquote
@ -46,9 +45,7 @@ import requests
from . import config, logger, kobo_auth, db, calibre_db, helper, shelf as shelf_lib, ub, csrf, kobo_sync_status
from . import isoLanguages
from .epub import get_epub_layout
from .constants import COVER_THUMBNAIL_SMALL #, sqlalchemy_version2
from .constants import sqlalchemy_version2, COVER_THUMBNAIL_SMALL
from .helper import get_download_link
from .services import SyncToken as SyncToken
from .web import download_required
@ -56,7 +53,7 @@ from .kobo_auth import requires_kobo_auth, get_auth_token
KOBO_FORMATS = {"KEPUB": ["KEPUB"], "EPUB": ["EPUB3", "EPUB"]}
KOBO_STOREAPI_URL = "https://storeapi.kobo.com"
KOBO_IMAGEHOST_URL = "https://cdn.kobo.com/book-images"
KOBO_IMAGEHOST_URL = "https://kbimages1-a.akamaihd.net"
SYNC_ITEM_LIMIT = 100
@ -137,15 +134,11 @@ def convert_to_kobo_timestamp_string(timestamp):
@kobo.route("/v1/library/sync")
@requires_kobo_auth
# @download_required
@download_required
def HandleSyncRequest():
if not current_user.role_download():
log.info("Users need download permissions for syncing library to Kobo reader")
return abort(403)
sync_token = SyncToken.SyncToken.from_headers(request.headers)
log.info("Kobo library sync request received")
log.info("Kobo library sync request received.")
log.debug("SyncToken: {}".format(sync_token))
log.debug("Download link format {}".format(get_download_url_for_book('[bookid]','[bookformat]')))
if not current_app.wsgi_app.is_proxied:
log.debug('Kobo: Received unproxied request, changed request port to external server port')
@ -169,10 +162,16 @@ def HandleSyncRequest():
only_kobo_shelves = current_user.kobo_only_shelves_sync
if only_kobo_shelves:
changed_entries = calibre_db.session.query(db.Books,
ub.ArchivedBook.last_modified,
ub.BookShelf.date_added,
ub.ArchivedBook.is_archived)
if sqlalchemy_version2:
changed_entries = select(db.Books,
ub.ArchivedBook.last_modified,
ub.BookShelf.date_added,
ub.ArchivedBook.is_archived)
else:
changed_entries = calibre_db.session.query(db.Books,
ub.ArchivedBook.last_modified,
ub.BookShelf.date_added,
ub.ArchivedBook.is_archived)
changed_entries = (changed_entries
.join(db.Data).outerjoin(ub.ArchivedBook, and_(db.Books.id == ub.ArchivedBook.book_id,
ub.ArchivedBook.user_id == current_user.id))
@ -189,9 +188,12 @@ def HandleSyncRequest():
.filter(ub.Shelf.kobo_sync)
.distinct())
else:
changed_entries = calibre_db.session.query(db.Books,
ub.ArchivedBook.last_modified,
ub.ArchivedBook.is_archived)
if sqlalchemy_version2:
changed_entries = select(db.Books, ub.ArchivedBook.last_modified, ub.ArchivedBook.is_archived)
else:
changed_entries = calibre_db.session.query(db.Books,
ub.ArchivedBook.last_modified,
ub.ArchivedBook.is_archived)
changed_entries = (changed_entries
.join(db.Data).outerjoin(ub.ArchivedBook, and_(db.Books.id == ub.ArchivedBook.book_id,
ub.ArchivedBook.user_id == current_user.id))
@ -203,12 +205,15 @@ def HandleSyncRequest():
.order_by(db.Books.id))
reading_states_in_new_entitlements = []
books = changed_entries.limit(SYNC_ITEM_LIMIT)
if sqlalchemy_version2:
books = calibre_db.session.execute(changed_entries.limit(SYNC_ITEM_LIMIT))
else:
books = changed_entries.limit(SYNC_ITEM_LIMIT)
log.debug("Books to Sync: {}".format(len(books.all())))
for book in books:
formats = [data.format for data in book.Books.data]
if 'KEPUB' not in formats and config.config_kepubifypath and 'EPUB' in formats:
helper.convert_book_format(book.Books.id, config.get_book_path(), 'EPUB', 'KEPUB', current_user.name)
helper.convert_book_format(book.Books.id, config.config_calibre_dir, 'EPUB', 'KEPUB', current_user.name)
kobo_reading_state = get_or_create_reading_state(book.Books.id)
entitlement = {
@ -221,7 +226,7 @@ def HandleSyncRequest():
new_reading_state_last_modified = max(new_reading_state_last_modified, kobo_reading_state.last_modified)
reading_states_in_new_entitlements.append(book.Books.id)
ts_created = book.Books.timestamp.replace(tzinfo=None)
ts_created = book.Books.timestamp
try:
ts_created = max(ts_created, book.date_added)
@ -234,7 +239,7 @@ def HandleSyncRequest():
sync_results.append({"ChangedEntitlement": entitlement})
new_books_last_modified = max(
book.Books.last_modified.replace(tzinfo=None), new_books_last_modified
book.Books.last_modified, new_books_last_modified
)
try:
new_books_last_modified = max(
@ -246,16 +251,27 @@ def HandleSyncRequest():
new_books_last_created = max(ts_created, new_books_last_created)
kobo_sync_status.add_synced_books(book.Books.id)
max_change = changed_entries.filter(ub.ArchivedBook.is_archived)\
.filter(ub.ArchivedBook.user_id == current_user.id) \
.order_by(func.datetime(ub.ArchivedBook.last_modified).desc()).first()
if sqlalchemy_version2:
max_change = calibre_db.session.execute(changed_entries
.filter(ub.ArchivedBook.is_archived)
.filter(ub.ArchivedBook.user_id == current_user.id)
.order_by(func.datetime(ub.ArchivedBook.last_modified).desc()))\
.columns(db.Books).first()
else:
max_change = changed_entries.from_self().filter(ub.ArchivedBook.is_archived)\
.filter(ub.ArchivedBook.user_id == current_user.id) \
.order_by(func.datetime(ub.ArchivedBook.last_modified).desc()).first()
max_change = max_change.last_modified if max_change else new_archived_last_modified
new_archived_last_modified = max(new_archived_last_modified, max_change)
# no. of books returned
book_count = changed_entries.count()
if sqlalchemy_version2:
entries = calibre_db.session.execute(changed_entries).all()
book_count = len(entries)
else:
book_count = changed_entries.count()
# last entry:
cont_sync = bool(book_count)
log.debug("Remaining books to Sync: {}".format(book_count))
@ -318,7 +334,7 @@ def generate_sync_response(sync_token, sync_results, set_cont=False):
extra_headers["x-kobo-recent-reads"] = store_response.headers.get("x-kobo-recent-reads")
except Exception as ex:
log.error_or_exception("Failed to receive or parse response from Kobo's sync endpoint: {}".format(ex))
log.error("Failed to receive or parse response from Kobo's sync endpoint: {}".format(ex))
if set_cont:
extra_headers["x-kobo-sync"] = "continue"
sync_token.to_headers(extra_headers)
@ -339,7 +355,7 @@ def HandleMetadataRequest(book_uuid):
log.info("Kobo library metadata request received for book %s" % book_uuid)
book = calibre_db.get_book_by_uuid(book_uuid)
if not book or not book.data:
log.info("Book %s not found in database", book_uuid)
log.info(u"Book %s not found in database", book_uuid)
return redirect_or_proxy_request()
metadata = get_metadata(book)
@ -348,7 +364,7 @@ def HandleMetadataRequest(book_uuid):
return response
def get_download_url_for_book(book_id, book_format):
def get_download_url_for_book(book, book_format):
if not current_app.wsgi_app.is_proxied:
if ':' in request.host and not request.host.endswith(']'):
host = "".join(request.host.split(':')[:-1])
@ -360,13 +376,13 @@ def get_download_url_for_book(book_id, book_format):
url_base=host,
url_port=config.config_external_port,
auth_token=get_auth_token(),
book_id=book_id,
book_id=book.id,
book_format=book_format.lower()
)
return url_for(
"kobo.download_book",
auth_token=kobo_auth.get_auth_token(),
book_id=book_id,
book_id=book.id,
book_format=book_format.lower(),
_external=True,
)
@ -427,12 +443,6 @@ def get_seriesindex(book):
return book.series_index or 1
def get_language(book):
if not book.languages:
return 'en'
return isoLanguages.get(part3=book.languages[0].lang_code).part1
def get_metadata(book):
download_urls = []
kepub = [data for data in book.data if data.format == 'KEPUB']
@ -442,21 +452,16 @@ def get_metadata(book):
continue
for kobo_format in KOBO_FORMATS[book_data.format]:
# log.debug('Id: %s, Format: %s' % (book.id, kobo_format))
try:
if get_epub_layout(book, book_data) == 'pre-paginated':
kobo_format = 'EPUB3FL'
download_urls.append(
{
"Format": kobo_format,
"Size": book_data.uncompressed_size,
"Url": get_download_url_for_book(book.id, book_data.format),
# The Kobo forma accepts platforms: (Generic, Android)
"Platform": "Generic",
# "DrmType": "None", # Not required
}
)
except (zipfile.BadZipfile, FileNotFoundError) as e:
log.error(e)
download_urls.append(
{
"Format": kobo_format,
"Size": book_data.uncompressed_size,
"Url": get_download_url_for_book(book, book_data.format),
# The Kobo forma accepts platforms: (Generic, Android)
"Platform": "Generic",
# "DrmType": "None", # Not required
}
)
book_uuid = book.uuid
metadata = {
@ -475,7 +480,7 @@ def get_metadata(book):
"IsInternetArchive": False,
"IsPreOrder": False,
"IsSocialEnabled": True,
"Language": get_language(book),
"Language": "en",
"PhoneticPronunciations": {},
"PublicationDate": convert_to_kobo_timestamp_string(book.pubdate),
"Publisher": {"Imprint": "", "Name": get_publisher(book), },
@ -503,7 +508,7 @@ def get_metadata(book):
@requires_kobo_auth
# Creates a Shelf with the given items, and returns the shelf's uuid.
def HandleTagCreate():
# catch delete requests, otherwise they are handled in the book delete handler
# catch delete requests, otherwise the are handled in the book delete handler
if request.method == "DELETE":
abort(405)
name, items = None, None
@ -697,12 +702,20 @@ def sync_shelves(sync_token, sync_results, only_kobo_shelves=False):
})
extra_filters.append(ub.Shelf.kobo_sync)
shelflist = ub.session.query(ub.Shelf).outerjoin(ub.BookShelf).filter(
or_(func.datetime(ub.Shelf.last_modified) > sync_token.tags_last_modified,
func.datetime(ub.BookShelf.date_added) > sync_token.tags_last_modified),
ub.Shelf.user_id == current_user.id,
*extra_filters
).distinct().order_by(func.datetime(ub.Shelf.last_modified).asc())
if sqlalchemy_version2:
shelflist = ub.session.execute(select(ub.Shelf).outerjoin(ub.BookShelf).filter(
or_(func.datetime(ub.Shelf.last_modified) > sync_token.tags_last_modified,
func.datetime(ub.BookShelf.date_added) > sync_token.tags_last_modified),
ub.Shelf.user_id == current_user.id,
*extra_filters
).distinct().order_by(func.datetime(ub.Shelf.last_modified).asc())).columns(ub.Shelf)
else:
shelflist = ub.session.query(ub.Shelf).outerjoin(ub.BookShelf).filter(
or_(func.datetime(ub.Shelf.last_modified) > sync_token.tags_last_modified,
func.datetime(ub.BookShelf.date_added) > sync_token.tags_last_modified),
ub.Shelf.user_id == current_user.id,
*extra_filters
).distinct().order_by(func.datetime(ub.Shelf.last_modified).asc())
for shelf in shelflist:
if not shelf_lib.check_shelf_view_permissions(shelf):
@ -739,7 +752,7 @@ def create_kobo_tag(shelf):
for book_shelf in shelf.books:
book = calibre_db.get_book(book_shelf.book_id)
if not book:
log.info("Book (id: %s) in BookShelf (id: %s) not found in book database", book_shelf.book_id, shelf.id)
log.info(u"Book (id: %s) in BookShelf (id: %s) not found in book database", book_shelf.book_id, shelf.id)
continue
tag["Items"].append(
{
@ -756,7 +769,7 @@ def create_kobo_tag(shelf):
def HandleStateRequest(book_uuid):
book = calibre_db.get_book_by_uuid(book_uuid)
if not book or not book.data:
log.info("Book %s not found in database", book_uuid)
log.info(u"Book %s not found in database", book_uuid)
return redirect_or_proxy_request()
kobo_reading_state = get_or_create_reading_state(book.id)
@ -903,26 +916,20 @@ def get_current_bookmark_response(current_bookmark):
@kobo.route("/<book_uuid>/<width>/<height>/<Quality>/<isGreyscale>/image.jpg")
@requires_kobo_auth
def HandleCoverImageRequest(book_uuid, width, height, Quality, isGreyscale):
try:
resolution = None if int(height) > 1000 else COVER_THUMBNAIL_SMALL
except ValueError:
log.error("Requested height %s of book %s is invalid" % (book_uuid, height))
resolution = COVER_THUMBNAIL_SMALL
book_cover = helper.get_book_cover_with_uuid(book_uuid, resolution=resolution)
if book_cover:
log.debug("Serving local cover image of book %s" % book_uuid)
return book_cover
if not config.config_kobo_proxy:
log.debug("Returning 404 for cover image of unknown book %s" % book_uuid)
# additional proxy request make no sense, -> direct return
return abort(404)
log.debug("Redirecting request for cover image of unknown book %s to Kobo" % book_uuid)
return redirect(KOBO_IMAGEHOST_URL +
"/{book_uuid}/{width}/{height}/false/image.jpg".format(book_uuid=book_uuid,
width=width,
height=height), 307)
book_cover = helper.get_book_cover_with_uuid(book_uuid, resolution=COVER_THUMBNAIL_SMALL)
if not book_cover:
if config.config_kobo_proxy:
log.debug("Cover for unknown book: %s proxied to kobo" % book_uuid)
return redirect(KOBO_IMAGEHOST_URL +
"/{book_uuid}/{width}/{height}/false/image.jpg".format(book_uuid=book_uuid,
width=width,
height=height), 307)
else:
log.debug("Cover for unknown book: %s requested" % book_uuid)
# additional proxy request make no sense, -> direct return
return make_response(jsonify({}))
log.debug("Cover request received for book %s" % book_uuid)
return book_cover
@kobo.route("")
@ -937,7 +944,7 @@ def HandleBookDeletionRequest(book_uuid):
log.info("Kobo book delete request received for book %s" % book_uuid)
book = calibre_db.get_book_by_uuid(book_uuid)
if not book:
log.info("Book %s not found in database", book_uuid)
log.info(u"Book %s not found in database", book_uuid)
return redirect_or_proxy_request()
book_id = book.id
@ -951,7 +958,7 @@ def HandleBookDeletionRequest(book_uuid):
@csrf.exempt
@kobo.route("/v1/library/<dummy>", methods=["DELETE", "GET"])
def HandleUnimplementedRequest(dummy=None):
log.debug("Unimplemented Library Request received: %s (request is forwarded to kobo if configured)", request.base_url)
log.debug("Unimplemented Library Request received: %s", request.base_url)
return redirect_or_proxy_request()
@ -962,9 +969,8 @@ def HandleUnimplementedRequest(dummy=None):
@kobo.route("/v1/user/wishlist", methods=["GET", "POST"])
@kobo.route("/v1/user/recommendations", methods=["GET", "POST"])
@kobo.route("/v1/analytics/<dummy>", methods=["GET", "POST"])
@kobo.route("/v1/assets", methods=["GET"])
def HandleUserRequest(dummy=None):
log.debug("Unimplemented User Request received: %s (request is forwarded to kobo if configured)", request.base_url)
log.debug("Unimplemented User Request received: %s", request.base_url)
return redirect_or_proxy_request()
@ -1004,7 +1010,7 @@ def handle_getests():
@kobo.route("/v1/affiliate", methods=["GET", "POST"])
@kobo.route("/v1/deals", methods=["GET", "POST"])
def HandleProductsRequest(dummy=None):
log.debug("Unimplemented Products Request received: %s (request is forwarded to kobo if configured)", request.base_url)
log.debug("Unimplemented Products Request received: %s", request.base_url)
return redirect_or_proxy_request()
@ -1021,7 +1027,7 @@ def make_calibre_web_auth_response():
"RefreshToken": RefreshToken,
"TokenType": "Bearer",
"TrackingId": str(uuid.uuid4()),
"UserKey": content.get('UserKey',""),
"UserKey": content['UserKey'],
}
)
)


@ -64,12 +64,11 @@ from datetime import datetime
from os import urandom
from functools import wraps
from flask import g, Blueprint, abort, request
from flask import g, Blueprint, url_for, abort, request
from flask_login import login_user, current_user, login_required
from flask_babel import gettext as _
from flask_limiter import RateLimitExceeded
from . import logger, config, calibre_db, db, helper, ub, lm, limiter
from . import logger, config, calibre_db, db, helper, ub, lm
from .render_template import render_title_template
log = logger.create()
@ -113,7 +112,7 @@ def generate_auth_token(user_id):
return render_title_template(
"generate_kobo_auth_url.html",
title=_("Kobo Setup"),
title=_(u"Kobo Setup"),
auth_token=auth_token.auth_token,
warning = warning
)
@ -152,13 +151,6 @@ def requires_kobo_auth(f):
def inner(*args, **kwargs):
auth_token = get_auth_token()
if auth_token is not None:
try:
limiter.check()
except RateLimitExceeded:
return abort(429)
except (ConnectionError, Exception) as e:
log.error("Connection error to limiter backend: %s", e)
return abort(429)
user = (
ub.session.query(ub.User)
.join(ub.RemoteAuthToken)
@ -167,8 +159,7 @@ def requires_kobo_auth(f):
)
if user is not None:
login_user(user)
[limiter.limiter.storage.clear(k.key) for k in limiter.current_limits]
return f(*args, **kwargs)
log.debug("Received Kobo request without a recognizable auth token.")
return abort(401)
log.debug("Received Kobo request without a recognizable auth token.")
return abort(401)
return inner


@ -43,14 +43,13 @@ logging.addLevelName(logging.CRITICAL, "CRIT")
class _Logger(logging.Logger):
def error_or_exception(self, message, stacklevel=2, *args, **kwargs):
is_debug = self.getEffectiveLevel() <= logging.DEBUG
if sys.version_info > (3, 7):
if is_debug:
if is_debug_enabled():
self.exception(message, stacklevel=stacklevel, *args, **kwargs)
else:
self.error(message, stacklevel=stacklevel, *args, **kwargs)
else:
if is_debug:
if is_debug_enabled():
self.exception(message, stack_info=True, *args, **kwargs)
else:
self.error(message, *args, **kwargs)
@ -151,7 +150,7 @@ def setup(log_file, log_level=None):
else:
try:
file_handler = RotatingFileHandler(log_file, maxBytes=100000, backupCount=2, encoding='utf-8')
except (IOError, PermissionError):
except IOError:
if log_file == DEFAULT_LOG_FILE:
raise
file_handler = RotatingFileHandler(DEFAULT_LOG_FILE, maxBytes=100000, backupCount=2, encoding='utf-8')
@ -178,7 +177,7 @@ def create_access_log(log_file, log_name, formatter):
access_log.setLevel(logging.INFO)
try:
file_handler = RotatingFileHandler(log_file, maxBytes=50000, backupCount=2, encoding='utf-8')
except (IOError, PermissionError):
except IOError:
if log_file == DEFAULT_ACCESS_LOG:
raise
file_handler = RotatingFileHandler(DEFAULT_ACCESS_LOG, maxBytes=50000, backupCount=2, encoding='utf-8')


@ -18,14 +18,9 @@
import sys
from . import create_app, limiter
from . import create_app
from .jinjia import jinjia
from .remotelogin import remotelogin
from flask import request
def request_username():
return request.authorization.username
def main():
app = create_app()
@ -44,7 +39,6 @@ def main():
try:
from .kobo import kobo, get_kobo_activated
from .kobo_auth import kobo_auth
from flask_limiter.util import get_remote_address
kobo_available = get_kobo_activated()
except (ImportError, AttributeError): # Catch also error for not installed flask-WTF (missing csrf decorator)
kobo_available = False
@ -62,7 +56,6 @@ def main():
app.register_blueprint(tasks)
app.register_blueprint(web)
app.register_blueprint(opds)
limiter.limit("3/minute",key_func=request_username)(opds)
app.register_blueprint(jinjia)
app.register_blueprint(about)
app.register_blueprint(shelf)
@ -74,7 +67,6 @@ def main():
if kobo_available:
app.register_blueprint(kobo)
app.register_blueprint(kobo_auth)
limiter.limit("3/minute", key_func=get_remote_address)(kobo)
if oauth_available:
app.register_blueprint(oauth)
success = web_server.start()


@ -63,11 +63,11 @@ class Amazon(Metadata):
r.raise_for_status()
except Exception as ex:
log.warning(ex)
return None
return
long_soup = BS(r.text, "lxml") #~4sec :/
soup2 = long_soup.find("div", attrs={"cel_widget_id": "dpx-books-ppd_csm_instrumentation_wrapper"})
if soup2 is None:
return None
return
try:
match = MetaRecord(
title = "",
@ -98,7 +98,7 @@ class Amazon(Metadata):
try:
match.authors = [next(
filter(lambda i: i != " " and i != "\n" and not i.startswith("{"),
x.findAll(string=True))).strip()
x.findAll(text=True))).strip()
for x in soup2.findAll("span", attrs={"class": "author"})]
except (AttributeError, TypeError, StopIteration):
match.authors = ""
@ -115,7 +115,7 @@ class Amazon(Metadata):
return match, index
except Exception as e:
log.error_or_exception(e)
return None
return
val = list()
if self.active:
@ -127,10 +127,10 @@ class Amazon(Metadata):
results.raise_for_status()
except requests.exceptions.HTTPError as e:
log.error_or_exception(e)
return []
return None
except Exception as e:
log.warning(e)
return []
return None
soup = BS(results.text, 'html.parser')
links_list = [next(filter(lambda i: "digital-text" in i["href"], x.findAll("a")))["href"] for x in
soup.findAll("div", attrs={"data-component-type": "s-search-result"})]


@ -43,8 +43,7 @@ class Douban(Metadata):
__id__ = "douban"
DESCRIPTION = "豆瓣"
META_URL = "https://book.douban.com/"
SEARCH_JSON_URL = "https://www.douban.com/j/search"
SEARCH_URL = "https://www.douban.com/search"
SEARCH_URL = "https://www.douban.com/j/search"
ID_PATTERN = re.compile(r"sid: (?P<id>\d+),")
AUTHORS_PATTERN = re.compile(r"作者|译者")
@ -53,7 +52,6 @@ class Douban(Metadata):
PUBLISHED_DATE_PATTERN = re.compile(r"出版年")
SERIES_PATTERN = re.compile(r"丛书")
IDENTIFIERS_PATTERN = re.compile(r"ISBN|统一书号")
CRITERIA_PATTERN = re.compile("criteria = '(.+)'")
TITTLE_XPATH = "//span[@property='v:itemreviewed']"
COVER_XPATH = "//a[@class='nbg']"
@ -65,90 +63,56 @@ class Douban(Metadata):
session = requests.Session()
session.headers = {
'user-agent':
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36 Edg/98.0.1108.56',
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36 Edg/98.0.1108.56',
}
def search(self,
query: str,
generic_cover: str = "",
locale: str = "en") -> List[MetaRecord]:
val = []
def search(
self, query: str, generic_cover: str = "", locale: str = "en"
) -> Optional[List[MetaRecord]]:
if self.active:
log.debug(f"start searching {query} on douban")
log.debug(f"starting search {query} on douban")
if title_tokens := list(
self.get_title_tokens(query, strip_joiners=False)):
self.get_title_tokens(query, strip_joiners=False)
):
query = "+".join(title_tokens)
book_id_list = self._get_book_id_list_from_html(query)
try:
r = self.session.get(
self.SEARCH_URL, params={"cat": 1001, "q": query}
)
r.raise_for_status()
if not book_id_list:
log.debug("No search results in Douban")
except Exception as e:
log.warning(e)
return None
results = r.json()
if results["total"] == 0:
return []
with futures.ThreadPoolExecutor(
max_workers=5, thread_name_prefix='douban') as executor:
book_id_list = [
self.ID_PATTERN.search(item).group("id")
for item in results["items"][:10] if self.ID_PATTERN.search(item)
]
with futures.ThreadPoolExecutor(max_workers=5) as executor:
fut = [
executor.submit(self._parse_single_book, book_id,
generic_cover) for book_id in book_id_list
executor.submit(self._parse_single_book, book_id, generic_cover)
for book_id in book_id_list
]
val = [
future.result() for future in futures.as_completed(fut)
if future.result()
future.result()
for future in futures.as_completed(fut) if future.result()
]
return val
def _get_book_id_list_from_html(self, query: str) -> List[str]:
try:
r = self.session.get(self.SEARCH_URL,
params={
"cat": 1001,
"q": query
})
r.raise_for_status()
except Exception as e:
log.warning(e)
return []
html = etree.HTML(r.content.decode("utf8"))
result_list = html.xpath(self.COVER_XPATH)
return [
self.ID_PATTERN.search(item.get("onclick")).group("id")
for item in result_list[:10]
if self.ID_PATTERN.search(item.get("onclick"))
]
def _get_book_id_list_from_json(self, query: str) -> List[str]:
try:
r = self.session.get(self.SEARCH_JSON_URL,
params={
"cat": 1001,
"q": query
})
r.raise_for_status()
except Exception as e:
log.warning(e)
return []
results = r.json()
if results["total"] == 0:
return []
return [
self.ID_PATTERN.search(item).group("id")
for item in results["items"][:10] if self.ID_PATTERN.search(item)
]
def _parse_single_book(self,
id: str,
generic_cover: str = "") -> Optional[MetaRecord]:
def _parse_single_book(
self, id: str, generic_cover: str = ""
) -> Optional[MetaRecord]:
url = f"https://book.douban.com/subject/{id}/"
log.debug(f"start parsing {url}")
try:
r = self.session.get(url)
@ -169,12 +133,10 @@ class Douban(Metadata):
),
)
decode_content = r.content.decode("utf8")
html = etree.HTML(decode_content)
html = etree.HTML(r.content.decode("utf8"))
match.title = html.xpath(self.TITTLE_XPATH)[0].text
match.cover = html.xpath(
self.COVER_XPATH)[0].attrib["href"] or generic_cover
match.cover = html.xpath(self.COVER_XPATH)[0].attrib["href"] or generic_cover
try:
rating_num = float(html.xpath(self.RATING_XPATH)[0].text.strip())
except Exception:
@ -184,39 +146,35 @@ class Douban(Metadata):
tag_elements = html.xpath(self.TAGS_XPATH)
if len(tag_elements):
match.tags = [tag_element.text for tag_element in tag_elements]
else:
match.tags = self._get_tags(decode_content)
description_element = html.xpath(self.DESCRIPTION_XPATH)
if len(description_element):
match.description = html2text(
etree.tostring(description_element[-1]).decode("utf8"))
match.description = html2text(etree.tostring(
description_element[-1], encoding="utf8").decode("utf8"))
info = html.xpath(self.INFO_XPATH)
for element in info:
text = element.text
if self.AUTHORS_PATTERN.search(text):
next_element = element.getnext()
while next_element is not None and next_element.tag != "br":
match.authors.append(next_element.text)
next_element = next_element.getnext()
next = element.getnext()
while next is not None and next.tag != "br":
match.authors.append(next.text)
next = next.getnext()
elif self.PUBLISHER_PATTERN.search(text):
if publisher := element.tail.strip():
match.publisher = publisher
else:
match.publisher = element.getnext().text
match.publisher = element.tail.strip()
elif self.SUBTITLE_PATTERN.search(text):
match.title = f'{match.title}:{element.tail.strip()}'
match.title = f'{match.title}:' + element.tail.strip()
elif self.PUBLISHED_DATE_PATTERN.search(text):
match.publishedDate = self._clean_date(element.tail.strip())
elif self.SERIES_PATTERN.search(text):
elif self.SUBTITLE_PATTERN.search(text):
match.series = element.getnext().text
elif i_type := self.IDENTIFIERS_PATTERN.search(text):
match.identifiers[i_type.group()] = element.tail.strip()
return match
def _clean_date(self, date: str) -> str:
"""
Clean up the date string to be in the format YYYY-MM-DD
@ -236,24 +194,13 @@ class Douban(Metadata):
if date[i].isdigit():
digit.append(date[i])
elif digit:
ls.append("".join(digit) if len(digit) ==
2 else f"0{digit[0]}")
ls.append("".join(digit) if len(digit)==2 else f"0{digit[0]}")
digit = []
if digit:
ls.append("".join(digit) if len(digit) ==
2 else f"0{digit[0]}")
ls.append("".join(digit) if len(digit)==2 else f"0{digit[0]}")
moon = ls[0]
if len(ls) > 1:
day = ls[1]
if len(ls)>1:
day = ls[1]
return f"{year}-{moon}-{day}"
def _get_tags(self, text: str) -> List[str]:
tags = []
if criteria := self.CRITERIA_PATTERN.search(text):
tags.extend(
item.replace('7:', '') for item in criteria.group().split('|')
if item.startswith('7:'))
return tags


@ -19,7 +19,6 @@
# Google Books api document: https://developers.google.com/books/docs/v1/using
from typing import Dict, List, Optional
from urllib.parse import quote
from datetime import datetime
import requests
@ -82,11 +81,7 @@ class Google(Metadata):
match.description = result["volumeInfo"].get("description", "")
match.languages = self._parse_languages(result=result, locale=locale)
match.publisher = result["volumeInfo"].get("publisher", "")
try:
datetime.strptime(result["volumeInfo"].get("publishedDate", ""), "%Y-%m-%d")
match.publishedDate = result["volumeInfo"].get("publishedDate", "")
except ValueError:
match.publishedDate = ""
match.publishedDate = result["volumeInfo"].get("publishedDate", "")
match.rating = result["volumeInfo"].get("averageRating", 0)
match.series, match.series_index = "", 1
match.tags = result["volumeInfo"].get("categories", [])
@ -108,13 +103,6 @@ class Google(Metadata):
def _parse_cover(result: Dict, generic_cover: str) -> str:
if result["volumeInfo"].get("imageLinks"):
cover_url = result["volumeInfo"]["imageLinks"]["thumbnail"]
# strip curl in cover
cover_url = cover_url.replace("&edge=curl", "")
# request 800x900 cover image (higher resolution)
cover_url += "&fife=w800-h900"
return cover_url.replace("http://", "https://")
return generic_cover


@ -97,14 +97,12 @@ class LubimyCzytac(Metadata):
LANGUAGES = f"{CONTAINER}//dt[contains(text(),'Język:')]{SIBLINGS}/text()"
DESCRIPTION = f"{CONTAINER}//div[@class='collapse-content']"
SERIES = f"{CONTAINER}//span/a[contains(@href,'/cykl/')]/text()"
TRANSLATOR = f"{CONTAINER}//dt[contains(text(),'Tłumacz:')]{SIBLINGS}/a/text()"
DETAILS = "//div[@id='book-details']"
PUBLISH_DATE = "//dt[contains(@title,'Data pierwszego wydania"
FIRST_PUBLISH_DATE = f"{DETAILS}{PUBLISH_DATE} oryginalnego')]{SIBLINGS}[1]/text()"
FIRST_PUBLISH_DATE_PL = f"{DETAILS}{PUBLISH_DATE} polskiego')]{SIBLINGS}[1]/text()"
TAGS = "//a[contains(@href,'/ksiazki/t/')]/text()" # "//nav[@aria-label='breadcrumbs']//a[contains(@href,'/ksiazki/k/')]/span/text()"
TAGS = "//nav[@aria-label='breadcrumb']//a[contains(@href,'/ksiazki/k/')]/text()"
RATING = "//meta[@property='books:rating:value']/@content"
COVER = "//meta[@property='og:image']/@content"
@ -160,7 +158,6 @@ class LubimyCzytac(Metadata):
class LubimyCzytacParser:
PAGES_TEMPLATE = "<p id='strony'>Książka ma {0} stron(y).</p>"
TRANSLATOR_TEMPLATE = "<p id='translator'>Tłumacz: {0}</p>"
PUBLISH_DATE_TEMPLATE = "<p id='pierwsze_wydanie'>Data pierwszego wydania: {0}</p>"
PUBLISH_DATE_PL_TEMPLATE = (
"<p id='pierwsze_wydanie'>Data pierwszego wydania w Polsce: {0}</p>"
@ -349,9 +346,5 @@ class LubimyCzytacParser:
description += LubimyCzytacParser.PUBLISH_DATE_PL_TEMPLATE.format(
first_publish_date_pl.strftime("%d.%m.%Y")
)
translator = self._parse_xpath_node(xpath=LubimyCzytac.TRANSLATOR)
if translator:
description += LubimyCzytacParser.TRANSLATOR_TEMPLATE.format(translator)
return description


@ -54,7 +54,7 @@ class scholar(Metadata):
scholar_gen = itertools.islice(scholarly.search_pubs(query), 10)
except Exception as e:
log.warning(e)
return list()
return None
for result in scholar_gen:
match = self._parse_search_result(
result=result, generic_cover="", locale=locale


@ -74,7 +74,7 @@ def register_user_with_oauth(user=None):
if len(all_oauth.keys()) == 0:
return
if user is None:
flash(_("Register with %(provider)s", provider=", ".join(list(all_oauth.values()))), category="success")
flash(_(u"Register with %(provider)s", provider=", ".join(list(all_oauth.values()))), category="success")
else:
for oauth_key in all_oauth.keys():
# Find this OAuth token in the database, or create it
@ -134,8 +134,8 @@ def bind_oauth_or_register(provider_id, provider_user_id, redirect_url, provider
# already bind with user, just login
if oauth_entry.user:
login_user(oauth_entry.user)
log.debug("You are now logged in as: '%s'", oauth_entry.user.name)
flash(_("Success! You are now logged in as: %(nickname)s", nickname= oauth_entry.user.name),
log.debug(u"You are now logged in as: '%s'", oauth_entry.user.name)
flash(_(u"you are now logged in as: '%(nickname)s'", nickname= oauth_entry.user.name),
category="success")
return redirect(url_for('web.index'))
else:
@ -145,21 +145,21 @@ def bind_oauth_or_register(provider_id, provider_user_id, redirect_url, provider
try:
ub.session.add(oauth_entry)
ub.session.commit()
flash(_("Link to %(oauth)s Succeeded", oauth=provider_name), category="success")
flash(_(u"Link to %(oauth)s Succeeded", oauth=provider_name), category="success")
log.info("Link to {} Succeeded".format(provider_name))
return redirect(url_for('web.profile'))
except Exception as ex:
log.error_or_exception(ex)
ub.session.rollback()
else:
flash(_("Login failed, No User Linked With OAuth Account"), category="error")
flash(_(u"Login failed, No User Linked With OAuth Account"), category="error")
log.info('Login failed, No User Linked With OAuth Account')
return redirect(url_for('web.login'))
# return redirect(url_for('web.login'))
# if config.config_public_reg:
# return redirect(url_for('web.register'))
# else:
# flash(_("Public registration is not enabled"), category="error")
# flash(_(u"Public registration is not enabled"), category="error")
# return redirect(url_for(redirect_url))
except (NoResultFound, AttributeError):
return redirect(url_for(redirect_url))
@ -194,15 +194,15 @@ def unlink_oauth(provider):
ub.session.delete(oauth_entry)
ub.session.commit()
logout_oauth_user()
flash(_("Unlink to %(oauth)s Succeeded", oauth=oauth_check[provider]), category="success")
flash(_(u"Unlink to %(oauth)s Succeeded", oauth=oauth_check[provider]), category="success")
log.info("Unlink to {} Succeeded".format(oauth_check[provider]))
except Exception as ex:
log.error_or_exception(ex)
ub.session.rollback()
flash(_("Unlink to %(oauth)s Failed", oauth=oauth_check[provider]), category="error")
flash(_(u"Unlink to %(oauth)s Failed", oauth=oauth_check[provider]), category="error")
except NoResultFound:
log.warning("oauth %s for user %d not found", provider, current_user.id)
flash(_("Not Linked to %(oauth)s", oauth=provider), category="error")
flash(_(u"Not Linked to %(oauth)s", oauth=provider), category="error")
return redirect(url_for('web.profile'))
def generate_oauth_blueprints():
@ -258,13 +258,13 @@ if ub.oauth_support:
@oauth_authorized.connect_via(oauthblueprints[0]['blueprint'])
def github_logged_in(blueprint, token):
if not token:
flash(_("Failed to log in with GitHub."), category="error")
flash(_(u"Failed to log in with GitHub."), category="error")
log.error("Failed to log in with GitHub")
return False
resp = blueprint.session.get("/user")
if not resp.ok:
flash(_("Failed to fetch user info from GitHub."), category="error")
flash(_(u"Failed to fetch user info from GitHub."), category="error")
log.error("Failed to fetch user info from GitHub")
return False
@ -276,13 +276,13 @@ if ub.oauth_support:
@oauth_authorized.connect_via(oauthblueprints[1]['blueprint'])
def google_logged_in(blueprint, token):
if not token:
flash(_("Failed to log in with Google."), category="error")
flash(_(u"Failed to log in with Google."), category="error")
log.error("Failed to log in with Google")
return False
resp = blueprint.session.get("/oauth2/v2/userinfo")
if not resp.ok:
flash(_("Failed to fetch user info from Google."), category="error")
flash(_(u"Failed to fetch user info from Google."), category="error")
log.error("Failed to fetch user info from Google")
return False
@ -295,8 +295,8 @@ if ub.oauth_support:
@oauth_error.connect_via(oauthblueprints[0]['blueprint'])
def github_error(blueprint, error, error_description=None, error_uri=None):
msg = (
"OAuth error from {name}! "
"error={error} description={description} uri={uri}"
u"OAuth error from {name}! "
u"error={error} description={description} uri={uri}"
).format(
name=blueprint.name,
error=error,
@ -308,8 +308,8 @@ if ub.oauth_support:
@oauth_error.connect_via(oauthblueprints[1]['blueprint'])
def google_error(blueprint, error, error_description=None, error_uri=None):
msg = (
"OAuth error from {name}! "
"error={error} description={description} uri={uri}"
u"OAuth error from {name}! "
u"error={error} description={description} uri={uri}"
).format(
name=blueprint.name,
error=error,
@ -329,10 +329,10 @@ def github_login():
if account_info.ok:
account_info_json = account_info.json()
return bind_oauth_or_register(oauthblueprints[0]['id'], account_info_json['id'], 'github.login', 'github')
flash(_("GitHub Oauth error, please retry later."), category="error")
flash(_(u"GitHub Oauth error, please retry later."), category="error")
log.error("GitHub Oauth error, please retry later")
except (InvalidGrantError, TokenExpiredError) as e:
flash(_("GitHub Oauth error: {}").format(e), category="error")
flash(_(u"GitHub Oauth error: {}").format(e), category="error")
log.error(e)
return redirect(url_for('web.login'))
@ -353,10 +353,10 @@ def google_login():
if resp.ok:
account_info_json = resp.json()
return bind_oauth_or_register(oauthblueprints[1]['id'], account_info_json['id'], 'google.login', 'google')
flash(_("Google Oauth error, please retry later."), category="error")
flash(_(u"Google Oauth error, please retry later."), category="error")
log.error("Google Oauth error, please retry later")
except (InvalidGrantError, TokenExpiredError) as e:
flash(_("Google Oauth error: {}").format(e), category="error")
flash(_(u"Google Oauth error: {}").format(e), category="error")
log.error(e)
return redirect(url_for('web.login'))


@ -21,28 +21,41 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import datetime
import json
from urllib.parse import unquote_plus
from functools import wraps
from flask import Blueprint, request, render_template, make_response, abort, Response, g
from flask import Blueprint, request, render_template, Response, g, make_response, abort
from flask_login import current_user
from flask_babel import get_locale
from flask_babel import gettext as _
from sqlalchemy.sql.expression import func, text, or_, and_, true
from sqlalchemy.exc import InvalidRequestError, OperationalError
from werkzeug.security import check_password_hash
from . import logger, config, db, calibre_db, ub, isoLanguages, constants
from .usermanagement import requires_basic_auth_if_no_ano
from . import constants, logger, config, db, calibre_db, ub, services, isoLanguages
from .helper import get_download_link, get_book_cover
from .pagination import Pagination
from .web import render_read_books
from .usermanagement import load_user_from_request
from flask_babel import gettext as _
opds = Blueprint('opds', __name__)
log = logger.create()
def requires_basic_auth_if_no_ano(f):
@wraps(f)
def decorated(*args, **kwargs):
auth = request.authorization
if config.config_anonbrowse != 1:
if not auth or auth.type != 'basic' or not check_auth(auth.username, auth.password):
return authenticate()
return f(*args, **kwargs)
if config.config_login_type == constants.LOGIN_LDAP and services.ldap and config.config_anonbrowse != 1:
return services.ldap.basic_auth_required(f)
return decorated
@opds.route("/opds/")
@opds.route("/opds")
@requires_basic_auth_if_no_ano
@ -56,7 +69,7 @@ def feed_osd():
return render_xml_template('osd.xml', lang='en-EN')
# @opds.route("/opds/search", defaults={'query': ""})
@opds.route("/opds/search", defaults={'query': ""})
@opds.route("/opds/search/<path:query>")
@requires_basic_auth_if_no_ano
def feed_cc_search(query):
@ -94,8 +107,6 @@ def feed_letter_books(book_id):
@opds.route("/opds/new")
@requires_basic_auth_if_no_ano
def feed_new():
if not current_user.check_visibility(constants.SIDEBAR_RECENT):
abort(404)
off = request.args.get("offset") or 0
entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
db.Books, True, [db.Books.timestamp.desc()],
@ -106,8 +117,6 @@ def feed_new():
@opds.route("/opds/discover")
@requires_basic_auth_if_no_ano
def feed_discover():
if not current_user.check_visibility(constants.SIDEBAR_RANDOM):
abort(404)
query = calibre_db.generate_linked_query(config.config_read_column, db.Books)
entries = query.filter(calibre_db.common_filters()).order_by(func.random()).limit(config.config_books_per_page)
pagination = Pagination(1, config.config_books_per_page, int(config.config_books_per_page))
@ -117,8 +126,6 @@ def feed_discover():
@opds.route("/opds/rated")
@requires_basic_auth_if_no_ano
def feed_best_rated():
if not current_user.check_visibility(constants.SIDEBAR_BEST_RATED):
abort(404)
off = request.args.get("offset") or 0
entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
db.Books, db.Books.ratings.any(db.Ratings.rating > 9),
@ -130,8 +137,6 @@ def feed_best_rated():
@opds.route("/opds/hot")
@requires_basic_auth_if_no_ano
def feed_hot():
if not current_user.check_visibility(constants.SIDEBAR_HOT):
abort(404)
off = request.args.get("offset") or 0
all_books = ub.session.query(ub.Downloads, func.count(ub.Downloads.book_id)).order_by(
func.count(ub.Downloads.book_id).desc()).group_by(ub.Downloads.book_id)
@ -154,16 +159,12 @@ def feed_hot():
@opds.route("/opds/author")
@requires_basic_auth_if_no_ano
def feed_authorindex():
if not current_user.check_visibility(constants.SIDEBAR_AUTHOR):
abort(404)
return render_element_index(db.Authors.sort, db.books_authors_link, 'opds.feed_letter_author')
@opds.route("/opds/author/letter/<book_id>")
@requires_basic_auth_if_no_ano
def feed_letter_author(book_id):
if not current_user.check_visibility(constants.SIDEBAR_AUTHOR):
abort(404)
off = request.args.get("offset") or 0
letter = true() if book_id == "00" else func.upper(db.Authors.sort).startswith(book_id)
entries = calibre_db.session.query(db.Authors).join(db.books_authors_link).join(db.Books)\
@ -185,8 +186,6 @@ def feed_author(book_id):
@opds.route("/opds/publisher")
@requires_basic_auth_if_no_ano
def feed_publisherindex():
if not current_user.check_visibility(constants.SIDEBAR_PUBLISHER):
abort(404)
off = request.args.get("offset") or 0
entries = calibre_db.session.query(db.Publishers)\
.join(db.books_publishers_link)\
@ -208,16 +207,12 @@ def feed_publisher(book_id):
@opds.route("/opds/category")
@requires_basic_auth_if_no_ano
def feed_categoryindex():
if not current_user.check_visibility(constants.SIDEBAR_CATEGORY):
abort(404)
return render_element_index(db.Tags.name, db.books_tags_link, 'opds.feed_letter_category')
@opds.route("/opds/category/letter/<book_id>")
@requires_basic_auth_if_no_ano
def feed_letter_category(book_id):
if not current_user.check_visibility(constants.SIDEBAR_CATEGORY):
abort(404)
off = request.args.get("offset") or 0
letter = true() if book_id == "00" else func.upper(db.Tags.name).startswith(book_id)
entries = calibre_db.session.query(db.Tags)\
@ -241,16 +236,12 @@ def feed_category(book_id):
@opds.route("/opds/series")
@requires_basic_auth_if_no_ano
def feed_seriesindex():
if not current_user.check_visibility(constants.SIDEBAR_SERIES):
abort(404)
return render_element_index(db.Series.sort, db.books_series_link, 'opds.feed_letter_series')
@opds.route("/opds/series/letter/<book_id>")
@requires_basic_auth_if_no_ano
def feed_letter_series(book_id):
if not current_user.check_visibility(constants.SIDEBAR_SERIES):
abort(404)
off = request.args.get("offset") or 0
letter = true() if book_id == "00" else func.upper(db.Series.sort).startswith(book_id)
entries = calibre_db.session.query(db.Series)\
@ -280,8 +271,6 @@ def feed_series(book_id):
@opds.route("/opds/ratings")
@requires_basic_auth_if_no_ano
def feed_ratingindex():
if not current_user.check_visibility(constants.SIDEBAR_RATING):
abort(404)
off = request.args.get("offset") or 0
entries = calibre_db.session.query(db.Ratings, func.count('books_ratings_link.book').label('count'),
(db.Ratings.rating / 2).label('name')) \
@ -308,8 +297,6 @@ def feed_ratings(book_id):
@opds.route("/opds/formats")
@requires_basic_auth_if_no_ano
def feed_formatindex():
if not current_user.check_visibility(constants.SIDEBAR_FORMAT):
abort(404)
off = request.args.get("offset") or 0
entries = calibre_db.session.query(db.Data).join(db.Books)\
.filter(calibre_db.common_filters()) \
@ -317,6 +304,7 @@ def feed_formatindex():
.order_by(db.Data.format).all()
pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
len(entries))
element = list()
for entry in entries:
element.append(FeedObject(entry.format, entry.format))
@ -339,10 +327,8 @@ def feed_format(book_id):
@opds.route("/opds/language/")
@requires_basic_auth_if_no_ano
def feed_languagesindex():
if not current_user.check_visibility(constants.SIDEBAR_LANGUAGE):
abort(404)
off = request.args.get("offset") or 0
if current_user.filter_language() == "all":
if current_user.filter_language() == u"all":
languages = calibre_db.speaking_language()
else:
languages = calibre_db.session.query(db.Languages).filter(
@ -368,11 +354,8 @@ def feed_languages(book_id):
@opds.route("/opds/shelfindex")
@requires_basic_auth_if_no_ano
def feed_shelfindex():
if not (current_user.is_authenticated or g.allow_anonymous):
abort(404)
off = request.args.get("offset") or 0
shelf = ub.session.query(ub.Shelf).filter(
or_(ub.Shelf.is_public == 1, ub.Shelf.user_id == current_user.id)).order_by(ub.Shelf.name).all()
shelf = g.shelves_access
number = len(shelf)
pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
number)
@ -382,8 +365,6 @@ def feed_shelfindex():
@opds.route("/opds/shelf/<int:book_id>")
@requires_basic_auth_if_no_ano
def feed_shelf(book_id):
if not (current_user.is_authenticated or g.allow_anonymous):
abort(404)
off = request.args.get("offset") or 0
if current_user.is_anonymous:
shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.is_public == 1,
@ -421,7 +402,11 @@ def feed_shelf(book_id):
@opds.route("/opds/download/<book_id>/<book_format>/")
@requires_basic_auth_if_no_ano
def opds_download_link(book_id, book_format):
if not current_user.role_download():
# I gave up with this: With enabled ldap login, the user doesn't get logged in, therefore it's always guest
# workaround, loading the user from the request and checking its download rights here
# in case of anonymous browsing user is None
user = load_user_from_request(request) or current_user
if not user.role_download():
return abort(403)
if "Kobo" in request.headers.get('User-Agent'):
client = "kobo"
@ -444,17 +429,6 @@ def get_metadata_calibre_companion(uuid, library):
return ""
@opds.route("/opds/stats")
@requires_basic_auth_if_no_ano
def get_database_stats():
stat = dict()
stat['books'] = calibre_db.session.query(db.Books).count()
stat['authors'] = calibre_db.session.query(db.Authors).count()
stat['categories'] = calibre_db.session.query(db.Tags).count()
stat['series'] = calibre_db.session.query(db.Series).count()
return Response(json.dumps(stat), mimetype="application/json")
@opds.route("/opds/thumb_240_240/<book_id>")
@opds.route("/opds/cover_240_240/<book_id>")
@opds.route("/opds/cover_90_90/<book_id>")
@ -467,8 +441,6 @@ def feed_get_cover(book_id):
@opds.route("/opds/readbooks")
@requires_basic_auth_if_no_ano
def feed_read_books():
if not (current_user.check_visibility(constants.SIDEBAR_READ_AND_UNREAD) and not current_user.is_anonymous):
return abort(403)
off = request.args.get("offset") or 0
result, pagination = render_read_books(int(off) / (int(config.config_books_per_page)) + 1, True, True)
return render_xml_template('feed.xml', entries=result, pagination=pagination)
@ -477,8 +449,6 @@ def feed_read_books():
@opds.route("/opds/unreadbooks")
@requires_basic_auth_if_no_ano
def feed_unread_books():
if not (current_user.check_visibility(constants.SIDEBAR_READ_AND_UNREAD) and not current_user.is_anonymous):
return abort(403)
off = request.args.get("offset") or 0
result, pagination = render_read_books(int(off) / (int(config.config_books_per_page)) + 1, False, True)
return render_xml_template('feed.xml', entries=result, pagination=pagination)
@ -508,11 +478,32 @@ def feed_search(term):
return render_xml_template('feed.xml', searchterm="")
def check_auth(username, password):
try:
username = username.encode('windows-1252')
except UnicodeEncodeError:
username = username.encode('utf-8')
user = ub.session.query(ub.User).filter(func.lower(ub.User.name) ==
username.decode('utf-8').lower()).first()
if bool(user and check_password_hash(str(user.password), password)):
return True
else:
ip_address = request.headers.get('X-Forwarded-For', request.remote_addr)
log.warning('OPDS Login failed for user "%s" IP-address: %s', username.decode('utf-8'), ip_address)
return False
def authenticate():
return Response(
'Could not verify your access level for that URL.\n'
'You have to login with proper credentials', 401,
{'WWW-Authenticate': 'Basic realm="Login Required"'})
def render_xml_template(*args, **kwargs):
# ToDo: return time in current timezone similar to %z
currtime = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S+00:00")
xml = render_template(current_time=currtime, instance=config.config_calibre_web_title, constants=constants.sidebar_settings, *args, **kwargs)
xml = render_template(current_time=currtime, instance=config.config_calibre_web_title, *args, **kwargs)
response = make_response(xml)
response.headers["Content-Type"] = "application/atom+xml; charset=utf-8"
return response
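check_auth()/authenticate() above put the OPDS feed behind plain HTTP Basic auth whenever anonymous browsing is off, and render_xml_template() answers with an Atom document. A quick client-side sketch of fetching such a feed, assuming the server listens on the usual default port 8083 and the credentials are placeholders:

import requests

resp = requests.get("http://localhost:8083/opds/new", auth=("user", "password"))
resp.raise_for_status()  # a 401 with WWW-Authenticate: Basic realm="Login Required" means missing or bad credentials
print(resp.headers["Content-Type"])  # application/atom+xml; charset=utf-8
print(resp.text[:200])               # beginning of the Atom feed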
@ -537,7 +528,7 @@ def render_element_index(database_column, linked_table, folder):
entries = entries.join(linked_table).join(db.Books)
entries = entries.filter(calibre_db.common_filters()).group_by(func.upper(func.substr(database_column, 1, 1))).all()
elements = []
if off == 0 and entries:
if off == 0:
elements.append({'id': "00", 'name': _("All")})
shift = 1
for entry in entries[
21
cps/redirect.py Executable file → Normal file
View File
@ -29,7 +29,7 @@
from urllib.parse import urlparse, urljoin
from flask import request, url_for, redirect, current_app
from flask import request, url_for, redirect
def is_safe_url(target):
@ -38,15 +38,16 @@ def is_safe_url(target):
return test_url.scheme in ('http', 'https') and ref_url.netloc == test_url.netloc
def remove_prefix(text, prefix):
if text.startswith(prefix):
return text[len(prefix):]
return ""
def get_redirect_target():
for target in request.values.get('next'), request.referrer:
if not target:
continue
if is_safe_url(target):
return target
def get_redirect_location(next, endpoint, **values):
target = next or url_for(endpoint, **values)
adapter = current_app.url_map.bind(urlparse(request.host_url).netloc)
if not len(adapter.allowed_methods(remove_prefix(target, request.environ.get('HTTP_X_SCRIPT_NAME',"")))):
def redirect_back(endpoint, **values):
target = request.form['next']
if not target or not is_safe_url(target):
target = url_for(endpoint, **values)
return target
return redirect(target)
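Both variants follow the usual Flask safe-redirect recipe: only honour a next target whose scheme and host match the current request, otherwise fall back to a known endpoint. A small usage sketch with the 0.6.19-style redirect_back() helper shown above (the endpoint name is a placeholder):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "home"

@app.route("/login", methods=["POST"])
def login():
    # ... credential check would go here ...
    # redirect_back() reads request.form['next'] and falls back to the index
    # endpoint when the target is missing or points off-site
    return redirect_back("index")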
View File
@ -58,8 +58,8 @@ def remote_login():
ub.session.add(auth_token)
ub.session_commit()
verify_url = url_for('remotelogin.verify_token', token=auth_token.auth_token, _external=true)
log.debug("Remot Login request with token: %s", auth_token.auth_token)
return render_title_template('remote_login.html', title=_("Login"), token=auth_token.auth_token,
log.debug(u"Remot Login request with token: %s", auth_token.auth_token)
return render_title_template('remote_login.html', title=_(u"Login"), token=auth_token.auth_token,
verify_url=verify_url, page="remotelogin")
@ -71,8 +71,8 @@ def verify_token(token):
# Token not found
if auth_token is None:
flash(_("Token not found"), category="error")
log.error("Remote Login token not found")
flash(_(u"Token not found"), category="error")
log.error(u"Remote Login token not found")
return redirect(url_for('web.index'))
# Token expired
@ -80,8 +80,8 @@ def verify_token(token):
ub.session.delete(auth_token)
ub.session_commit()
flash(_("Token has expired"), category="error")
log.error("Remote Login token expired")
flash(_(u"Token has expired"), category="error")
log.error(u"Remote Login token expired")
return redirect(url_for('web.index'))
# Update token with user information
@ -89,8 +89,8 @@ def verify_token(token):
auth_token.verified = True
ub.session_commit()
flash(_("Success! Please return to your device"), category="success")
log.debug("Remote Login token for userid %s verified", auth_token.user_id)
flash(_(u"Success! Please return to your device"), category="success")
log.debug(u"Remote Login token for userid %s verified", auth_token.user_id)
return redirect(url_for('web.index'))
@ -105,7 +105,7 @@ def token_verified():
# Token not found
if auth_token is None:
data['status'] = 'error'
data['message'] = _("Token not found")
data['message'] = _(u"Token not found")
# Token expired
elif datetime.now() > auth_token.expiration:
@ -113,7 +113,7 @@ def token_verified():
ub.session_commit()
data['status'] = 'error'
data['message'] = _("Token has expired")
data['message'] = _(u"Token has expired")
elif not auth_token.verified:
data['status'] = 'not_verified'
@ -126,8 +126,8 @@ def token_verified():
ub.session_commit("User {} logged in via remotelogin, token deleted".format(user.name))
data['status'] = 'success'
log.debug("Remote Login for userid %s succeeded", user.id)
flash(_("Success! You are now logged in as: %(nickname)s", nickname=user.name), category="success")
log.debug(u"Remote Login for userid %s succeeded", user.id)
flash(_(u"you are now logged in as: '%(nickname)s'", nickname=user.name), category="success")
response = make_response(json.dumps(data, ensure_ascii=False))
response.headers["Content-Type"] = "application/json; charset=utf-8"
View File
@ -20,13 +20,11 @@ from flask import render_template, g, abort, request
from flask_babel import gettext as _
from werkzeug.local import LocalProxy
from flask_login import current_user
from sqlalchemy.sql.expression import or_
from . import config, constants, logger, ub
from . import config, constants, logger
from .ub import User
log = logger.create()
def get_sidebar_config(kwargs=None):
@ -47,12 +45,12 @@ def get_sidebar_config(kwargs=None):
"show_text": _('Show Hot Books'), "config_show": True})
if current_user.role_admin():
sidebar.append({"glyph": "glyphicon-download", "text": _('Downloaded Books'), "link": 'web.download_list',
"id": "download", "visibility": constants.SIDEBAR_DOWNLOAD, 'public': (not current_user.is_anonymous),
"id": "download", "visibility": constants.SIDEBAR_DOWNLOAD, 'public': (not g.user.is_anonymous),
"page": "download", "show_text": _('Show Downloaded Books'),
"config_show": content})
else:
sidebar.append({"glyph": "glyphicon-download", "text": _('Downloaded Books'), "link": 'web.books_list',
"id": "download", "visibility": constants.SIDEBAR_DOWNLOAD, 'public': (not current_user.is_anonymous),
"id": "download", "visibility": constants.SIDEBAR_DOWNLOAD, 'public': (not g.user.is_anonymous),
"page": "download", "show_text": _('Show Downloaded Books'),
"config_show": content})
sidebar.append(
@ -60,50 +58,47 @@ def get_sidebar_config(kwargs=None):
"visibility": constants.SIDEBAR_BEST_RATED, 'public': True, "page": "rated",
"show_text": _('Show Top Rated Books'), "config_show": True})
sidebar.append({"glyph": "glyphicon-eye-open", "text": _('Read Books'), "link": 'web.books_list', "id": "read",
"visibility": constants.SIDEBAR_READ_AND_UNREAD, 'public': (not current_user.is_anonymous),
"page": "read", "show_text": _('Show Read and Unread'), "config_show": content})
"visibility": constants.SIDEBAR_READ_AND_UNREAD, 'public': (not g.user.is_anonymous),
"page": "read", "show_text": _('Show read and unread'), "config_show": content})
sidebar.append(
{"glyph": "glyphicon-eye-close", "text": _('Unread Books'), "link": 'web.books_list', "id": "unread",
"visibility": constants.SIDEBAR_READ_AND_UNREAD, 'public': (not current_user.is_anonymous), "page": "unread",
"visibility": constants.SIDEBAR_READ_AND_UNREAD, 'public': (not g.user.is_anonymous), "page": "unread",
"show_text": _('Show unread'), "config_show": False})
sidebar.append({"glyph": "glyphicon-random", "text": _('Discover'), "link": 'web.books_list', "id": "rand",
"visibility": constants.SIDEBAR_RANDOM, 'public': True, "page": "discover",
"show_text": _('Show Random Books'), "config_show": True})
sidebar.append({"glyph": "glyphicon-inbox", "text": _('Categories'), "link": 'web.category_list', "id": "cat",
"visibility": constants.SIDEBAR_CATEGORY, 'public': True, "page": "category",
"show_text": _('Show Category Section'), "config_show": True})
"show_text": _('Show category selection'), "config_show": True})
sidebar.append({"glyph": "glyphicon-bookmark", "text": _('Series'), "link": 'web.series_list', "id": "serie",
"visibility": constants.SIDEBAR_SERIES, 'public': True, "page": "series",
"show_text": _('Show Series Section'), "config_show": True})
"show_text": _('Show series selection'), "config_show": True})
sidebar.append({"glyph": "glyphicon-user", "text": _('Authors'), "link": 'web.author_list', "id": "author",
"visibility": constants.SIDEBAR_AUTHOR, 'public': True, "page": "author",
"show_text": _('Show Author Section'), "config_show": True})
"show_text": _('Show author selection'), "config_show": True})
sidebar.append(
{"glyph": "glyphicon-text-size", "text": _('Publishers'), "link": 'web.publisher_list', "id": "publisher",
"visibility": constants.SIDEBAR_PUBLISHER, 'public': True, "page": "publisher",
"show_text": _('Show Publisher Section'), "config_show":True})
"show_text": _('Show publisher selection'), "config_show":True})
sidebar.append({"glyph": "glyphicon-flag", "text": _('Languages'), "link": 'web.language_overview', "id": "lang",
"visibility": constants.SIDEBAR_LANGUAGE, 'public': (current_user.filter_language() == 'all'),
"visibility": constants.SIDEBAR_LANGUAGE, 'public': (g.user.filter_language() == 'all'),
"page": "language",
"show_text": _('Show Language Section'), "config_show": True})
"show_text": _('Show language selection'), "config_show": True})
sidebar.append({"glyph": "glyphicon-star-empty", "text": _('Ratings'), "link": 'web.ratings_list', "id": "rate",
"visibility": constants.SIDEBAR_RATING, 'public': True,
"page": "rating", "show_text": _('Show Ratings Section'), "config_show": True})
"page": "rating", "show_text": _('Show ratings selection'), "config_show": True})
sidebar.append({"glyph": "glyphicon-file", "text": _('File formats'), "link": 'web.formats_list', "id": "format",
"visibility": constants.SIDEBAR_FORMAT, 'public': True,
"page": "format", "show_text": _('Show File Formats Section'), "config_show": True})
"page": "format", "show_text": _('Show file formats selection'), "config_show": True})
sidebar.append(
{"glyph": "glyphicon-trash", "text": _('Archived Books'), "link": 'web.books_list', "id": "archived",
"visibility": constants.SIDEBAR_ARCHIVED, 'public': (not current_user.is_anonymous), "page": "archived",
"show_text": _('Show Archived Books'), "config_show": content})
"visibility": constants.SIDEBAR_ARCHIVED, 'public': (not g.user.is_anonymous), "page": "archived",
"show_text": _('Show archived books'), "config_show": content})
if not simple:
sidebar.append(
{"glyph": "glyphicon-th-list", "text": _('Books List'), "link": 'web.books_table', "id": "list",
"visibility": constants.SIDEBAR_LIST, 'public': (not current_user.is_anonymous), "page": "list",
"visibility": constants.SIDEBAR_LIST, 'public': (not g.user.is_anonymous), "page": "list",
"show_text": _('Show Books List'), "config_show": content})
g.shelves_access = ub.session.query(ub.Shelf).filter(
or_(ub.Shelf.is_public == 1, ub.Shelf.user_id == current_user.id)).order_by(ub.Shelf.name).all()
return sidebar, simple
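Each sidebar entry above is a plain dict carrying a 'public' flag and a SIDEBAR_* visibility constant. One plausible way a caller could reduce that list for the current user, assuming the constants are bit flags tested by User.check_visibility() as in the OPDS routes earlier in this diff:

def visible_sidebar(sidebar, user):
    # keep entries that are public for this user and enabled in the user's view mask
    return [item for item in sidebar
            if item["public"] and user.check_visibility(item["visibility"])]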
View File
@ -19,26 +19,19 @@
import datetime
from . import config, constants
from .services.background_scheduler import BackgroundScheduler, CronTrigger, use_APScheduler
from .services.background_scheduler import BackgroundScheduler, use_APScheduler
from .tasks.database import TaskReconnectDatabase
from .tasks.tempFolder import TaskDeleteTempFolder
from .tasks.thumbnail import TaskGenerateCoverThumbnails, TaskGenerateSeriesThumbnails, TaskClearCoverThumbnailCache
from .services.worker import WorkerThread
from .tasks.metadata_backup import TaskBackupMetadata
def get_scheduled_tasks(reconnect=True):
tasks = list()
# Reconnect Calibre database (metadata.db) based on config.schedule_reconnect
# config.schedule_reconnect or
# Reconnect Calibre database (metadata.db)
if reconnect:
tasks.append([lambda: TaskReconnectDatabase(), 'reconnect', False])
# Delete temp folder
tasks.append([lambda: TaskDeleteTempFolder(), 'delete temp', True])
# Generate metadata.opf file for each changed book
if config.schedule_metadata_backup:
tasks.append([lambda: TaskBackupMetadata("en"), 'backup metadata', False])
# Generate all missing book cover thumbnails
if config.schedule_generate_book_covers:
tasks.append([lambda: TaskClearCoverThumbnailCache(0), 'delete superfluous book covers', True])
@ -69,13 +62,10 @@ def register_scheduled_tasks(reconnect=True):
duration = config.schedule_duration
# Register scheduled tasks
timezone_info = datetime.datetime.now(datetime.timezone.utc).astimezone().tzinfo
scheduler.schedule_tasks(tasks=get_scheduled_tasks(reconnect), trigger=CronTrigger(hour=start,
timezone=timezone_info))
scheduler.schedule_tasks(tasks=get_scheduled_tasks(reconnect), trigger='cron', hour=start)
end_time = calclulate_end_time(start, duration)
scheduler.schedule(func=end_scheduled_tasks, trigger=CronTrigger(hour=end_time.hour, minute=end_time.minute,
timezone=timezone_info),
name="end scheduled task")
scheduler.schedule(func=end_scheduled_tasks, trigger='cron', name="end scheduled task", hour=end_time.hour,
minute=end_time.minute)
# Kick-off tasks, if they should currently be running
if should_task_be_running(start, duration):
@ -93,8 +83,6 @@ def register_startup_tasks():
# Ignore tasks that should currently be running, as these will be added when registering scheduled tasks
if constants.APP_MODE in ['development', 'test'] and not should_task_be_running(start, duration):
scheduler.schedule_tasks_immediately(tasks=get_scheduled_tasks(False))
else:
scheduler.schedule_tasks_immediately(tasks=[[lambda: TaskDeleteTempFolder(), 'delete temp', True]])
def should_task_be_running(start, duration):
@ -103,7 +91,6 @@ def should_task_be_running(start, duration):
end_time = start_time + datetime.timedelta(hours=duration // 60, minutes=duration % 60)
return start_time < now < end_time
def calclulate_end_time(start, duration):
start_time = datetime.datetime.now().replace(hour=start, minute=0)
return start_time + datetime.timedelta(hours=duration // 60, minutes=duration % 60)
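Master hands APScheduler a real CronTrigger carrying the local timezone, where 0.6.19 passed the 'cron' alias plus keyword arguments; add_job() accepts both forms. A stand-alone sketch of the two styles together with the end-time arithmetic used by calclulate_end_time(), taking start hour 4 and a 90 minute duration as example values:

import datetime
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger

def job():
    print("scheduled task")

tz = datetime.datetime.now(datetime.timezone.utc).astimezone().tzinfo
scheduler = BackgroundScheduler()
scheduler.add_job(job, CronTrigger(hour=4, timezone=tz))  # master style
scheduler.add_job(job, 'cron', hour=4)                    # 0.6.19 style
scheduler.start()

# end-time arithmetic: 04:00 plus 90 minutes gives 05:30
start, duration = 4, 90
start_time = datetime.datetime.now().replace(hour=start, minute=0)
end_time = start_time + datetime.timedelta(hours=duration // 60, minutes=duration % 60)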
View File
@ -45,7 +45,7 @@ def simple_search():
return render_title_template('search.html',
searchterm="",
result_count=0,
title=_("Search"),
title=_(u"Search"),
page="search")
@ -185,18 +185,18 @@ def extend_search_term(searchterm,
searchterm.extend((author_name.replace('|', ','), book_title, publisher))
if pub_start:
try:
searchterm.extend([_("Published after ") +
searchterm.extend([_(u"Published after ") +
format_date(datetime.strptime(pub_start, "%Y-%m-%d"),
format='medium')])
except ValueError:
pub_start = ""
pub_start = u""
if pub_end:
try:
searchterm.extend([_("Published before ") +
searchterm.extend([_(u"Published before ") +
format_date(datetime.strptime(pub_end, "%Y-%m-%d"),
format='medium')])
except ValueError:
pub_end = ""
pub_end = u""
elements = {'tag': db.Tags, 'serie':db.Series, 'shelf':ub.Shelf}
for key, db_element in elements.items():
tag_names = calibre_db.session.query(db_element).filter(db_element.id.in_(tags['include_' + key])).all()
@ -214,11 +214,11 @@ def extend_search_term(searchterm,
language_names = calibre_db.speaking_language(language_names)
searchterm.extend(language.name for language in language_names)
if rating_high:
searchterm.extend([_("Rating <= %(rating)s", rating=rating_high)])
searchterm.extend([_(u"Rating <= %(rating)s", rating=rating_high)])
if rating_low:
searchterm.extend([_("Rating >= %(rating)s", rating=rating_low)])
if read_status != "Any":
searchterm.extend([_("Read Status = '%(status)s'", status=read_status)])
searchterm.extend([_(u"Rating >= %(rating)s", rating=rating_low)])
if read_status:
searchterm.extend([_(u"Read Status = %(status)s", status=read_status)])
searchterm.extend(ext for ext in tags['include_extension'])
searchterm.extend(ext for ext in tags['exclude_extension'])
# handle custom columns
@ -267,23 +267,23 @@ def render_adv_search_results(term, offset=None, order=None, limit=None):
column_start = term.get('custom_column_' + str(c.id) + '_start')
column_end = term.get('custom_column_' + str(c.id) + '_end')
if column_start:
search_term.extend(["{} >= {}".format(c.name,
search_term.extend([u"{} >= {}".format(c.name,
format_date(datetime.strptime(column_start, "%Y-%m-%d").date(),
format='medium')
)])
cc_present = True
if column_end:
search_term.extend(["{} <= {}".format(c.name,
search_term.extend([u"{} <= {}".format(c.name,
format_date(datetime.strptime(column_end, "%Y-%m-%d").date(),
format='medium')
)])
cc_present = True
elif term.get('custom_column_' + str(c.id)):
search_term.extend([("{}: {}".format(c.name, term.get('custom_column_' + str(c.id))))])
search_term.extend([(u"{}: {}".format(c.name, term.get('custom_column_' + str(c.id))))])
cc_present = True
if any(tags.values()) or author_name or book_title or publisher or pub_start or pub_end or rating_low \
or rating_high or description or cc_present or read_status != "Any":
or rating_high or description or cc_present or read_status:
search_term, pub_start, pub_end = extend_search_term(search_term,
author_name,
book_title,
@ -302,8 +302,7 @@ def render_adv_search_results(term, offset=None, order=None, limit=None):
q = q.filter(func.datetime(db.Books.pubdate) > func.datetime(pub_start))
if pub_end:
q = q.filter(func.datetime(db.Books.pubdate) < func.datetime(pub_end))
if read_status != "Any":
q = q.filter(adv_search_read_status(read_status))
q = q.filter(adv_search_read_status(read_status))
if publisher:
q = q.filter(db.Books.publishers.any(func.lower(db.Publishers.name).ilike("%" + publisher + "%")))
q = adv_search_tag(q, tags['include_tag'], tags['exclude_tag'])
@ -340,7 +339,7 @@ def render_adv_search_results(term, offset=None, order=None, limit=None):
pagination=pagination,
entries=entries,
result_count=result_count,
title=_("Advanced Search"), page="advsearch",
title=_(u"Advanced Search"), page="advsearch",
order=order[1])
@ -367,28 +366,22 @@ def render_prepare_search_form(cc):
.filter(calibre_db.common_filters()) \
.group_by(db.Data.format)\
.order_by(db.Data.format).all()
if current_user.filter_language() == "all":
if current_user.filter_language() == u"all":
languages = calibre_db.speaking_language()
else:
languages = None
return render_title_template('search_form.html', tags=tags, languages=languages, extensions=extensions,
series=series,shelves=shelves, title=_("Advanced Search"), cc=cc, page="advsearch")
series=series,shelves=shelves, title=_(u"Advanced Search"), cc=cc, page="advsearch")
def render_search_results(term, offset=None, order=None, limit=None):
if term:
join = db.books_series_link, db.Books.id == db.books_series_link.c.book, db.Series
entries, result_count, pagination = calibre_db.get_search_results(term,
config,
offset,
order,
limit,
*join)
else:
entries = list()
order = [None, None]
pagination = result_count = None
join = db.books_series_link, db.Books.id == db.books_series_link.c.book, db.Series
entries, result_count, pagination = calibre_db.get_search_results(term,
config,
offset,
order,
limit,
*join)
return render_title_template('search.html',
searchterm=term,
pagination=pagination,
@ -396,7 +389,7 @@ def render_search_results(term, offset=None, order=None, limit=None):
adv_searchterm=term,
entries=entries,
result_count=result_count,
title=_("Search"),
title=_(u"Search"),
page="search",
order=order[1])
View File
@ -21,13 +21,12 @@ import os
import errno
import signal
import socket
import asyncio
import subprocess # nosec
try:
from gevent.pywsgi import WSGIServer
from .gevent_wsgi import MyWSGIHandler
from gevent.pool import Pool
from gevent.socket import socket as GeventSocket
from gevent import __version__ as _version
from greenlet import GreenletExit
import ssl
@ -37,7 +36,6 @@ except ImportError:
from .tornado_wsgi import MyWSGIContainer
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado import netutil
from tornado import version as _version
VERSION = 'Tornado ' + _version
_GEVENT = False
@ -97,12 +95,7 @@ class WebServer(object):
log.warning('Cert path: %s', certfile_path)
log.warning('Key path: %s', keyfile_path)
def _make_gevent_socket_activated(self):
# Reuse an already open socket on fd=SD_LISTEN_FDS_START
SD_LISTEN_FDS_START = 3
return GeventSocket(fileno=SD_LISTEN_FDS_START)
def _prepare_unix_socket(self, socket_file):
def _make_gevent_unix_socket(self, socket_file):
# the socket file must not exist prior to bind()
if os.path.exists(socket_file):
# avoid nuking regular files and symbolic links (could be a mistype or security issue)
@ -110,41 +103,35 @@ class WebServer(object):
raise OSError(errno.EEXIST, os.strerror(errno.EEXIST), socket_file)
os.remove(socket_file)
unix_sock = WSGIServer.get_listener(socket_file, family=socket.AF_UNIX)
self.unix_socket_file = socket_file
def _make_gevent_listener(self):
# ensure current user and group have r/w permissions, no permissions for other users
# this way the socket can be shared in a semi-secure manner
# between the user running calibre-web and the user running the fronting webserver
os.chmod(socket_file, 0o660)
return unix_sock
def _make_gevent_socket(self):
if os.name != 'nt':
socket_activated = os.environ.get("LISTEN_FDS")
if socket_activated:
sock = self._make_gevent_socket_activated()
sock_info = sock.getsockname()
return sock, "systemd-socket:" + _readable_listen_address(sock_info[0], sock_info[1])
unix_socket_file = os.environ.get("CALIBRE_UNIX_SOCKET")
if unix_socket_file:
self._prepare_unix_socket(unix_socket_file)
unix_sock = WSGIServer.get_listener(unix_socket_file, family=socket.AF_UNIX)
# ensure current user and group have r/w permissions, no permissions for other users
# this way the socket can be shared in a semi-secure manner
# between the user running calibre-web and the user running the fronting webserver
os.chmod(unix_socket_file, 0o660)
return unix_sock, "unix:" + unix_socket_file
return self._make_gevent_unix_socket(unix_socket_file), "unix:" + unix_socket_file
if self.listen_address:
return ((self.listen_address, self.listen_port),
_readable_listen_address(self.listen_address, self.listen_port))
return (self.listen_address, self.listen_port), None
if os.name == 'nt':
self.listen_address = '0.0.0.0'
return ((self.listen_address, self.listen_port),
_readable_listen_address(self.listen_address, self.listen_port))
return (self.listen_address, self.listen_port), None
try:
address = ('::', self.listen_port)
sock = WSGIServer.get_listener(address, family=socket.AF_INET6)
except socket.error as ex:
log.error('%s', ex)
log.warning('Unable to listen on {}, trying on IPv4 only...'.format(address))
log.warning('Unable to listen on "", trying on IPv4 only...')
address = ('', self.listen_port)
sock = WSGIServer.get_listener(address, family=socket.AF_INET)
@ -165,7 +152,7 @@ class WebServer(object):
# The value of __package__ indicates how Python was called. It may
# not exist if a setuptools script is installed as an egg. It may be
# set incorrectly for entry points created with pip on Windows.
if getattr(__main__, "__package__", "") in ["", None] or (
if getattr(__main__, "__package__", None) is None or (
os.name == "nt"
and __main__.__package__ == ""
and not os.path.exists(py_script)
@ -206,15 +193,15 @@ class WebServer(object):
rv.extend(("-m", py_module.lstrip(".")))
rv.extend(args)
if os.name == 'nt':
rv = ['"{}"'.format(a) for a in rv]
return rv
def _start_gevent(self):
ssl_args = self.ssl_args or {}
try:
sock, output = self._make_gevent_listener()
sock, output = self._make_gevent_socket()
if output is None:
output = _readable_listen_address(self.listen_address, self.listen_port)
log.info('Starting Gevent server on %s', output)
self.wsgiserver = WSGIServer(sock, self.app, log=self.access_logger, handler_class=MyWSGIHandler,
error_log=log,
@ -239,42 +226,17 @@ class WebServer(object):
if os.name == 'nt' and sys.version_info > (3, 7):
import asyncio
asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
try:
# Max Buffersize set to 200MB
http_server = HTTPServer(MyWSGIContainer(self.app),
max_buffer_size=209700000,
ssl_options=self.ssl_args)
log.info('Starting Tornado server on %s', _readable_listen_address(self.listen_address, self.listen_port))
unix_socket_file = os.environ.get("CALIBRE_UNIX_SOCKET")
if os.environ.get("LISTEN_FDS") and os.name != 'nt':
SD_LISTEN_FDS_START = 3
sock = socket.socket(fileno=SD_LISTEN_FDS_START)
http_server.add_socket(sock)
sock.setblocking(0)
socket_name =sock.getsockname()
output = "systemd-socket:" + _readable_listen_address(socket_name[0], socket_name[1])
elif unix_socket_file and os.name != 'nt':
self._prepare_unix_socket(unix_socket_file)
output = "unix:" + unix_socket_file
unix_socket = netutil.bind_unix_socket(self.unix_socket_file)
http_server.add_socket(unix_socket)
# ensure current user and group have r/w permissions, no permissions for other users
# this way the socket can be shared in a semi-secure manner
# between the user running calibre-web and the user running the fronting webserver
os.chmod(self.unix_socket_file, 0o660)
else:
output = _readable_listen_address(self.listen_address, self.listen_port)
http_server.listen(self.listen_port, self.listen_address)
log.info('Starting Tornado server on %s', output)
self.wsgiserver = IOLoop.current()
self.wsgiserver.start()
# wait for stop signal
self.wsgiserver.close(True)
finally:
if self.unix_socket_file:
os.remove(self.unix_socket_file)
self.unix_socket_file = None
# Max Buffersize set to 200MB
http_server = HTTPServer(MyWSGIContainer(self.app),
max_buffer_size=209700000,
ssl_options=self.ssl_args)
http_server.listen(self.listen_port, self.listen_address)
self.wsgiserver = IOLoop.current()
self.wsgiserver.start()
# wait for stop signal
self.wsgiserver.close(True)
def start(self):
try:
@ -300,16 +262,9 @@ class WebServer(object):
log.info("Performing restart of Calibre-Web")
args = self._get_args_for_reloading()
os.execv(args[0].lstrip('"').rstrip('"'), args)
subprocess.call(args, close_fds=True) # nosec
return True
@staticmethod
def shutdown_scheduler():
from .services.background_scheduler import BackgroundScheduler
scheduler = BackgroundScheduler()
if scheduler:
scheduler.scheduler.shutdown()
def _killServer(self, __, ___):
self.stop()
@ -318,14 +273,9 @@ class WebServer(object):
updater_thread.stop()
log.info("webserver stop (restart=%s)", restart)
self.shutdown_scheduler()
self.restart = restart
if self.wsgiserver:
if _GEVENT:
self.wsgiserver.close()
else:
if restart:
self.wsgiserver.call_later(1.0, self.wsgiserver.stop)
else:
self.wsgiserver.asyncio_loop.call_soon_threadsafe(self.wsgiserver.stop)
self.wsgiserver.add_callback_from_signal(self.wsgiserver.stop)
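On master the server can also adopt a socket passed in by systemd socket activation: when LISTEN_FDS is set, the listener is already bound and handed over on file descriptor 3 instead of being created here. A stand-alone sketch of that detection, independent of gevent or Tornado (address and port are placeholders):

import os
import socket

SD_LISTEN_FDS_START = 3  # first file descriptor passed by systemd socket activation

def get_listener(address="0.0.0.0", port=8083):
    if os.name != "nt" and os.environ.get("LISTEN_FDS"):
        # systemd already created and bound the socket, just adopt fd 3
        return socket.socket(fileno=SD_LISTEN_FDS_START)
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((address, port))
    sock.listen()
    return sock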
View File
@ -19,9 +19,11 @@
import sys
from base64 import b64decode, b64encode
from jsonschema import validate, exceptions
from jsonschema import validate, exceptions, __version__
from datetime import datetime
from urllib.parse import unquote
from flask import json
from .. import logger
@ -30,10 +32,10 @@ log = logger.create()
def b64encode_json(json_data):
return b64encode(json.dumps(json_data).encode()).decode("utf-8")
return b64encode(json.dumps(json_data).encode())
# Python3 has a timestamp() method we could be calling, however it's not available in python2.
# Python3 has a timestamp() method we could be calling, however it's not avaiable in python2.
def to_epoch_timestamp(datetime_object):
return (datetime_object - datetime(1970, 1, 1)).total_seconds()
@ -47,7 +49,7 @@ def get_datetime_from_json(json_object, field_name):
class SyncToken:
""" The SyncToken is used to persist state across requests.
""" The SyncToken is used to persist state accross requests.
When serialized over the response headers, the Kobo device will propagate the token onto following
requests to the service. As an example use-case, the SyncToken is used to detect books that have been added
to the library since the last time the device synced to the server.
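The token itself is only JSON squeezed through base64 so it survives the trip through an HTTP header; master additionally decodes the bytes to a str. A minimal round-trip sketch with a placeholder payload:

import json
from base64 import b64encode, b64decode

payload = {"books_last_modified": "2023-01-01T00:00:00"}  # placeholder field
header_value = b64encode(json.dumps(payload).encode()).decode("utf-8")
restored = json.loads(b64decode(header_value))
assert restored == payload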
View File
@ -23,8 +23,6 @@ from .worker import WorkerThread
try:
from apscheduler.schedulers.background import BackgroundScheduler as BScheduler
from apscheduler.triggers.cron import CronTrigger
from apscheduler.triggers.date import DateTrigger
use_APScheduler = True
except (ImportError, RuntimeError) as e:
use_APScheduler = False
@ -45,33 +43,35 @@ class BackgroundScheduler:
cls.scheduler = BScheduler()
cls.scheduler.start()
atexit.register(lambda: cls.scheduler.shutdown())
return cls._instance
def schedule(self, func, trigger, name=None):
def schedule(self, func, trigger, name=None, **trigger_args):
if use_APScheduler:
return self.scheduler.add_job(func=func, trigger=trigger, name=name)
return self.scheduler.add_job(func=func, trigger=trigger, name=name, **trigger_args)
# Expects a lambda expression for the task
def schedule_task(self, task, user=None, name=None, hidden=False, trigger=None):
def schedule_task(self, task, user=None, name=None, hidden=False, trigger='cron', **trigger_args):
if use_APScheduler:
def scheduled_task():
worker_task = task()
worker_task.scheduled = True
WorkerThread.add(user, worker_task, hidden=hidden)
return self.schedule(func=scheduled_task, trigger=trigger, name=name)
return self.schedule(func=scheduled_task, trigger=trigger, name=name, **trigger_args)
# Expects a list of lambda expressions for the tasks
def schedule_tasks(self, tasks, user=None, trigger=None):
def schedule_tasks(self, tasks, user=None, trigger='cron', **trigger_args):
if use_APScheduler:
for task in tasks:
self.schedule_task(task[0], user=user, trigger=trigger, name=task[1], hidden=task[2])
self.schedule_task(task[0], user=user, trigger=trigger, name=task[1], hidden=task[2], **trigger_args)
# Expects a lambda expression for the task
def schedule_task_immediately(self, task, user=None, name=None, hidden=False):
if use_APScheduler:
def immediate_task():
WorkerThread.add(user, task(), hidden)
return self.schedule(func=immediate_task, trigger=DateTrigger(), name=name)
return self.schedule(func=immediate_task, trigger='date', name=name)
# Expects a list of lambda expressions for the tasks
def schedule_tasks_immediately(self, tasks, user=None):
View File
@ -18,49 +18,16 @@
import time
from functools import reduce
import requests
from goodreads.client import GoodreadsClient
from goodreads.request import GoodreadsRequest
import xmltodict
try:
import Levenshtein
from goodreads.client import GoodreadsClient
except ImportError:
Levenshtein = False
from betterreads.client import GoodreadsClient
try: import Levenshtein
except ImportError: Levenshtein = False
from .. import logger
from ..clean_html import clean_string
class my_GoodreadsClient(GoodreadsClient):
def request(self, *args, **kwargs):
"""Create a GoodreadsRequest object and make that request"""
req = my_GoodreadsRequest(self, *args, **kwargs)
return req.request()
class GoodreadsRequestException(Exception):
def __init__(self, error_msg, url):
self.error_msg = error_msg
self.url = url
def __str__(self):
return self.url, ':', self.error_msg
class my_GoodreadsRequest(GoodreadsRequest):
def request(self):
resp = requests.get(self.host+self.path, params=self.params,
headers={"User-Agent":"Mozilla/5.0 (X11; Linux x86_64; rv:125.0) "
"Gecko/20100101 Firefox/125.0"})
if resp.status_code != 200:
raise GoodreadsRequestException(resp.reason, self.path)
if self.req_format == 'xml':
data_dict = xmltodict.parse(resp.content)
return data_dict['GoodreadsResponse']
else:
raise Exception("Invalid format")
log = logger.create()
@ -71,20 +38,20 @@ _CACHE_TIMEOUT = 23 * 60 * 60 # 23 hours (in seconds)
_AUTHORS_CACHE = {}
def connect(key=None, enabled=True):
def connect(key=None, secret=None, enabled=True):
global _client
if not enabled or not key:
if not enabled or not key or not secret:
_client = None
return
if _client:
# make sure the configuration has not changed since last we used the client
if _client.client_key != key:
if _client.client_key != key or _client.client_secret != secret:
_client = None
if not _client:
_client = my_GoodreadsClient(key, None)
_client = GoodreadsClient(key, secret)
def get_author_info(author_name):
@ -109,7 +76,6 @@ def get_author_info(author_name):
if author_info:
author_info._timestamp = now
author_info.safe_about = clean_string(author_info.about)
_AUTHORS_CACHE[author_name] = author_info
return author_info
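Master replaces the goodreads client's own transport with a plain requests call carrying a browser-like User-Agent and parses the XML answer via xmltodict, keeping only the GoodreadsResponse element. A trimmed sketch of that request flow; the endpoint path, API key and example author are placeholders:

import requests
import xmltodict

def goodreads_xml(path, api_key):
    resp = requests.get(
        "https://www.goodreads.com" + path,
        params={"key": api_key},
        headers={"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:125.0) "
                               "Gecko/20100101 Firefox/125.0"})
    if resp.status_code != 200:
        raise RuntimeError("{}: {}".format(path, resp.reason))
    return xmltodict.parse(resp.content)["GoodreadsResponse"]

# e.g. goodreads_xml("/api/author_url/Jane%20Austen", "my-api-key")  # hypothetical call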
View File
@ -20,7 +20,6 @@ import base64
from flask_simpleldap import LDAP, LDAPException
from flask_simpleldap import ldap as pyLDAP
from flask import current_app
from .. import constants, logger
try:
@ -29,47 +28,8 @@ except ImportError:
pass
log = logger.create()
_ldap = LDAP()
class LDAPLogger(object):
def write(self, message):
try:
log.debug(message.strip("\n").replace("\n", ""))
except Exception:
log.debug("Logging Error")
class mySimpleLDap(LDAP):
@staticmethod
def init_app(app):
super(mySimpleLDap, mySimpleLDap).init_app(app)
app.config.setdefault('LDAP_LOGLEVEL', 0)
@property
def initialize(self):
"""Initialize a connection to the LDAP server.
:return: LDAP connection object.
"""
try:
log_level = 2 if current_app.config['LDAP_LOGLEVEL'] == logger.logging.DEBUG else 0
conn = pyLDAP.initialize('{0}://{1}:{2}'.format(
current_app.config['LDAP_SCHEMA'],
current_app.config['LDAP_HOST'],
current_app.config['LDAP_PORT']), trace_level=log_level, trace_file=LDAPLogger())
conn.set_option(pyLDAP.OPT_NETWORK_TIMEOUT,
current_app.config['LDAP_TIMEOUT'])
conn = self._set_custom_options(conn)
conn.protocol_version = pyLDAP.VERSION3
if current_app.config['LDAP_USE_TLS']:
conn.start_tls_s()
return conn
except pyLDAP.LDAPError as e:
raise LDAPException(self.error(e.args))
_ldap = mySimpleLDap()
def init_app(app, config):
if config.config_login_type != constants.LOGIN_LDAP:
@ -84,15 +44,15 @@ def init_app(app, config):
app.config['LDAP_SCHEMA'] = 'ldap'
if config.config_ldap_authentication > constants.LDAP_AUTH_ANONYMOUS:
if config.config_ldap_authentication > constants.LDAP_AUTH_UNAUTHENTICATE:
if config.config_ldap_serv_password_e is None:
config.config_ldap_serv_password_e = ''
app.config['LDAP_PASSWORD'] = config.config_ldap_serv_password_e
if config.config_ldap_serv_password is None:
config.config_ldap_serv_password = ''
app.config['LDAP_PASSWORD'] = base64.b64decode(config.config_ldap_serv_password)
else:
app.config['LDAP_PASSWORD'] = ""
app.config['LDAP_PASSWORD'] = base64.b64decode("")
app.config['LDAP_USERNAME'] = config.config_ldap_serv_username
else:
app.config['LDAP_USERNAME'] = ""
app.config['LDAP_PASSWORD'] = ""
app.config['LDAP_PASSWORD'] = base64.b64decode("")
if bool(config.config_ldap_cert_path):
app.config['LDAP_CUSTOM_OPTIONS'].update({
pyLDAP.OPT_X_TLS_REQUIRE_CERT: pyLDAP.OPT_X_TLS_DEMAND,
@ -110,7 +70,7 @@ def init_app(app, config):
app.config['LDAP_OPENLDAP'] = bool(config.config_ldap_openldap)
app.config['LDAP_GROUP_OBJECT_FILTER'] = config.config_ldap_group_object_filter
app.config['LDAP_GROUP_MEMBERS_FIELD'] = config.config_ldap_group_members_field
app.config['LDAP_LOGLEVEL'] = config.config_log_level
try:
_ldap.init_app(app)
except ValueError:
View File
@ -266,6 +266,3 @@ class CalibreTask:
def _handleSuccess(self):
self.stat = STAT_FINISH_SUCCESS
self.progress = 1
def __str__(self):
return self.name
View File
@ -46,13 +46,13 @@ def add_to_shelf(shelf_id, book_id):
if shelf is None:
log.error("Invalid shelf specified: %s", shelf_id)
if not xhr:
flash(_("Invalid shelf specified"), category="error")
flash(_(u"Invalid shelf specified"), category="error")
return redirect(url_for('web.index'))
return "Invalid shelf specified", 400
if not check_shelf_edit_permissions(shelf):
if not xhr:
flash(_("Sorry you are not allowed to add a book to that shelf"), category="error")
flash(_(u"Sorry you are not allowed to add a book to that shelf"), category="error")
return redirect(url_for('web.index'))
return "Sorry you are not allowed to add a book to the that shelf", 403
@ -61,7 +61,7 @@ def add_to_shelf(shelf_id, book_id):
if book_in_shelf:
log.error("Book %s is already part of %s", book_id, shelf)
if not xhr:
flash(_("Book is already part of the shelf: %(shelfname)s", shelfname=shelf.name), category="error")
flash(_(u"Book is already part of the shelf: %(shelfname)s", shelfname=shelf.name), category="error")
return redirect(url_for('web.index'))
return "Book is already part of the shelf: %s" % shelf.name, 400
@ -71,14 +71,6 @@ def add_to_shelf(shelf_id, book_id):
else:
maxOrder = maxOrder[0]
if not calibre_db.session.query(db.Books).filter(db.Books.id == book_id).one_or_none():
log.error("Invalid Book Id: %s. Could not be added to shelf %s", book_id, shelf.name)
if not xhr:
flash(_("%(book_id)s is a invalid Book Id. Could not be added to Shelf", book_id=book_id),
category="error")
return redirect(url_for('web.index'))
return "%s is a invalid Book Id. Could not be added to Shelf" % book_id, 400
shelf.books.append(ub.BookShelf(shelf=shelf.id, book_id=book_id, order=maxOrder + 1))
shelf.last_modified = datetime.utcnow()
try:
@ -87,14 +79,14 @@ def add_to_shelf(shelf_id, book_id):
except (OperationalError, InvalidRequestError) as e:
ub.session.rollback()
log.error_or_exception("Settings Database error: {}".format(e))
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
flash(_(u"Database error: %(error)s.", error=e.orig), category="error")
if "HTTP_REFERER" in request.environ:
return redirect(request.environ["HTTP_REFERER"])
else:
return redirect(url_for('web.index'))
if not xhr:
log.debug("Book has been added to shelf: {}".format(shelf.name))
flash(_("Book has been added to shelf: %(sname)s", sname=shelf.name), category="success")
flash(_(u"Book has been added to shelf: %(sname)s", sname=shelf.name), category="success")
if "HTTP_REFERER" in request.environ:
return redirect(request.environ["HTTP_REFERER"])
else:
@ -108,12 +100,12 @@ def search_to_shelf(shelf_id):
shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
if shelf is None:
log.error("Invalid shelf specified: {}".format(shelf_id))
flash(_("Invalid shelf specified"), category="error")
flash(_(u"Invalid shelf specified"), category="error")
return redirect(url_for('web.index'))
if not check_shelf_edit_permissions(shelf):
log.warning("You are not allowed to add a book to the shelf".format(shelf.name))
flash(_("You are not allowed to add a book to the shelf"), category="error")
flash(_(u"You are not allowed to add a book to the shelf"), category="error")
return redirect(url_for('web.index'))
if current_user.id in ub.searched_ids and ub.searched_ids[current_user.id]:
@ -131,7 +123,7 @@ def search_to_shelf(shelf_id):
if not books_for_shelf:
log.error("Books are already part of {}".format(shelf.name))
flash(_("Books are already part of the shelf: %(name)s", name=shelf.name), category="error")
flash(_(u"Books are already part of the shelf: %(name)s", name=shelf.name), category="error")
return redirect(url_for('web.index'))
maxOrder = ub.session.query(func.max(ub.BookShelf.order)).filter(ub.BookShelf.shelf == shelf_id).first()[0] or 0
@ -143,14 +135,14 @@ def search_to_shelf(shelf_id):
try:
ub.session.merge(shelf)
ub.session.commit()
flash(_("Books have been added to shelf: %(sname)s", sname=shelf.name), category="success")
flash(_(u"Books have been added to shelf: %(sname)s", sname=shelf.name), category="success")
except (OperationalError, InvalidRequestError) as e:
ub.session.rollback()
log.error_or_exception("Settings Database error: {}".format(e))
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
flash(_(u"Database error: %(error)s.", error=e.orig), category="error")
else:
log.error("Could not add books to shelf: {}".format(shelf.name))
flash(_("Could not add books to shelf: %(sname)s", sname=shelf.name), category="error")
flash(_(u"Could not add books to shelf: %(sname)s", sname=shelf.name), category="error")
return redirect(url_for('web.index'))
@ -190,13 +182,13 @@ def remove_from_shelf(shelf_id, book_id):
except (OperationalError, InvalidRequestError) as e:
ub.session.rollback()
log.error_or_exception("Settings Database error: {}".format(e))
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
flash(_(u"Database error: %(error)s.", error=e.orig), category="error")
if "HTTP_REFERER" in request.environ:
return redirect(request.environ["HTTP_REFERER"])
else:
return redirect(url_for('web.index'))
if not xhr:
flash(_("Book has been removed from shelf: %(sname)s", sname=shelf.name), category="success")
flash(_(u"Book has been removed from shelf: %(sname)s", sname=shelf.name), category="success")
if "HTTP_REFERER" in request.environ:
return redirect(request.environ["HTTP_REFERER"])
else:
@ -205,7 +197,7 @@ def remove_from_shelf(shelf_id, book_id):
else:
if not xhr:
log.warning("You are not allowed to remove a book from shelf: {}".format(shelf.name))
flash(_("Sorry you are not allowed to remove a book from this shelf"),
flash(_(u"Sorry you are not allowed to remove a book from this shelf"),
category="error")
return redirect(url_for('web.index'))
return "Sorry you are not allowed to remove a book from this shelf", 403
@ -215,7 +207,7 @@ def remove_from_shelf(shelf_id, book_id):
@login_required
def create_shelf():
shelf = ub.Shelf()
return create_edit_shelf(shelf, page_title=_("Create a Shelf"), page="shelfcreate")
return create_edit_shelf(shelf, page_title=_(u"Create a Shelf"), page="shelfcreate")
@shelf.route("/shelf/edit/<int:shelf_id>", methods=["GET", "POST"])
@ -223,9 +215,9 @@ def create_shelf():
def edit_shelf(shelf_id):
shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
if not check_shelf_edit_permissions(shelf):
flash(_("Sorry you are not allowed to edit this shelf"), category="error")
flash(_(u"Sorry you are not allowed to edit this shelf"), category="error")
return redirect(url_for('web.index'))
return create_edit_shelf(shelf, page_title=_("Edit a shelf"), page="shelfedit", shelf_id=shelf_id)
return create_edit_shelf(shelf, page_title=_(u"Edit a shelf"), page="shelfedit", shelf_id=shelf_id)
@shelf.route("/shelf/delete/<int:shelf_id>", methods=["POST"])
@ -240,7 +232,7 @@ def delete_shelf(shelf_id):
except InvalidRequestError as e:
ub.session.rollback()
log.error_or_exception("Settings Database error: {}".format(e))
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
flash(_(u"Database error: %(error)s.", error=e.orig), category="error")
return redirect(url_for('web.index'))
@ -277,7 +269,7 @@ def order_shelf(shelf_id):
except (OperationalError, InvalidRequestError) as e:
ub.session.rollback()
log.error_or_exception("Settings Database error: {}".format(e))
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
flash(_(u"Database error: %(error)s.", error=e.orig), category="error")
result = list()
if shelf:
@ -286,7 +278,7 @@ def order_shelf(shelf_id):
.add_columns(calibre_db.common_filters().label("visible")) \
.filter(ub.BookShelf.shelf == shelf_id).order_by(ub.BookShelf.order.asc()).all()
return render_title_template('shelf_order.html', entries=result,
title=_("Change order of Shelf: '%(name)s'", name=shelf.name),
title=_(u"Change order of Shelf: '%(name)s'", name=shelf.name),
shelf=shelf, page="shelforder")
else:
abort(404)
@ -303,14 +295,11 @@ def check_shelf_edit_permissions(cur_shelf):
def check_shelf_view_permissions(cur_shelf):
try:
if cur_shelf.is_public:
return True
if current_user.is_anonymous or cur_shelf.user_id != current_user.id:
log.error("User is unauthorized to view non-public shelf: {}".format(cur_shelf.name))
return False
except Exception as e:
log.error(e)
if cur_shelf.is_public:
return True
if current_user.is_anonymous or cur_shelf.user_id != current_user.id:
log.error("User is unauthorized to view non-public shelf: {}".format(cur_shelf.name))
return False
return True
@ -321,7 +310,7 @@ def create_edit_shelf(shelf, page_title, page, shelf_id=False):
if request.method == "POST":
to_save = request.form.to_dict()
if not current_user.role_edit_shelfs() and to_save.get("is_public") == "on":
flash(_("Sorry you are not allowed to create a public shelf"), category="error")
flash(_(u"Sorry you are not allowed to create a public shelf"), category="error")
return redirect(url_for('web.index'))
is_public = 1 if to_save.get("is_public") == "on" else 0
if config.config_kobo_sync:
@ -338,24 +327,24 @@ def create_edit_shelf(shelf, page_title, page, shelf_id=False):
shelf.user_id = int(current_user.id)
ub.session.add(shelf)
shelf_action = "created"
flash_text = _("Shelf %(title)s created", title=shelf_title)
flash_text = _(u"Shelf %(title)s created", title=shelf_title)
else:
shelf_action = "changed"
flash_text = _("Shelf %(title)s changed", title=shelf_title)
flash_text = _(u"Shelf %(title)s changed", title=shelf_title)
try:
ub.session.commit()
log.info("Shelf {} {}".format(shelf_title, shelf_action))
log.info(u"Shelf {} {}".format(shelf_title, shelf_action))
flash(flash_text, category="success")
return redirect(url_for('shelf.show_shelf', shelf_id=shelf.id))
except (OperationalError, InvalidRequestError) as ex:
ub.session.rollback()
log.error_or_exception(ex)
log.error_or_exception("Settings Database error: {}".format(ex))
flash(_("Oops! Database Error: %(error)s.", error=ex.orig), category="error")
flash(_(u"Database error: %(error)s.", error=ex.orig), category="error")
except Exception as ex:
ub.session.rollback()
log.error_or_exception(ex)
flash(_("There was an error"), category="error")
flash(_(u"There was an error"), category="error")
return render_title_template('shelf_edit.html',
shelf=shelf,
title=page_title,
@ -377,7 +366,7 @@ def check_shelf_is_unique(title, is_public, shelf_id=False):
if not is_shelf_name_unique:
log.error("A public shelf with the name '{}' already exists.".format(title))
flash(_("A public shelf with the name '%(title)s' already exists.", title=title),
flash(_(u"A public shelf with the name '%(title)s' already exists.", title=title),
category="error")
else:
is_shelf_name_unique = ub.session.query(ub.Shelf) \
@ -388,7 +377,7 @@ def check_shelf_is_unique(title, is_public, shelf_id=False):
if not is_shelf_name_unique:
log.error("A private shelf with the name '{}' already exists.".format(title))
flash(_("A private shelf with the name '%(title)s' already exists.", title=title),
flash(_(u"A private shelf with the name '%(title)s' already exists.", title=title),
category="error")
return is_shelf_name_unique
@ -465,14 +454,14 @@ def render_show_shelf(shelf_type, shelf_id, page_no, sort_param):
except (OperationalError, InvalidRequestError) as e:
ub.session.rollback()
log.error_or_exception("Settings Database error: {}".format(e))
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
flash(_(u"Database error: %(error)s.", error=e.orig), category="error")
return render_title_template(page,
entries=result,
pagination=pagination,
title=_("Shelf: '%(name)s'", name=shelf.name),
title=_(u"Shelf: '%(name)s'", name=shelf.name),
shelf=shelf,
page="shelf")
else:
flash(_("Error opening shelf. Shelf does not exist or is not accessible"), category="error")
flash(_(u"Error opening shelf. Shelf does not exist or is not accessible"), category="error")
return redirect(url_for("web.index"))
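For readers skimming the shelf.py hunks above: a minimal, self-contained sketch of the simplified view-permission logic they arrive at, assuming a shelf object with is_public, user_id and name attributes and a Flask-Login style current_user passed in explicitly — the function name can_view_shelf and the logger setup are illustrative only, not Calibre-Web's actual code:

import logging

log = logging.getLogger(__name__)

def can_view_shelf(shelf, current_user):
    # Public shelves are visible to everyone, including anonymous users.
    if shelf.is_public:
        return True
    # Private shelves: reject anonymous users and anyone who is not the owner.
    if current_user.is_anonymous or shelf.user_id != current_user.id:
        log.error("User is unauthorized to view non-public shelf: {}".format(shelf.name))
        return False
    return True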

View File

@ -3290,13 +3290,10 @@ div.btn-group[role=group][aria-label="Download, send to Kindle, reading"] .dropd
-ms-transform-origin: center top;
transform-origin: center top;
border: 0;
left: 0 !important;
overflow-y: auto;
}
.dropdown-menu:not(.datepicker-dropdown):not(.profileDropli) {
left: 0 !important;
}
#add-to-shelves {
min-height: 48px;
max-height: calc(100% - 120px);
overflow-y: auto;
}
@ -4426,6 +4423,38 @@ body.advanced_search > div.container-fluid > div.row-fluid > div.col-sm-10 > div
left: 49px;
margin-top: 5px
}
body:not(.blur) > .navbar > .container-fluid > .navbar-header:after, body:not(.blur) > .navbar > .container-fluid > .navbar-header:before {
color: hsla(0, 0%, 100%, .7);
cursor: pointer;
display: block;
font-family: plex-icons-new, serif;
font-size: 20px;
font-stretch: 100%;
font-style: normal;
font-variant-caps: normal;
font-variant-east-asian: normal;
font-variant-numeric: normal;
font-weight: 400;
height: 60px;
letter-spacing: normal;
line-height: 60px;
position: absolute
}
body:not(.blur) > .navbar > .container-fluid > .navbar-header:before {
content: "\EA30";
-webkit-font-variant-ligatures: normal;
font-variant-ligatures: normal;
left: 20px
}
body:not(.blur) > .navbar > .container-fluid > .navbar-header:after {
content: "\EA2F";
-webkit-font-variant-ligatures: normal;
font-variant-ligatures: normal;
left: 60px
}
}
body.admin > div.container-fluid > div > div.col-sm-10 > div.container-fluid > div.row:first-of-type > div.col > h2:before, body.admin > div.container-fluid > div > div.col-sm-10 > div.discover > h2:first-of-type:before, body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div.discover > h1:before, body.newuser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div.discover > h1:before {
@ -4813,14 +4842,8 @@ body.advsearch:not(.blur) > div.container-fluid > div.row-fluid > div.col-sm-10
z-index: 999999999999999999999999999999999999
}
body.search #shelf-actions button#add-to-shelf {
height: 40px;
}
@media screen and (max-width: 767px) {
body.search .discover, body.advsearch .discover {
display: flex;
flex-direction: column;
}
.search #shelf-actions, body.login .home-btn {
display: none
}
body.read:not(.blur) a[href*=readbooks] {
@ -5141,7 +5164,7 @@ body.login > div.navbar.navbar-default.navbar-static-top > div > div.navbar-head
right: 5px
}
body:not(.search) #shelf-actions > .btn-group.open, .downloadBtn.open, .profileDrop[aria-expanded=true] {
#shelf-actions > .btn-group.open, .downloadBtn.open, .profileDrop[aria-expanded=true] {
pointer-events: none
}
@ -5158,7 +5181,7 @@ body:not(.search) #shelf-actions > .btn-group.open, .downloadBtn.open, .profileD
color: var(--color-primary)
}
body:not(.search) #shelf-actions, body:not(.search) #shelf-actions > .btn-group, body:not(.search) #shelf-actions > .btn-group > .empty-ul {
#shelf-actions, #shelf-actions > .btn-group, #shelf-actions > .btn-group > .empty-ul {
pointer-events: none
}
@ -7286,11 +7309,6 @@ body.edituser.admin > div.container-fluid > div.row-fluid > div.col-sm-10 > div.
float: right
}
body.blur #main-nav + #scnd-nav .create-shelf, body.blur #main-nav + .col-sm-2 #scnd-nav .create-shelf {
float: none;
margin: 5px 0 10px -10px;
}
#main-nav + #scnd-nav .nav-head.hidden-xs {
display: list-item !important;
width: 225px

View File

@ -22,7 +22,3 @@ body.serieslist.grid-view div.container-fluid > div > div.col-sm-10::before {
padding: 0 0;
line-height: 15px;
}
input.datepicker {color: transparent}
input.datepicker:focus {color: transparent}
input.datepicker:focus + input {color: #555}

View File

@ -149,20 +149,6 @@ body {
word-wrap: break-word;
}
#mainContent > canvas {
display: block;
margin-left: auto;
margin-right: auto;
}
.long-strip > .mainImage {
margin-bottom: 4px;
}
.long-strip > .mainImage:last-child {
margin-bottom: 0px !important;
}
#titlebar {
min-height: 25px;
height: auto;

View File

@ -1,3 +0,0 @@
<svg width="12" height="13" viewBox="0 0 12 13" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M5.375 7.625V11.875C5.375 12.0408 5.44085 12.1997 5.55806 12.3169C5.67527 12.4342 5.83424 12.5 6 12.5C6.16576 12.5 6.32473 12.4342 6.44194 12.3169C6.55915 12.1997 6.625 12.0408 6.625 11.875V7.625L7.125 7.125H11.375C11.5408 7.125 11.6997 7.05915 11.8169 6.94194C11.9342 6.82473 12 6.66576 12 6.5C12 6.33424 11.9342 6.17527 11.8169 6.05806C11.6997 5.94085 11.5408 5.875 11.375 5.875H7.125L6.625 5.375V1.125C6.625 0.95924 6.55915 0.800269 6.44194 0.683058C6.32473 0.565848 6.16576 0.5 6 0.5C5.83424 0.5 5.67527 0.565848 5.55806 0.683058C5.44085 0.800269 5.375 0.95924 5.375 1.125V5.375L4.875 5.875H0.625C0.45924 5.875 0.300269 5.94085 0.183058 6.05806C0.065848 6.17527 0 6.33424 0 6.5C0 6.66576 0.065848 6.82473 0.183058 6.94194C0.300269 7.05915 0.45924 7.125 0.625 7.125H4.762L5.375 7.625Z" fill="black"/>
</svg>

View File

@ -1,3 +0,0 @@
<svg width="12" height="13" viewBox="0 0 12 13" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M6 0.5C5.21207 0.5 4.43185 0.655195 3.7039 0.956723C2.97595 1.25825 2.31451 1.70021 1.75736 2.25736C1.20021 2.81451 0.758251 3.47595 0.456723 4.2039C0.155195 4.93185 0 5.71207 0 6.5C0 7.28793 0.155195 8.06815 0.456723 8.7961C0.758251 9.52405 1.20021 10.1855 1.75736 10.7426C2.31451 11.2998 2.97595 11.7417 3.7039 12.0433C4.43185 12.3448 5.21207 12.5 6 12.5C7.5913 12.5 9.11742 11.8679 10.2426 10.7426C11.3679 9.61742 12 8.0913 12 6.5C12 4.9087 11.3679 3.38258 10.2426 2.25736C9.11742 1.13214 7.5913 0.5 6 0.5ZM5.06 8.9L2.9464 6.7856C2.85273 6.69171 2.80018 6.56446 2.80033 6.43183C2.80048 6.29921 2.85331 6.17207 2.9472 6.0784C3.04109 5.98473 3.16834 5.93218 3.30097 5.93233C3.43359 5.93248 3.56073 5.98531 3.6544 6.0792L5.3112 7.7368L8.3464 4.7008C8.44109 4.6109 8.56715 4.56153 8.69771 4.56322C8.82827 4.56492 8.95301 4.61754 9.04534 4.70986C9.13766 4.80219 9.19028 4.92693 9.19198 5.05749C9.19367 5.18805 9.1443 5.31411 9.0544 5.4088L5.5624 8.9H5.06Z" fill="#FBFBFE"/>
</svg>

View File

@ -1,6 +0,0 @@
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" height="40" width="40">
<path d="M9 3.5a1.5 1.5 0 0 0-3-.001v7.95C6 12.83 7.12 14 8.5 14s2.5-1.17 2.5-2.55V5.5a.5.5 0 0 1 1 0v6.03C11.955 13.427 10.405 15 8.5 15S5.044 13.426 5 11.53V3.5a2.5 2.5 0 0 1 5 0v7.003a1.5 1.5 0 0 1-3-.003v-5a.5.5 0 0 1 1 0v5a.5.5 0 0 0 1 0Z"/>
</svg>

View File

@ -1,7 +0,0 @@
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" height="40" width="40">
<path d="M8.156 12.5a.99.99 0 0 0 .707-.294l.523-2.574L10.5 8.499l1.058-1.04 2.65-.601a.996.996 0 0 0 0-1.414l-3.657-3.658a.996.996 0 0 0-1.414 0l-.523 2.576L7.5 5.499 6.442 6.535l-2.65.6a.996.996 0 0 0 0 1.413l3.657 3.658a.999.999 0 0 0 .707.295z"/>
<path d="M9.842.996c-.386 0-.77.146-1.06.44a.5.5 0 0 0-.136.251l-.492 2.43-1.008 1.03-.953.933-2.511.566a.5.5 0 0 0-.243.133 1.505 1.505 0 0 0-.002 2.123l1.477 1.477-2.768 2.767a.5.5 0 0 0 0 .707.5.5 0 0 0 .708 0l2.767-2.767 1.475 1.474a1.494 1.494 0 0 0 2.123-.002.5.5 0 0 0 .135-.254l.492-2.427 1.008-1.024.953-.937 2.511-.57a.5.5 0 0 0 .243-.132c.586-.58.583-1.543.002-2.125l-3.659-3.656A1.501 1.501 0 0 0 9.842.996Zm.05 1.025a.394.394 0 0 1 .305.12l3.658 3.657c.18.18.141.432.002.627l-2.41.545a.5.5 0 0 0-.24.131L10.15 8.142a.5.5 0 0 0-.007.006L9.029 9.283a.5.5 0 0 0-.133.25l-.48 2.36c-.082.053-.165.109-.26.109a.492.492 0 0 1-.353-.149L4.145 8.195c-.18-.18-.141-.432-.002-.627l2.41-.545a.5.5 0 0 0 .238-.13L7.85 5.857a.5.5 0 0 0 .007-.008l1.114-1.138a.5.5 0 0 0 .133-.25l.472-2.323a.619.619 0 0 1 .317-.117Z"/>
</svg>

View File

@ -1,6 +0,0 @@
<svg width="18" height="19" viewBox="0 0 18 19" fill="none" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" clip-rule="evenodd" d="M12.2 3.09C12.28 3.01 12.43 3 12.43 3C12.48 3 12.58 3.02 12.66 3.1L14.45 4.89C14.58 5.02 14.58 5.22 14.45 5.35L11.7713 8.02872L9.51628 5.77372L12.2 3.09ZM13.2658 5.12L11.7713 6.6145L10.9305 5.77372L12.425 4.27921L13.2658 5.12Z" fill="#FBFBFE"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M5.98 9.32L8.23 11.57L10.7106 9.08938L8.45562 6.83438L5.98 9.31V9.32ZM8.23 10.1558L9.29641 9.08938L8.45562 8.24859L7.38921 9.315L8.23 10.1558Z" fill="#FBFBFE"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M10.1526 13.1816L16.2125 7.1217C16.7576 6.58919 17.05 5.8707 17.05 5.12C17.05 4.36931 16.7576 3.65084 16.2126 3.11834L14.4317 1.33747C13.8992 0.79242 13.1807 0.5 12.43 0.5C11.6643 0.5 10.9529 0.812929 10.4329 1.33289L3.68289 8.08289C3.04127 8.72452 3.00459 9.75075 3.57288 10.4363L1.29187 12.7239C1.09186 12.9245 0.990263 13.1957 1.0007 13.4685L1 14.5C0.447715 14.5 0 14.9477 0 15.5V17.5C0 18.0523 0.447715 18.5 1 18.5H16C16.5523 18.5 17 18.0523 17 17.5V15.5C17 14.9477 16.5523 14.5 16 14.5H10.2325C9.83594 14.5 9.39953 13.9347 10.1526 13.1816ZM4.39 9.85L4.9807 10.4407L2.39762 13.0312H6.63877L7.10501 12.565L7.57125 13.0312H8.88875L15.51 6.41C15.86 6.07 16.05 5.61 16.05 5.12C16.05 4.63 15.86 4.17 15.51 3.83L13.72 2.04C13.38 1.69 12.92 1.5 12.43 1.5C11.94 1.5 11.48 1.7 11.14 2.04L4.39 8.79C4.1 9.08 4.1 9.56 4.39 9.85ZM16 17.5V15.5H1V17.5H16Z" fill="#FBFBFE"/>
<path d="M15.1616 6.05136L15.1616 6.05132L15.1564 6.05645L8.40645 12.8064C8.35915 12.8537 8.29589 12.88 8.23 12.88C8.16411 12.88 8.10085 12.8537 8.05355 12.8064L7.45857 12.2115L7.10501 11.8579L6.75146 12.2115L6.03289 12.93H3.20465L5.33477 10.7937L5.6873 10.4402L5.33426 10.0871L4.74355 9.49645C4.64882 9.40171 4.64882 9.23829 4.74355 9.14355L11.4936 2.39355C11.7436 2.14354 12.0779 2 12.43 2C12.7883 2 13.1179 2.13776 13.3614 2.38839L13.3613 2.38843L13.3664 2.39355L15.1564 4.18355L15.1564 4.18359L15.1616 4.18864C15.4122 4.43211 15.55 4.76166 15.55 5.12C15.55 5.47834 15.4122 5.80789 15.1616 6.05136ZM7.87645 11.9236L8.23 12.2771L8.58355 11.9236L11.0642 9.44293L11.4177 9.08938L11.0642 8.73582L8.80918 6.48082L8.45562 6.12727L8.10207 6.48082L5.62645 8.95645L5.48 9.10289V9.31V9.32V9.52711L5.62645 9.67355L7.87645 11.9236ZM11.4177 8.38227L11.7713 8.73582L12.1248 8.38227L14.8036 5.70355C15.1288 5.37829 15.1288 4.86171 14.8036 4.53645L13.0136 2.74645C12.8186 2.55146 12.5792 2.5 12.43 2.5H12.4134L12.3967 2.50111L12.43 3C12.3967 2.50111 12.3966 2.50112 12.3965 2.50112L12.3963 2.50114L12.3957 2.50117L12.3947 2.50125L12.3924 2.50142L12.387 2.50184L12.3732 2.50311C12.3628 2.50416 12.3498 2.50567 12.3346 2.50784C12.3049 2.51208 12.2642 2.51925 12.2178 2.53146C12.1396 2.55202 11.9797 2.60317 11.8464 2.73645L9.16273 5.42016L8.80918 5.77372L9.16273 6.12727L11.4177 8.38227ZM1.5 16H15.5V17H1.5V16Z" stroke="#15141A"/>
</svg>

View File

@ -1,3 +0,0 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M12 2.75H12.5V2.25V1V0.5H12H10.358C9.91165 0.5 9.47731 0.625661 9.09989 0.860442L9.09886 0.861087L8 1.54837L6.89997 0.860979L6.89911 0.860443C6.5218 0.625734 6.08748 0.5 5.642 0.5H4H3.5V1V2.25V2.75H4H5.642C5.66478 2.75 5.6885 2.75641 5.71008 2.76968C5.71023 2.76977 5.71038 2.76986 5.71053 2.76995L6.817 3.461C6.81704 3.46103 6.81709 3.46105 6.81713 3.46108C6.81713 3.46108 6.81713 3.46108 6.81714 3.46109C6.8552 3.48494 6.876 3.52285 6.876 3.567V8V12.433C6.876 12.4771 6.85523 12.515 6.81722 12.5389C6.81715 12.5389 6.81707 12.539 6.817 12.539L5.70953 13.23C5.70941 13.2301 5.70929 13.2302 5.70917 13.2303C5.68723 13.2438 5.6644 13.25 5.641 13.25H4H3.5V13.75V15V15.5H4H5.642C6.08835 15.5 6.52269 15.3743 6.90011 15.1396L6.90086 15.1391L8 14.4526L9.10003 15.14L9.10089 15.1406C9.47831 15.3753 9.91265 15.501 10.359 15.501H12H12.5V15.001V13.751V13.251H12H10.358C10.3352 13.251 10.3115 13.2446 10.2899 13.2313C10.2897 13.2312 10.2896 13.2311 10.2895 13.231L9.183 12.54C9.18298 12.54 9.18295 12.54 9.18293 12.54C9.18291 12.5399 9.18288 12.5399 9.18286 12.5399C9.14615 12.5169 9.125 12.4797 9.125 12.434V8V3.567C9.125 3.52266 9.14603 3.48441 9.18364 3.4606C9.18377 3.46052 9.1839 3.46043 9.18404 3.46035L10.2895 2.76995C10.2896 2.76985 10.2898 2.76975 10.2899 2.76966C10.3119 2.75619 10.3346 2.75 10.358 2.75H12Z" fill="black" stroke="white"/>
</svg>

View File

@ -1,4 +0,0 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M0.0189877 13.6645L0.612989 10.4635C0.687989 10.0545 0.884989 9.6805 1.18099 9.3825L9.98199 0.5805C10.756 -0.1925 12.015 -0.1945 12.792 0.5805L14.42 2.2085C15.194 2.9835 15.194 4.2435 14.42 5.0185L5.61599 13.8215C5.31999 14.1165 4.94599 14.3125 4.53799 14.3875L1.33599 14.9815C1.26599 14.9935 1.19799 15.0005 1.12999 15.0005C0.832989 15.0005 0.544988 14.8835 0.330988 14.6695C0.0679874 14.4055 -0.0490122 14.0305 0.0189877 13.6645Z" fill="white"/>
<path d="M0.0189877 13.6645L0.612989 10.4635C0.687989 10.0545 0.884989 9.6805 1.18099 9.3825L9.98199 0.5805C10.756 -0.1925 12.015 -0.1945 12.792 0.5805L14.42 2.2085C15.194 2.9835 15.194 4.2435 14.42 5.0185L5.61599 13.8215C5.31999 14.1165 4.94599 14.3125 4.53799 14.3875L1.33599 14.9815C1.26599 14.9935 1.19799 15.0005 1.12999 15.0005C0.832989 15.0005 0.544988 14.8835 0.330988 14.6695C0.0679874 14.4055 -0.0490122 14.0305 0.0189877 13.6645ZM12.472 5.1965L13.632 4.0365L13.631 3.1885L11.811 1.3675L10.963 1.3685L9.80299 2.5285L12.472 5.1965ZM4.31099 13.1585C4.47099 13.1285 4.61799 13.0515 4.73399 12.9345L11.587 6.0815L8.91899 3.4135L2.06599 10.2655C1.94899 10.3835 1.87199 10.5305 1.84099 10.6915L1.36699 13.2485L1.75199 13.6335L4.31099 13.1585Z" fill="black"/>
</svg>

View File

@ -1,8 +0,0 @@
<svg width="29" height="32" viewBox="0 0 29 32" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M28 16.75C28.2761 16.75 28.5 16.5261 28.5 16.25V15C28.5 14.7239 28.2761 14.5 28 14.5H26.358C25.9117 14.5 25.4773 14.6257 25.0999 14.8604L25.0989 14.8611L24 15.5484L22.9 14.861L22.8991 14.8604C22.5218 14.6257 22.0875 14.5 21.642 14.5H20C19.7239 14.5 19.5 14.7239 19.5 15V16.25C19.5 16.5261 19.7239 16.75 20 16.75H21.642C21.6648 16.75 21.6885 16.7564 21.7101 16.7697C21.7102 16.7698 21.7104 16.7699 21.7105 16.77L22.817 17.461C22.817 17.461 22.8171 17.4611 22.8171 17.4611C22.8171 17.4611 22.8171 17.4611 22.8171 17.4611C22.8552 17.4849 22.876 17.5229 22.876 17.567V22.625V27.683C22.876 27.7271 22.8552 27.765 22.8172 27.7889C22.8171 27.7889 22.8171 27.789 22.817 27.789L21.7095 28.48C21.7094 28.4801 21.7093 28.4802 21.7092 28.4803C21.6872 28.4938 21.6644 28.5 21.641 28.5H20C19.7239 28.5 19.5 28.7239 19.5 29V30.25C19.5 30.5261 19.7239 30.75 20 30.75H21.642C22.0883 30.75 22.5227 30.6243 22.9001 30.3896L22.9009 30.3891L24 29.7026L25.1 30.39L25.1009 30.3906C25.4783 30.6253 25.9127 30.751 26.359 30.751H28C28.2761 30.751 28.5 30.5271 28.5 30.251V29.001C28.5 28.7249 28.2761 28.501 28 28.501H26.358C26.3352 28.501 26.3115 28.4946 26.2899 28.4813C26.2897 28.4812 26.2896 28.4811 26.2895 28.481L25.183 27.79C25.183 27.79 25.183 27.79 25.1829 27.79C25.1829 27.7899 25.1829 27.7899 25.1829 27.7899C25.1462 27.7669 25.125 27.7297 25.125 27.684V22.625V17.567C25.125 17.5227 25.146 17.4844 25.1836 17.4606C25.1838 17.4605 25.1839 17.4604 25.184 17.4603L26.2895 16.77C26.2896 16.7699 26.2898 16.7698 26.2899 16.7697C26.3119 16.7562 26.3346 16.75 26.358 16.75H28Z" fill="black" stroke="#FBFBFE" stroke-linejoin="round"/>
<path d="M24.625 17.567C24.625 17.35 24.735 17.152 24.918 17.037L26.026 16.345C26.126 16.283 26.24 16.25 26.358 16.25H28V15H26.358C26.006 15 25.663 15.099 25.364 15.285L24.256 15.978C24.161 16.037 24.081 16.113 24 16.187C23.918 16.113 23.839 16.037 23.744 15.978L22.635 15.285C22.336 15.099 21.993 15 21.642 15H20V16.25H21.642C21.759 16.25 21.874 16.283 21.974 16.345L23.082 17.037C23.266 17.152 23.376 17.35 23.376 17.567V22.625V27.683C23.376 27.9 23.266 28.098 23.082 28.213L21.973 28.905C21.873 28.967 21.759 29 21.641 29H20V30.25H21.642C21.994 30.25 22.337 30.151 22.636 29.965L23.744 29.273C23.84 29.213 23.919 29.137 24 29.064C24.081 29.137 24.161 29.213 24.256 29.273L25.365 29.966C25.664 30.152 26.007 30.251 26.359 30.251H28V29.001H26.358C26.241 29.001 26.126 28.968 26.026 28.906L24.918 28.214C24.734 28.099 24.625 27.901 24.625 27.684V22.625V17.567Z" fill="black"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M12.2 2.59C12.28 2.51 12.43 2.5 12.43 2.5C12.48 2.5 12.58 2.52 12.66 2.6L14.45 4.39C14.58 4.52 14.58 4.72 14.45 4.85L11.7713 7.52872L9.51628 5.27372L12.2 2.59ZM13.2658 4.62L11.7713 6.1145L10.9305 5.27372L12.425 3.77921L13.2658 4.62Z" fill="#FBFBFE"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M5.98 8.82L8.23 11.07L10.7106 8.58938L8.45562 6.33438L5.98 8.81V8.82ZM8.23 9.65579L9.29641 8.58938L8.45562 7.74859L7.38921 8.815L8.23 9.65579Z" fill="#FBFBFE"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M10.1526 12.6816L16.2125 6.6217C16.7576 6.08919 17.05 5.3707 17.05 4.62C17.05 3.86931 16.7576 3.15084 16.2126 2.61834L14.4317 0.837474C13.8992 0.29242 13.1807 0 12.43 0C11.6643 0 10.9529 0.312929 10.4329 0.832893L3.68289 7.58289C3.04127 8.22452 3.00459 9.25075 3.57288 9.93634L1.29187 12.2239C1.09186 12.4245 0.990263 12.6957 1.0007 12.9685L1 14C0.447715 14 0 14.4477 0 15V17C0 17.5523 0.447715 18 1 18H16C16.5523 18 17 17.5523 17 17V15C17 14.4477 16.5523 14 16 14H10.2325C9.83594 14 9.39953 13.4347 10.1526 12.6816ZM4.39 9.35L4.9807 9.9407L2.39762 12.5312H6.63877L7.10501 12.065L7.57125 12.5312H8.88875L15.51 5.91C15.86 5.57 16.05 5.11 16.05 4.62C16.05 4.13 15.86 3.67 15.51 3.33L13.72 1.54C13.38 1.19 12.92 1 12.43 1C11.94 1 11.48 1.2 11.14 1.54L4.39 8.29C4.1 8.58 4.1 9.06 4.39 9.35ZM16 17V15H1V17H16Z" fill="#FBFBFE"/>
<path d="M15.1616 5.55136L15.1616 5.55132L15.1564 5.55645L8.40645 12.3064C8.35915 12.3537 8.29589 12.38 8.23 12.38C8.16411 12.38 8.10085 12.3537 8.05355 12.3064L7.45857 11.7115L7.10501 11.3579L6.75146 11.7115L6.03289 12.43H3.20465L5.33477 10.2937L5.6873 9.94019L5.33426 9.58715L4.74355 8.99645C4.64882 8.90171 4.64882 8.73829 4.74355 8.64355L11.4936 1.89355C11.7436 1.64354 12.0779 1.5 12.43 1.5C12.7883 1.5 13.1179 1.63776 13.3614 1.88839L13.3613 1.88843L13.3664 1.89355L15.1564 3.68355L15.1564 3.68359L15.1616 3.68864C15.4122 3.93211 15.55 4.26166 15.55 4.62C15.55 4.97834 15.4122 5.30789 15.1616 5.55136ZM5.48 8.82V9.02711L5.62645 9.17355L7.87645 11.4236L8.23 11.7771L8.58355 11.4236L11.0642 8.94293L11.4177 8.58938L11.0642 8.23582L8.80918 5.98082L8.45562 5.62727L8.10207 5.98082L5.62645 8.45645L5.48 8.60289V8.81V8.82ZM11.4177 7.88227L11.7713 8.23582L12.1248 7.88227L14.8036 5.20355C15.1288 4.87829 15.1288 4.36171 14.8036 4.03645L13.0136 2.24645C12.8186 2.05146 12.5792 2 12.43 2H12.4134L12.3967 2.00111L12.43 2.5C12.3967 2.00111 12.3966 2.00112 12.3965 2.00112L12.3963 2.00114L12.3957 2.00117L12.3947 2.00125L12.3924 2.00142L12.387 2.00184L12.3732 2.00311C12.3628 2.00416 12.3498 2.00567 12.3346 2.00784C12.3049 2.01208 12.2642 2.01925 12.2178 2.03146C12.1396 2.05202 11.9797 2.10317 11.8464 2.23645L9.16273 4.92016L8.80918 5.27372L9.16273 5.62727L11.4177 7.88227ZM1.5 16.5V15.5H15.5V16.5H1.5Z" stroke="#15141A"/>
</svg>

View File

@ -1,5 +0,0 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" clip-rule="evenodd"
d="M11 3H13.6C14 3 14.3 3.3 14.3 3.6C14.3 3.9 14 4.2 13.7 4.2H13.3V14C13.3 15.1 12.4 16 11.3 16H4.80005C3.70005 16 2.80005 15.1 2.80005 14V4.2H2.40005C2.00005 4.2 1.80005 4 1.80005 3.6C1.80005 3.2 2.00005 3 2.40005 3H5.00005V2C5.00005 0.9 5.90005 0 7.00005 0H9.00005C10.1 0 11 0.9 11 2V3ZM6.90005 1.2L6.30005 1.8V3H9.80005V1.8L9.20005 1.2H6.90005ZM11.4 14.7L12 14.1V4.2H4.00005V14.1L4.60005 14.7H11.4ZM7.00005 12.4C7.00005 12.7 6.70005 13 6.40005 13C6.10005 13 5.80005 12.7 5.80005 12.4V7.6C5.70005 7.3 6.00005 7 6.40005 7C6.80005 7 7.00005 7.3 7.00005 7.6V12.4ZM10.2001 12.4C10.2001 12.7 9.90006 13 9.60006 13C9.30006 13 9.00006 12.7 9.00006 12.4V7.6C9.00006 7.3 9.30006 7 9.60006 7C9.90006 7 10.2001 7.3 10.2001 7.6V12.4Z"
fill="black" />
</svg>

View File

@ -0,0 +1,6 @@
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16"
fill="rgba(255,255,255,1)"><path d="M8 12a1 1 0 0 1-.707-.293l-5-5a1 1 0 0 1 1.414-1.414L8
9.586l4.293-4.293a1 1 0 0 1 1.414 1.414l-5 5A1 1 0 0 1 8 12z"></path></svg>

View File

@ -1,3 +1,4 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M10.999 8.352L5.534 13.818C5.41551 13.9303 5.25786 13.9918 5.09466 13.9895C4.93146 13.9872 4.77561 13.9212 4.66033 13.8057C4.54505 13.6902 4.47945 13.5342 4.47752 13.3709C4.47559 13.2077 4.53748 13.0502 4.65 12.932L9.585 7.998L4.651 3.067C4.53862 2.94864 4.47691 2.79106 4.47903 2.62786C4.48114 2.46466 4.54692 2.30874 4.66233 2.19333C4.77774 2.07792 4.93366 2.01215 5.09686 2.01003C5.26006 2.00792 5.41763 2.06962 5.536 2.182L11 7.647L10.999 8.352Z" fill="black"/>
</svg>
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16"><path d="M8 12a1 1 0 0 1-.707-.293l-5-5a1 1 0 0 1 1.414-1.414L8 9.586l4.293-4.293a1 1 0 0 1 1.414 1.414l-5 5A1 1 0 0 1 8 12z"></path></svg>

View File

@ -0,0 +1,5 @@
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16"
fill="rgba(255,255,255,1)"><path d="M13 11a1 1 0 0 1-.707-.293L8 6.414l-4.293 4.293a1 1 0 0 1-1.414-1.414l5-5a1 1 0 0 1 1.414 0l5 5A1 1 0 0 1 13 11z"></path></svg>

View File

@ -1,3 +1,4 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M5.001 8.352L10.466 13.818C10.5845 13.9303 10.7421 13.9918 10.9053 13.9895C11.0685 13.9872 11.2244 13.9212 11.3397 13.8057C11.4549 13.6902 11.5205 13.5342 11.5225 13.3709C11.5244 13.2077 11.4625 13.0502 11.35 12.932L6.416 7.999L11.349 3.067C11.4614 2.94864 11.5231 2.79106 11.521 2.62786C11.5189 2.46466 11.4531 2.30874 11.3377 2.19333C11.2223 2.07792 11.0663 2.01215 10.9031 2.01003C10.7399 2.00792 10.5824 2.06962 10.464 2.182L5 7.647L5.001 8.352Z" fill="black"/>
</svg>
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16"><path d="M13 11a1 1 0 0 1-.707-.293L8 6.414l-4.293 4.293a1 1 0 0 1-1.414-1.414l5-5a1 1 0 0 1 1.414 0l5 5A1 1 0 0 1 13 11z"></path></svg>

Binary file not shown.

Binary file not shown.
View File

@ -1,3 +0,0 @@
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" clip-rule="evenodd" d="M16.969 10.059C17.262 9.766 17.737 9.766 18.03 10.059C18.323 10.352 18.323 10.827 18.03 11.12L12.15 17H11.35L5.46896 11.12C5.17596 10.827 5.17596 10.352 5.46896 10.059C5.76196 9.766 6.23696 9.766 6.52996 10.059L11 14.529V2.75C11 2.336 11.336 2 11.75 2C12.164 2 12.5 2.336 12.499 2.75V14.529L16.969 10.059ZM4.98193 19.7L5.78193 20.5H17.7169L18.5169 19.7V17.75C18.5169 17.336 18.8529 17 19.2669 17C19.6809 17 20.0169 17.336 20.0169 17.75V19.5C20.0169 20.881 18.8979 22 17.5169 22H5.98193C4.60093 22 3.48193 20.881 3.48193 19.5V17.75C3.48193 17.336 3.81793 17 4.23193 17C4.64593 17 4.98193 17.336 4.98193 17.75V19.7Z" fill="black"/>
</svg>

View File

@ -0,0 +1,24 @@
<svg xmlns="http://www.w3.org/2000/svg" height="16" width="16"
fill="rgba(255,255,255,1)" style="animation:spinLoadingIcon 1s steps(12,end)
infinite"><style>@keyframes
spinLoadingIcon{to{transform:rotate(360deg)}}</style><path
d="M7 3V1s0-1 1-1 1 1 1 1v2s0 1-1 1-1-1-1-1z"/><path d="M4.63
4.1l-1-1.73S3.13 1.5 4 1c.87-.5 1.37.37 1.37.37l1 1.73s.5.87-.37
1.37c-.87.57-1.37-.37-1.37-.37z" fill-opacity=".93"/><path
d="M3.1 6.37l-1.73-1S.5 4.87 1 4c.5-.87 1.37-.37 1.37-.37l1.73 1s.87.5.37
1.37c-.5.87-1.37.37-1.37.37z" fill-opacity=".86"/><path d="M3
9H1S0 9 0 8s1-1 1-1h2s1 0 1 1-1 1-1 1z" fill-opacity=".79"/><path d="M4.1 11.37l-1.73 1S1.5 12.87 1
12c-.5-.87.37-1.37.37-1.37l1.73-1s.87-.5 1.37.37c.5.87-.37 1.37-.37 1.37z"
fill-opacity=".72"/><path d="M3.63 13.56l1-1.73s.5-.87
1.37-.37c.87.5.37 1.37.37 1.37l-1 1.73s-.5.87-1.37.37c-.87-.5-.37-1.37-.37-1.37z"
fill-opacity=".65"/><path d="M7 15v-2s0-1 1-1 1 1 1 1v2s0 1-1
1-1-1-1-1z" fill-opacity=".58"/><path d="M10.63
14.56l-1-1.73s-.5-.87.37-1.37c.87-.5 1.37.37 1.37.37l1 1.73s.5.87-.37
1.37c-.87.5-1.37-.37-1.37-.37z" fill-opacity=".51"/><path
d="M13.56 12.37l-1.73-1s-.87-.5-.37-1.37c.5-.87 1.37-.37 1.37-.37l1.73 1s.87.5.37
1.37c-.5.87-1.37.37-1.37.37z" fill-opacity=".44"/><path d="M15
9h-2s-1 0-1-1 1-1 1-1h2s1 0 1 1-1 1-1 1z" fill-opacity=".37"/><path d="M14.56 5.37l-1.73
1s-.87.5-1.37-.37c-.5-.87.37-1.37.37-1.37l1.73-1s.87-.5 1.37.37c.5.87-.37 1.37-.37
1.37z" fill-opacity=".3"/><path d="M9.64 3.1l.98-1.66s.5-.874
1.37-.37c.87.5.37 1.37.37 1.37l-1 1.73s-.5.87-1.37.37c-.87-.5-.37-1.37-.37-1.37z"
fill-opacity=".23"/></svg>

View File

@ -0,0 +1,16 @@
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16
16"
fill="rgba(255,255,255,1)">
<path
d="M8 16a8 8 0 1 1 8-8 8.009 8.009 0 0 1-8 8zM8 2a6 6 0 1 0 6 6 6.006 6.006 0 0 0-6-6z">
</path>
<path
d="M8 7a1 1 0 0 0-1 1v3a1 1 0 0 0 2 0V8a1 1 0 0 0-1-1z">
</path>
<circle
cx="8" cy="5" r="1.188">
</circle>
</svg>

View File

@ -1,3 +1,15 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M8 1.5C4.41015 1.5 1.5 4.41015 1.5 8C1.5 11.5899 4.41015 14.5 8 14.5C11.5899 14.5 14.5 11.5899 14.5 8C14.5 4.41015 11.5899 1.5 8 1.5ZM0 8C0 3.58172 3.58172 0 8 0C12.4183 0 16 3.58172 16 8C16 12.4183 12.4183 16 8 16C3.58172 16 0 12.4183 0 8ZM8.75 4V5.5H7.25V4H8.75ZM8.75 12V7H7.25V12H8.75Z" fill="black"/>
</svg>
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16
16">
<path
d="M8 16a8 8 0 1 1 8-8 8.009 8.009 0 0 1-8 8zM8 2a6 6 0 1 0 6 6 6.006 6.006 0 0 0-6-6z">
</path>
<path
d="M8 7a1 1 0 0 0-1 1v3a1 1 0 0 0 2 0V8a1 1 0 0 0-1-1z">
</path>
<circle
cx="8" cy="5" r="1.188">
</circle>
</svg>

View File

@ -0,0 +1,2 @@
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16"
fill="rgba(255,255,255,1)"><path d="M13 13c-.3 0-.5-.1-.7-.3L8 8.4l-4.3 4.3c-.9.9-2.3-.5-1.4-1.4l5-5c.4-.4 1-.4 1.4 0l5 5c.6.6.2 1.7-.7 1.7zm0-11H3C1.7 2 1.7 4 3 4h10c1.3 0 1.3-2 0-2z"/></svg>

View File

@ -1,3 +1 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M14 3.5H2V5H14V3.5ZM8 8.811L12.939 13.75L14.001 12.689L8.531 7.219C8.238 6.926 7.763 6.926 7.47 7.219L2 12.689L3.061 13.75L8 8.811Z" fill="black"/>
</svg>
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16"><path d="M13 13c-.3 0-.5-.1-.7-.3L8 8.4l-4.3 4.3c-.9.9-2.3-.5-1.4-1.4l5-5c.4-.4 1-.4 1.4 0l5 5c.6.6.2 1.7-.7 1.7zm0-11H3C1.7 2 1.7 4 3 4h10c1.3 0 1.3-2 0-2z"/></svg>

View File

@ -0,0 +1,2 @@
<svg xmlns="http://www.w3.org/2000/svg" height="16" width="16"
fill="rgba(255,255,255,1)"><path d="M15 3.7V13c0 1.5-1.53 3-3 3H7.13c-.72 0-1.63-.5-2.13-1l-5-5s.84-1 .87-1c.13-.1.33-.2.53-.2.1 0 .3.1.4.2L4 10.6V2.7c0-.6.4-1 1-1s1 .4 1 1v4.6h1V1c0-.6.4-1 1-1s1 .4 1 1v6.3h1V1.7c0-.6.4-1 1-1s1 .4 1 1v5.7h1V3.7c0-.6.4-1 1-1s1 .4 1 1z"/></svg>

View File

@ -1,3 +1 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M7.75 2.125C7.75 1.78021 8.03021 1.5 8.375 1.5C8.71979 1.5 9 1.78021 9 2.125V3.125V8H10.5V3.125C10.5 2.78021 10.7802 2.5 11.125 2.5C11.4698 2.5 11.75 2.78021 11.75 3.125V4.625V8H13.25V4.625C13.25 4.28021 13.5302 4 13.875 4C14.2198 4 14.5 4.28021 14.5 4.625V12.0188L13.3802 13.6628C13.2954 13.7872 13.25 13.9344 13.25 14.085V16H14.75V14.3162L15.8698 12.6722C15.9546 12.5478 16 12.4006 16 12.25V4.625C16 3.45179 15.0482 2.5 13.875 2.5C13.6346 2.5 13.4035 2.53996 13.188 2.6136C12.959 1.68724 12.1219 1 11.125 1C10.8235 1 10.5366 1.06286 10.2768 1.17618C9.9281 0.478968 9.20726 0 8.375 0C7.54274 0 6.8219 0.478968 6.47323 1.17618C6.21337 1.06286 5.9265 1 5.625 1C4.45179 1 3.5 1.95179 3.5 3.125V7.25317C2.66504 6.54282 1.41035 6.58199 0.621672 7.37067C-0.208221 8.20056 -0.208221 9.54644 0.621672 10.3763L0.62188 10.3765L5.499 15.2498V16H6.999V14.939C6.999 14.74 6.9199 14.5491 6.77912 14.4085L1.68233 9.31567C1.43823 9.07156 1.43823 8.67544 1.68233 8.43133C1.92644 8.18722 2.32257 8.18722 2.56667 8.43133L3.71967 9.58433C3.93417 9.79883 4.25676 9.863 4.53701 9.74691C4.81727 9.63082 5 9.35735 5 9.054V3.125C5 2.78021 5.28022 2.5 5.625 2.5C5.96921 2.5 6.24906 2.77927 6.25 3.12326V8H7.75L7.75 3.125L7.75 3.12178V2.125Z" fill="black"/>
</svg>
<svg xmlns="http://www.w3.org/2000/svg" height="16" width="16"><path d="M15 3.7V13c0 1.5-1.53 3-3 3H7.13c-.72 0-1.63-.5-2.13-1l-5-5s.84-1 .87-1c.13-.1.33-.2.53-.2.1 0 .3.1.4.2L4 10.6V2.7c0-.6.4-1 1-1s1 .4 1 1v4.6h1V1c0-.6.4-1 1-1s1 .4 1 1v6.3h1V1.7c0-.6.4-1 1-1s1 .4 1 1v5.7h1V3.7c0-.6.4-1 1-1s1 .4 1 1z"/></svg>

View File

@ -0,0 +1,2 @@
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16"
fill="rgba(255,255,255,1)"><path d="M8 10c-.3 0-.5-.1-.7-.3l-5-5c-.9-.9.5-2.3 1.4-1.4L8 7.6l4.3-4.3c.9-.9 2.3.5 1.4 1.4l-5 5c-.2.2-.4.3-.7.3zm5 2H3c-1.3 0-1.3 2 0 2h10c1.3 0 1.3-2 0-2z"/></svg>

View File

@ -1,3 +1 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M8 8.189L12.939 3.25L14 4.311L8.531 9.781C8.238 10.074 7.763 10.074 7.47 9.781L2 4.311L3.061 3.25L8 8.189ZM14 13.5V12H2V13.5H14Z" fill="black"/>
</svg>
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16"><path d="M8 10c-.3 0-.5-.1-.7-.3l-5-5c-.9-.9.5-2.3 1.4-1.4L8 7.6l4.3-4.3c.9-.9 2.3.5 1.4 1.4l-5 5c-.2.2-.4.3-.7.3zm5 2H3c-1.3 0-1.3 2 0 2h10c1.3 0 1.3-2 0-2z"/></svg>

View File

@ -0,0 +1,2 @@
<svg xmlns="http://www.w3.org/2000/svg" height="16" width="16"
fill="rgba(255,255,255,1)"><path d="M1 1a1 1 0 011 1v2.4A7 7 0 118 15a7 7 0 01-4.9-2 1 1 0 011.4-1.5 5 5 0 10-1-5.5H6a1 1 0 010 2H1a1 1 0 01-1-1V2a1 1 0 011-1z"/></svg>

View File

@ -1,3 +1 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M3.4105 4.83612L4.77001 6.19601C5.06701 6.49201 4.85701 7.00001 4.43701 7.00001H0.862006C0.602006 7.00001 0.391006 6.78901 0.391006 6.52901V2.95401C0.391006 2.53401 0.899006 2.32401 1.19601 2.62101L2.32796 3.75328C3.67958 1.78973 5.9401 0.5 8.5 0.5C12.636 0.5 16 3.864 16 8C16 12.136 12.636 15.5 8.5 15.5C4.704 15.5 1.566 12.663 1.075 9H2.59C3.068 11.833 5.532 14 8.5 14C11.809 14 14.5 11.309 14.5 8C14.5 4.691 11.809 2 8.5 2C6.35262 2 4.46893 3.13503 3.4105 4.83612Z" fill="black"/>
</svg>
<svg xmlns="http://www.w3.org/2000/svg" height="16" width="16"><path d="M1 1a1 1 0 011 1v2.4A7 7 0 118 15a7 7 0 01-4.9-2 1 1 0 011.4-1.5 5 5 0 10-1-5.5H6a1 1 0 010 2H1a1 1 0 01-1-1V2a1 1 0 011-1z"/></svg>

View File

@ -0,0 +1,5 @@
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16"
fill="rgba(255,255,255,1)"><path d="M15 1a1 1 0 0 0-1 1v2.418A6.995 6.995 0 1 0 8 15a6.954 6.954 0 0 0 4.95-2.05 1 1 0 0 0-1.414-1.414A5.019 5.019 0 1 1 12.549 6H10a1 1 0 0 0 0 2h5a1 1 0 0 0 1-1V2a1 1 0 0 0-1-1z"></path></svg>

View File

@ -1,3 +1,4 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M12.5895 4.83613L11.23 6.19601C10.933 6.49201 11.143 7.00001 11.563 7.00001H15.138C15.398 7.00001 15.609 6.78901 15.609 6.52901V2.95401C15.609 2.53401 15.101 2.32401 14.804 2.62101L13.672 3.75328C12.3204 1.78973 10.0599 0.5 7.5 0.5C3.364 0.5 0 3.864 0 8C0 12.136 3.364 15.5 7.5 15.5C11.296 15.5 14.434 12.663 14.925 9H13.41C12.932 11.833 10.468 14 7.5 14C4.191 14 1.5 11.309 1.5 8C1.5 4.691 4.191 2 7.5 2C9.64738 2 11.5311 3.13503 12.5895 4.83613Z" fill="black"/>
</svg>
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16"><path d="M15 1a1 1 0 0 0-1 1v2.418A6.995 6.995 0 1 0 8 15a6.954 6.954 0 0 0 4.95-2.05 1 1 0 0 0-1.414-1.414A5.019 5.019 0 1 1 12.549 6H10a1 1 0 0 0 0 2h5a1 1 0 0 0 1-1V2a1 1 0 0 0-1-1z"></path></svg>

View File

@ -0,0 +1,2 @@
<svg xmlns="http://www.w3.org/2000/svg" height="16" width="16"
fill="rgba(255,255,255,1)"><path d="M0 4h1.5c1 0 1.5.5 1.5 1.5v5c0 1-.5 1.5-1.5 1.5H0zM9.5 4c1 0 1.5.5 1.5 1.5v5c0 1-.5 1.5-1.5 1.5h-3c-1 0-1.5-.5-1.5-1.5v-5C5 4.5 5.5 4 6.5 4zM16 4h-1.5c-1 0-1.5.5-1.5 1.5v5c0 1 .5 1.5 1.5 1.5H16z"/></svg>

View File

@ -1,3 +1 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M3 3.78C3 2.7621 2.13279 2.11834 1.25 2.01476V2H1V3.5C1.18133 3.5 1.32279 3.5609 1.40708 3.63029C1.48961 3.69823 1.5 3.75458 1.5 3.78V11.72C1.5 11.7454 1.48961 11.8018 1.40708 11.8697C1.32279 11.9391 1.18133 12 1 12V13.5H1.25V13.4852C2.13279 13.3817 3 12.7379 3 11.72V3.78ZM10.5 4C10.5 3.72386 10.2761 3.5 10 3.5H6.5C6.22386 3.5 6 3.72386 6 4V11.5C6 11.7761 6.22386 12 6.5 12H10C10.2761 12 10.5 11.7761 10.5 11.5V4ZM10 2C11.1046 2 12 2.89543 12 4V11.5C12 12.6046 11.1046 13.5 10 13.5H6.5C5.39543 13.5 4.5 12.6046 4.5 11.5V4C4.5 2.89543 5.39543 2 6.5 2H10ZM15.5 2H15.25V2.01476C14.3672 2.11834 13.5 2.7621 13.5 3.78V11.72C13.5 12.7379 14.3672 13.3817 15.25 13.4852V13.5H15.5V12C15.3187 12 15.1772 11.9391 15.0929 11.8697C15.0104 11.8018 15 11.7454 15 11.72V3.78C15 3.75458 15.0104 3.69823 15.0929 3.63029C15.1772 3.5609 15.3187 3.5 15.5 3.5V2Z" fill="black"/>
</svg>
<svg xmlns="http://www.w3.org/2000/svg" height="16" width="16"><path d="M0 4h1.5c1 0 1.5.5 1.5 1.5v5c0 1-.5 1.5-1.5 1.5H0zM9.5 4c1 0 1.5.5 1.5 1.5v5c0 1-.5 1.5-1.5 1.5h-3c-1 0-1.5-.5-1.5-1.5v-5C5 4.5 5.5 4 6.5 4zM16 4h-1.5c-1 0-1.5.5-1.5 1.5v5c0 1 .5 1.5 1.5 1.5H16z"/></svg>

View File

@ -1,3 +0,0 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M3.5 2C3.5 1.72421 3.72421 1.5 4 1.5H12C12.2758 1.5 12.5 1.72421 12.5 2V14C12.5 14.2758 12.2758 14.5 12 14.5H4C3.72421 14.5 3.5 14.2758 3.5 14V2ZM4 0C2.89579 0 2 0.895786 2 2V14C2 15.1042 2.89579 16 4 16H12C13.1042 16 14 15.1042 14 14V2C14 0.895786 13.1042 0 12 0H4ZM5.89301 6H7.25V10H5.89301C5.54301 10 5.36801 10.423 5.61501 10.67L7.72101 12.776C7.87401 12.929 8.12301 12.929 8.27601 12.776L10.383 10.669C10.63 10.422 10.455 9.99902 10.105 9.99902H8.75V6H10.106C10.456 6 10.632 5.577 10.383 5.331L8.27601 3.224C8.12301 3.071 7.87401 3.071 7.72101 3.224L5.61501 5.33C5.36801 5.577 5.54301 6 5.89301 6Z" fill="black"/>
</svg>

View File

@ -0,0 +1,2 @@
<svg xmlns="http://www.w3.org/2000/svg" height="16" width="16"
fill="rgba(255,255,255,1)"><path d="M9.5 4c1 0 1.5.5 1.5 1.5v5c0 1-.5 1.5-1.5 1.5h-3c-1 0-1.5-.5-1.5-1.5v-5C5 4.5 5.5 4 6.5 4zM11 0v.5c0 1-.5 1.5-1.5 1.5h-3C5.5 2 5 1.5 5 .5V0h6zM11 16v-.5c0-1-.5-1.5-1.5-1.5h-3c-1 0-1.5.5-1.5 1.5v.5h6z"/></svg>

View File

@ -1,3 +1 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M2 1V1.25H2.01476C2.11834 2.13279 2.7621 3 3.78 3H11.72C12.7379 3 13.3817 2.13279 13.4852 1.25H13.5V1H12C12 1.18133 11.9391 1.32279 11.8697 1.40708C11.8018 1.48961 11.7454 1.5 11.72 1.5H3.78C3.75458 1.5 3.69823 1.48961 3.63029 1.40708C3.5609 1.32279 3.5 1.18133 3.5 1H2ZM4 6C3.72386 6 3.5 6.22386 3.5 6.5V10C3.5 10.2761 3.72386 10.5 4 10.5H11.5C11.7761 10.5 12 10.2761 12 10V6.5C12 6.22386 11.7761 6 11.5 6H4ZM2 6.5C2 5.39543 2.89543 4.5 4 4.5H11.5C12.6046 4.5 13.5 5.39543 13.5 6.5V10C13.5 11.1046 12.6046 12 11.5 12H4C2.89543 12 2 11.1046 2 10V6.5ZM3.78 13.5C2.7621 13.5 2.11834 14.3672 2.01476 15.25H2V15.5H3.5C3.5 15.3187 3.5609 15.1772 3.63029 15.0929C3.69823 15.0104 3.75458 15 3.78 15H11.72C11.7454 15 11.8018 15.0104 11.8697 15.0929C11.9391 15.1772 12 15.3187 12 15.5H13.5V15.25H13.4852C13.3817 14.3672 12.7379 13.5 11.72 13.5H3.78Z" fill="black"/>
</svg>
<svg xmlns="http://www.w3.org/2000/svg" height="16" width="16"><path d="M9.5 4c1 0 1.5.5 1.5 1.5v5c0 1-.5 1.5-1.5 1.5h-3c-1 0-1.5-.5-1.5-1.5v-5C5 4.5 5.5 4 6.5 4zM11 0v.5c0 1-.5 1.5-1.5 1.5h-3C5.5 2 5 1.5 5 .5V0h6zM11 16v-.5c0-1-.5-1.5-1.5-1.5h-3c-1 0-1.5.5-1.5 1.5v.5h6z"/></svg>

View File

@ -0,0 +1,2 @@
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16"
fill="rgba(255,255,255,1)"><path d="M5.5 4c1 0 1.5.5 1.5 1.5v5c0 1-.5 1.5-1.5 1.5h-3c-1 0-1.5-.5-1.5-1.5v-5C1 4.5 1.5 4 2.5 4zM7 0v.5C7 1.5 6.5 2 5.5 2h-3C1.5 2 1 1.5 1 .5V0h6zM7 16v-.5c0-1-.5-1.5-1.5-1.5h-3c-1 0-1.5.5-1.5 1.5v.5h6zM13.5 4c1 0 1.5.5 1.5 1.5v5c0 1-.5 1.5-1.5 1.5h-3c-1 0-1.5-.5-1.5-1.5v-5c0-1 .5-1.5 1.5-1.5zM15 0v.5c0 1-.5 1.5-1.5 1.5h-3C9.5 2 9 1.5 9 .5V0h6zM15 16v-.507c0-1-.5-1.5-1.5-1.5h-3C9.5 14 9 14.5 9 15.5v.5h6z"/></svg>

View File

@ -1,3 +1 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M2.5 1C2.5 1.27579 2.72421 1.5 3 1.5H5C5.27579 1.5 5.5 1.27579 5.5 1H7C7 2.10421 6.10421 3 5 3H3C1.89579 3 1 2.10421 1 1H2.5ZM2.5 6C2.5 5.72421 2.72421 5.5 3 5.5H5C5.27579 5.5 5.5 5.72421 5.5 6V10C5.5 10.2758 5.27579 10.5 5 10.5H3C2.72421 10.5 2.5 10.2758 2.5 10V6ZM3 4C1.89579 4 1 4.89579 1 6V10C1 11.1042 1.89579 12 3 12H5C6.10421 12 7 11.1042 7 10V6C7 4.89579 6.10421 4 5 4H3ZM10 6C10 5.72421 10.2242 5.5 10.5 5.5H12.5C12.7758 5.5 13 5.72421 13 6V10C13 10.2758 12.7758 10.5 12.5 10.5H10.5C10.2242 10.5 10 10.2758 10 10V6ZM10.5 4C9.39579 4 8.5 4.89579 8.5 6V10C8.5 11.1042 9.39579 12 10.5 12H12.5C13.6042 12 14.5 11.1042 14.5 10V6C14.5 4.89579 13.6042 4 12.5 4H10.5ZM3 14.5C2.72421 14.5 2.5 14.7242 2.5 15H1C1 13.8958 1.89579 13 3 13H5C6.10421 13 7 13.8958 7 15H5.5C5.5 14.7242 5.27579 14.5 5 14.5H3ZM10 15C10 14.7242 10.2242 14.5 10.5 14.5H12.5C12.7758 14.5 13 14.7242 13 15H14.5C14.5 13.8958 13.6042 13 12.5 13H10.5C9.39579 13 8.5 13.8958 8.5 15H10ZM10.5 1.5C10.2242 1.5 10 1.27579 10 1H8.5C8.5 2.10421 9.39579 3 10.5 3H12.5C13.6042 3 14.5 2.10421 14.5 1H13C13 1.27579 12.7758 1.5 12.5 1.5H10.5Z" fill="black"/>
</svg>
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16"><path d="M5.5 4c1 0 1.5.5 1.5 1.5v5c0 1-.5 1.5-1.5 1.5h-3c-1 0-1.5-.5-1.5-1.5v-5C1 4.5 1.5 4 2.5 4zM7 0v.5C7 1.5 6.5 2 5.5 2h-3C1.5 2 1 1.5 1 .5V0h6zM7 16v-.5c0-1-.5-1.5-1.5-1.5h-3c-1 0-1.5.5-1.5 1.5v.5h6zM13.5 4c1 0 1.5.5 1.5 1.5v5c0 1-.5 1.5-1.5 1.5h-3c-1 0-1.5-.5-1.5-1.5v-5c0-1 .5-1.5 1.5-1.5zM15 0v.5c0 1-.5 1.5-1.5 1.5h-3C9.5 2 9 1.5 9 .5V0h6zM15 16v-.507c0-1-.5-1.5-1.5-1.5h-3C9.5 14 9 14.5 9 15.5v.5h6z"/></svg>

View File

@ -0,0 +1,5 @@
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16"
fill="rgba(255,255,255,1)"><path d="M12.408 8.217l-8.083-6.7A.2.2 0 0 0 4 1.672V12.3a.2.2 0 0 0 .333.146l2.56-2.372 1.857 3.9A1.125 1.125 0 1 0 10.782 13L8.913 9.075l3.4-.51a.2.2 0 0 0 .095-.348z"></path></svg>

View File

@ -1,3 +1,4 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M0.371588 2.93131C-0.203366 1.33422 1.3342 -0.20335 2.93129 0.371603L2.93263 0.372085L12.0716 3.68171C12.0718 3.68178 12.0714 3.68163 12.0716 3.68171C13.4459 4.17758 13.8478 5.9374 12.8076 6.9776L11.8079 7.97727L14.6876 10.8569C15.4705 11.6398 15.4705 12.9047 14.6876 13.6876L13.6476 14.7276C12.8647 15.5105 11.5998 15.5105 10.8169 14.7276L7.93725 11.8479L6.97758 12.8076C5.93739 13.8478 4.17779 13.4465 3.68192 12.0722C3.68184 12.072 3.682 12.0724 3.68192 12.0722L0.371588 2.93131ZM1.78292 2.42323C1.78298 2.4234 1.78286 2.42305 1.78292 2.42323L5.09281 11.5629C5.21725 11.9082 5.65728 12.0066 5.91692 11.7469L7.93725 9.72661L11.8776 13.6669C12.0747 13.864 12.3898 13.864 12.5869 13.6669L13.6269 12.6269C13.824 12.4298 13.824 12.1147 13.6269 11.9176L9.68659 7.97727L11.7469 5.91694C12.0066 5.65729 11.9081 5.21727 11.5629 5.09283L11.5619 5.09245L2.42321 1.78293C2.42304 1.78287 2.42339 1.783 2.42321 1.78293C2.02067 1.63847 1.63846 2.02069 1.78292 2.42323Z" fill="black"/>
</svg>
<!-- This Source Code Form is subject to the terms of the Mozilla Public
- License, v. 2.0. If a copy of the MPL was not distributed with this
- file, You can obtain one at http://mozilla.org/MPL/2.0/. -->
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16"><path d="M12.408 8.217l-8.083-6.7A.2.2 0 0 0 4 1.672V12.3a.2.2 0 0 0 .333.146l2.56-2.372 1.857 3.9A1.125 1.125 0 1 0 10.782 13L8.913 9.075l3.4-.51a.2.2 0 0 0 .095-.348z"></path></svg>

View File

@ -0,0 +1,2 @@
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16"
fill="rgba(255,255,255,1)"><path d="M1.5 3.5C.5 3.5 0 4 0 5v6.5c0 1 .5 1.5 1.5 1.5h4c1 0 1.5-.5 1.5-1.5V5c0-1-.5-1.5-1.5-1.5zm2 1.2c.8 0 1.4.2 1.8.6.5.4.7 1 .7 1.7 0 .5-.2 1-.5 1.4-.2.3-.5.7-1 1l-.6.4c-.4.3-.6.4-.75.56-.15.14-.25.24-.35.44H6v1.3H1c0-.6.1-1.1.3-1.5.3-.6.7-1 1.5-1.6.7-.4 1.1-.8 1.28-1 .32-.3.42-.6.42-1 0-.3-.1-.6-.23-.8-.17-.2-.37-.3-.77-.3s-.7.1-.9.5c-.04.2-.1.5-.1.9H1.1c0-.6.1-1.1.3-1.5.4-.7 1.1-1.1 2.1-1.1zM10.54 3.54C9.5 3.54 9 4 9 5v6.5c0 1 .5 1.5 1.54 1.5h4c.96 0 1.46-.5 1.46-1.5V5c0-1-.5-1.46-1.5-1.46zm1.9.95c.7 0 1.3.2 1.7.5.4.4.6.8.6 1.4 0 .4-.1.8-.4 1.1-.2.2-.3.3-.5.4.1 0 .3.1.6.3.4.3.5.8.5 1.4 0 .6-.2 1.2-.6 1.6-.4.5-1.1.7-1.9.7-1 0-1.8-.3-2.2-1-.14-.29-.24-.69-.24-1.29h1.4c0 .3 0 .5.1.7.2.4.5.5 1 .5.3 0 .5-.1.7-.3.2-.2.3-.5.3-.8 0-.5-.2-.8-.6-.95-.2-.05-.5-.15-1-.15v-1c.5 0 .8-.1 1-.14.3-.1.5-.4.5-.9 0-.3-.1-.5-.2-.7-.2-.2-.4-.3-.7-.3-.3 0-.6.1-.75.3-.2.2-.2.5-.2.86h-1.34c0-.4.1-.7.19-1.1 0-.12.2-.32.4-.62.2-.2.4-.3.7-.4.3-.1.6-.1 1-.1z"/></svg>

View File

@ -1,3 +1 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" d="M2 3.5C1.72421 3.5 1.5 3.72421 1.5 4V12.5C1.5 12.7758 1.72421 13 2 13H7.25V3.5H2ZM14 13H8.75V3.5H14C14.2758 3.5 14.5 3.72421 14.5 4V12.5C14.5 12.7758 14.2758 13 14 13ZM0 4C0 2.89579 0.895786 2 2 2H14C15.1042 2 16 2.89579 16 4V12.5C16 13.6042 15.1042 14.5 14 14.5H2C0.895786 14.5 0 13.6042 0 12.5V4ZM10 6.5H11.5V7.5H10V9H11.5V10H10V11.5H12.25C12.6642 11.5 13 11.1642 13 10.75V5.75C13 5.33579 12.6642 5 12.25 5H10V6.5ZM4.5 6.5H3V5H5.25C5.66421 5 6 5.33579 6 5.75V7.75C6 8.03408 5.8395 8.29378 5.58541 8.42082L4.5 8.96353V10H6V11.5H3.75C3.33579 11.5 3 11.1642 3 10.75V8.5C3 8.21592 3.1605 7.95622 3.41459 7.82918L4.5 7.28647V6.5Z" fill="black"/>
</svg>
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16"><path d="M1.5 3.5C.5 3.5 0 4 0 5v6.5c0 1 .5 1.5 1.5 1.5h4c1 0 1.5-.5 1.5-1.5V5c0-1-.5-1.5-1.5-1.5zm2 1.2c.8 0 1.4.2 1.8.6.5.4.7 1 .7 1.7 0 .5-.2 1-.5 1.4-.2.3-.5.7-1 1l-.6.4c-.4.3-.6.4-.75.56-.15.14-.25.24-.35.44H6v1.3H1c0-.6.1-1.1.3-1.5.3-.6.7-1 1.5-1.6.7-.4 1.1-.8 1.28-1 .32-.3.42-.6.42-1 0-.3-.1-.6-.23-.8-.17-.2-.37-.3-.77-.3s-.7.1-.9.5c-.04.2-.1.5-.1.9H1.1c0-.6.1-1.1.3-1.5.4-.7 1.1-1.1 2.1-1.1zM10.54 3.54C9.5 3.54 9 4 9 5v6.5c0 1 .5 1.5 1.54 1.5h4c.96 0 1.46-.5 1.46-1.5V5c0-1-.5-1.46-1.5-1.46zm1.9.95c.7 0 1.3.2 1.7.5.4.4.6.8.6 1.4 0 .4-.1.8-.4 1.1-.2.2-.3.3-.5.4.1 0 .3.1.6.3.4.3.5.8.5 1.4 0 .6-.2 1.2-.6 1.6-.4.5-1.1.7-1.9.7-1 0-1.8-.3-2.2-1-.14-.29-.24-.69-.24-1.29h1.4c0 .3 0 .5.1.7.2.4.5.5 1 .5.3 0 .5-.1.7-.3.2-.2.3-.5.3-.8 0-.5-.2-.8-.6-.95-.2-.05-.5-.15-1-.15v-1c.5 0 .8-.1 1-.14.3-.1.5-.4.5-.9 0-.3-.1-.5-.2-.7-.2-.2-.4-.3-.7-.3-.3 0-.6.1-.75.3-.2.2-.2.5-.2.86h-1.34c0-.4.1-.7.19-1.1 0-.12.2-.32.4-.62.2-.2.4-.3.7-.4.3-.1.6-.1 1-.1z"/></svg>

Some files were not shown because too many files have changed in this diff.