Compare commits
155 Commits
Author | SHA1 | Date |
---|---|---|
Ozzie Isaacs | 921caf6716 | |
Ozzie Isaacs | 3a603cec22 | |
Ozzie Isaacs | e591211b57 | |
Ozzie Isaacs | a305c35de4 | |
growfrow | 51d306b11d | |
mapi68 | abb418fe86 | |
Ozzie Isaacs | 0925f34557 | |
Ozzie Isaacs | 15952a764c | |
Ozzie Isaacs | fcc95bd895 | |
Ghighi Eftimie | 964e7de920 | |
Ozzie Isaacs | 14b578dd3a | |
Ozzie Isaacs | becb84a73d | |
Ozzie Isaacs | c901ccbb01 | |
Ozzie Isaacs | f987fb0aba | |
Ozzie Isaacs | c30460d76b | |
Ozzie Isaacs | 97380b4b3f | |
Ozzie Isaacs | 4fbd064b85 | |
Ozzie Isaacs | abbd9a5888 | |
Ozzie Isaacs | e860b4e097 | |
Ozzie Isaacs | 23a8a4657d | |
Ozzie Isaacs | b38a1b2298 | |
Ozzie Isaacs | 0ebfba8d05 | |
Ozzie Isaacs | 990ad8d72d | |
Ozzie Isaacs | c3fc125501 | |
Ozzie Isaacs | 3c4ed0de1a | |
Ozzie Isaacs | 117c92233d | |
Ozzie Isaacs | 2ba14acf4f | |
Ozzie Isaacs | 80a2d07009 | |
Ozzie Isaacs | ff9e1ed7c8 | |
Ozzie Isaacs | 8e5bee5352 | |
Ozzie Isaacs | d659430116 | |
Ozzie Isaacs | 859dac462b | |
Ozzie Isaacs | 2bea4dbd06 | |
Ozzie Isaacs | 0180b4b6b5 | |
Ozzie Isaacs | 2bfb02c448 | |
Ozzie Isaacs | 4864254e37 | |
Ozzie Isaacs | 09dce28a0e | |
Ozzie Isaacs | e55d09d8bb | |
Ozzie Isaacs | 92c162b2fd | |
Ozzie Isaacs | 57fb5001e2 | |
Ozzie Isaacs | 64e5314148 | |
Ozzie Isaacs | 873602a5c9 | |
Ozzie Isaacs | 09e966e18a | |
Ozzie Isaacs | f7718cae0c | |
Ozzie Isaacs | 90e728516c | |
Ozzie Isaacs | 7c04b68c88 | |
Ozzie Isaacs | 8549689a0f | |
Ozzie Isaacs | d8f5c17518 | |
mapi68 | 05367d2df5 | |
Webysther Sperandio | eb6fbfc90c | |
Ozzie Isaacs | c2267b6902 | |
Ozzie Isaacs | 0e5520a261 | |
Ozzie Isaacs | 6f5e9f167e | |
Ozzie Isaacs | ce83fb6816 | |
Ozzie Isaacs | fbfb7adef6 | |
Ozzie Isaacs | cc52ad5d27 | |
Ozzie Isaacs | 706b9c4013 | |
Ozzie Isaacs | 6972c1b841 | |
Ozzie Isaacs | b9c329535d | |
Ozzie Isaacs | 8fdf7a94ab | |
Ozzie Isaacs | 31a344b410 | |
Ozzie Isaacs | 3814fbf08f | |
Ozzie Isaacs | ffc13a5565 | |
Ozzie Isaacs | 74c61d9685 | |
Ozzie Isaacs | b8031cd53f | |
Ozzie Isaacs | 898e76fc37 | |
Ozzie Isaacs | af71a1a2ed | |
Ozzie Isaacs | e0327db08f | |
Ozzie Isaacs | bf2ac97c47 | |
Ozzie Isaacs | 902fa254b0 | |
Ozzie Isaacs | f0cc93abd3 | |
Ozzie Isaacs | 977f07364b | |
Ozzie Isaacs | 00acd745f4 | |
Ozzie Isaacs | d272f43424 | |
Whatever Cloud | 7a8d8375d0 | |
Johannes H | 3aa75ef4a7 | |
Ozzie Isaacs | 25fb8d934f | |
GONCALVES Nelson (T0025615) | f08c8faaff | |
Michiel Cornelissen | bc0ebdb78d | |
Ozzie Isaacs | 2a4b3cb7af | |
Ozzie Isaacs | 4401cf66d1 | |
Ozzie Isaacs | d353c9b6d3 | |
Ozzie Isaacs | 0aba96c032 | |
Ozzie Isaacs | c60b7e9192 | |
Ozzie Isaacs | 23033255b8 | |
Ozzie Isaacs | 31c8909dea | |
Ozzie Isaacs | 9ef89dbcc3 | |
Ozzie Isaacs | 1086296d1d | |
Ozzie Isaacs | d341faf204 | |
Ozzie Isaacs | 2334e8f9c9 | |
Ozzie Isaacs | 90ad570578 | |
Ozzie Isaacs | fd90d6e375 | |
Ozzie Isaacs | 7fbbb85f47 | |
Ozzie Isaacs | 52c7557878 | |
Ozzie Isaacs | 794cd354ca | |
Ghighi Eftimie | 389e3f09f5 | |
Ghighi Eftimie | 285979b68d | |
Ozzie Isaacs | 3a012c900e | |
Ozzie Isaacs | ec45de3212 | |
Ozzie Isaacs | f644a2a136 | |
Russell | 01108aac42 | |
Russell Troxel | 400c745692 | |
ye | 9841a4d068 | |
Ozzie Isaacs | 7fd1d10fca | |
Ozzie Isaacs | 4f6bbfa8b8 | |
Ozzie Isaacs | cf6810db87 | |
Ozzie Isaacs | 5afff2231e | |
Ozzie Isaacs | d611582b78 | |
Ozzie Isaacs | 3bbd8ee27e | |
Ozzie Isaacs | f78e0ff938 | |
Ozzie Isaacs | bd71391bfb | |
Ozzie Isaacs | 20b2936cc1 | |
Ozzie Isaacs | 19825a635a | |
Ozzie Isaacs | 0d611d35de | |
Ozzie Isaacs | effd026fe2 | |
Ozzie Isaacs | d68e57c4fc | |
Ozzie Isaacs | 184ce23351 | |
Ozzie Isaacs | 2fbc3da451 | |
Ozzie Isaacs | fad6550ff1 | |
Ozzie Isaacs | b7aaa0f24d | |
Ozzie Isaacs | 5040bb762c | |
Ozzie Isaacs | 55deca1ec8 | |
Ozzie Isaacs | 40a16f4717 | |
Ozzie Isaacs | d26e60724a | |
Ozzie Isaacs | d877fa1c68 | |
Ozzie Isaacs | d55bafdfa9 | |
Ozzie Isaacs | a2a431802a | |
Ozzie Isaacs | b2e4907165 | |
Ozzie Isaacs | 6c2e40f544 | |
Ozzie Isaacs | 5e3d0ec2ad | |
Ozzie Isaacs | c550d6c90d | |
bacpd | 3b1d0b4013 | |
Ozzie Isaacs | 3d07efbb4f | |
mapi68 | c0ae5bb381 | |
Ozzie Isaacs | 6e755a26f9 | |
Ozzie Isaacs | e32312b54a | |
Ozzie Isaacs | d7ea569e5d | |
databoy2k | b3d1558df8 | |
PhracturedBlue | 074e611705 | |
Thore Schillmann | 9bcbe523d7 | |
Thore Schillmann | e176d63ca6 | |
Thore Schillmann | 80b0e88650 | |
Thore Schillmann | 0b4731913e | |
Thore Schillmann | fc7ce8da2d | |
Thore Schillmann | c89bc12c9b | |
Thore Schillmann | 4913673e8f | |
Thore Schillmann | fc004f4f0c | |
Thore Schillmann | c5c3874243 | |
Thore Schillmann | 0d34f41a48 | |
Thore Schillmann | a77aef83c6 | |
Thore Schillmann | e39c6130c3 | |
Thore Schillmann | 03359599ed | |
Thore Schillmann | 3c4330ba51 | |
Thore Schillmann | 8c781ad4a4 | |
Thore Schillmann | 5e9ec706c5 |
@@ -6,12 +6,23 @@ labels: ''
assignees: ''

---

<!-- Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md) -->
**Describe the bug/problem**
## Short Notice from the maintainer

After 6 years of more or less intensive programming on Calibre-Web, I need a break.
The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.

Please also have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)

**Describe the bug/problem**

A clear and concise description of what the bug is. If you are asking for support, please check our [Wiki](https://github.com/janeczku/calibre-web/wiki) if your question is already answered there.

**To Reproduce**

Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'

@@ -19,15 +30,19 @@ Steps to reproduce the behavior:
4. See error

**Logfile**

Add the content of the calibre-web.log file or the relevant error; try to reproduce your problem with the "debug" log level to get more output.

**Expected behavior**

A clear and concise description of what you expected to happen.

**Screenshots**

If applicable, add screenshots to help explain your problem.

**Environment (please complete the following information):**

- OS: [e.g. Windows 10/Raspberry Pi OS]
- Python version: [e.g. python2.7]
- Calibre-Web version: [e.g. 0.6.8 or 087c4c59 (git rev-parse --short HEAD)]:

@@ -37,3 +52,4 @@ If applicable, add screenshots to help explain your problem.

**Additional context**
Add any other context about the problem here. [e.g. access via reverse proxy, database background sync, special database location]

@@ -7,7 +7,14 @@ assignees: ''

---

<!-- Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md) -->
# Short Notice from the maintainer

After 6 years of more or less intensive programming on Calibre-Web, I need a break.
The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.

Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

@@ -1,3 +1,10 @@
# Short Notice from the maintainer

After 6 years of more or less intensive programming on Calibre-Web, I need a break.
The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.

# Calibre-Web

Calibre-Web is a web app that offers a clean and intuitive interface for browsing, reading, and downloading eBooks using a valid [Calibre](https://calibre-ebook.com) database.

@@ -65,7 +72,7 @@ Calibre-Web is a web app that offers a clean and intuitive interface for browsin

*Note: Raspberry Pi OS users may encounter issues during installation. If so, please update pip (`./venv/bin/python3 -m pip install --upgrade pip`) and/or install cargo (`sudo apt install cargo`) before retrying the installation.*

Refer to the Wiki for additional installation examples: [manual installation](https://github.com/janeczku/calibre-web/wiki/Manual-installation), [Linux Mint](https://github.com/janeczku/calibre-web/wiki/How-To:Install-Calibre-Web-in-Linux-Mint-19-or-20), [Cloud Provider](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-on-a-Cloud-Provider).
Refer to the Wiki for additional installation examples: [manual installation](https://github.com/janeczku/calibre-web/wiki/Manual-installation), [Linux Mint](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-in-Linux-Mint-19-or-20), [Cloud Provider](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-on-a-Cloud-Provider).

## Quick Start

@@ -38,6 +38,13 @@ To receive fixes for security vulnerabilities it is required to always upgrade t
| V 0.6.18 | Possible SQL Injection is prevented in user table. Thanks to Iman Sharafaldin (Forward Security) | CVE-2022-30765 |
| V 0.6.18 | The SSRF protection can no longer be bypassed by IPV6/IPV4 embedding. Thanks to @416e6e61 | CVE-2022-0939 |
| V 0.6.18 | The SSRF protection can no longer be bypassed to connect to other servers in the local network. Thanks to @michaellrowley | CVE-2022-0990 |
| V 0.6.20 | Credentials for emails are now stored encrypted ||
| V 0.6.20 | Login is rate limited ||
| V 0.6.20 | Password strength can be enforced ||
| V 0.6.21 | SMTP server credentials are no longer returned to the client ||
| V 0.6.21 | Stored cross-site scripting (XSS) in href bypassing the filter via a data wrapper is no longer possible ||
| V 0.6.21 | Cross-site scripting (XSS) is no longer possible via pathchooser ||
| V 0.6.21 | Error handling for non-existent rating, language, and user-downloaded books was fixed ||

## Statement regarding Log4j (CVE-2021-44228 and related)

@@ -103,7 +103,7 @@ web_server = WebServer()
updater_thread = Updater()

if limiter_present:
    limiter = Limiter(key_func=True, headers_enabled=True, auto_check=False, swallow_errors=True)
    limiter = Limiter(key_func=True, headers_enabled=True, auto_check=False, swallow_errors=False)
else:
    limiter = None

@@ -125,13 +125,6 @@ def create_app():

    ub.password_change(cli_param.user_credentials)

    if not limiter:
        log.info('*** "flask-limiter" is needed for calibre-web to run. '
                 'Please install it using pip: "pip install flask-limiter" ***')
        print('*** "flask-limiter" is needed for calibre-web to run. '
              'Please install it using pip: "pip install flask-limiter" ***')
        web_server.stop(True)
        sys.exit(8)
    if sys.version_info < (3, 0):
        log.info(
            '*** Python2 is EOL since end of 2019, this version of Calibre-Web is no longer supporting Python2, '

@@ -141,13 +134,6 @@ def create_app():
            'please update your installation to Python3 ***')
        web_server.stop(True)
        sys.exit(5)
    if not wtf_present:
        log.info('*** "flask-WTF" is needed for calibre-web to run. '
                 'Please install it using pip: "pip install flask-WTF" ***')
        print('*** "flask-WTF" is needed for calibre-web to run. '
              'Please install it using pip: "pip install flask-WTF" ***')
        web_server.stop(True)
        sys.exit(7)

    lm.login_view = 'web.login'
    lm.anonymous_user = ub.Anonymous

@@ -158,13 +144,21 @@ def create_app():
    calibre_db.init_db()

    updater_thread.init_updater(config, web_server)
    # Perform dry run of updater and exit afterwards
    # Perform dry run of updater and exit afterward
    if cli_param.dry_run:
        updater_thread.dry_run()
        sys.exit(0)
    updater_thread.start()

    for res in dependency_check() + dependency_check(True):
    requirements = dependency_check()
    for res in requirements:
        if res['found'] == "not installed":
            message = ('Cannot import {name} module, it is needed to run calibre-web, '
                       'please install it using "pip install {name}"').format(name=res["name"])
            log.info(message)
            print("*** " + message + " ***")
            web_server.stop(True)
            sys.exit(8)
    for res in requirements + dependency_check(True):
        log.info('*** "{}" version does not meet the requirements. '
                 'Should: {}, Found: {}, please consider installing required version ***'
                 .format(res['name'],

@@ -196,8 +190,18 @@ def create_app():
                            config.config_use_goodreads)
    config.store_calibre_uuid(calibre_db, db.Library_Id)
    # Configure rate limiter
    # https://limits.readthedocs.io/en/stable/storage.html
    app.config.update(RATELIMIT_ENABLED=config.config_ratelimiter)
    limiter.init_app(app)
    if config.config_limiter_uri != "" and not cli_param.memory_backend:
        app.config.update(RATELIMIT_STORAGE_URI=config.config_limiter_uri)
        if config.config_limiter_options != "":
            app.config.update(RATELIMIT_STORAGE_OPTIONS=config.config_limiter_options)
    try:
        limiter.init_app(app)
    except Exception as e:
        log.error('Wrong Flask Limiter configuration, falling back to default: {}'.format(e))
        app.config.update(RATELIMIT_STORAGE_URI=None)
        limiter.init_app(app)

    # Register scheduled tasks
    from .schedule import register_scheduled_tasks, register_startup_tasks

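The hunk above wraps `limiter.init_app(app)` in a try/except so that a misconfigured storage URI degrades to the default in-memory backend instead of crashing startup. A minimal sketch of that fallback pattern (the function and backend names here are illustrative, not Calibre-Web's actual API):

```python
def init_with_fallback(storage_uri, init_backend):
    """Try the configured storage backend; on any failure retry with the default."""
    try:
        return init_backend(storage_uri)
    except Exception as err:
        # Mirrors the diff: report the bad configuration, then fall back
        print('Wrong Flask Limiter configuration, falling back to default: {}'.format(err))
        return init_backend(None)
```

Here `init_backend` stands in for `limiter.init_app`; any exception during the first attempt triggers a second attempt with default storage.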
@@ -49,9 +49,9 @@ sorted_modules = OrderedDict((sorted(modules.items(), key=lambda x: x[0].casefol

def collect_stats():
    if constants.NIGHTLY_VERSION[0] == "$Format:%H$":
        calibre_web_version = constants.STABLE_VERSION['version']
        calibre_web_version = constants.STABLE_VERSION['version'].replace("b", " Beta")
    else:
        calibre_web_version = (constants.STABLE_VERSION['version'] + ' - '
        calibre_web_version = (constants.STABLE_VERSION['version'].replace("b", " Beta") + ' - '
                               + constants.NIGHTLY_VERSION[0].replace('%', '%%') + ' - '
                               + constants.NIGHTLY_VERSION[1].replace('%', '%%'))

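Several hunks in this compare rewrite version strings with `.replace("b", " Beta")`, so a build string like `0.6.22b` is shown as `0.6.22 Beta`. A standalone sketch of that formatting (note the naive `replace` would also rewrite any other `b` in the string; it works here only because stable version numbers contain no other letters):

```python
def format_version(version: str) -> str:
    """Render the build process's beta marker: 'x.y.zb' -> 'x.y.z Beta'."""
    return version.replace("b", " Beta")
```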
@@ -48,6 +48,7 @@ from . import db, calibre_db, ub, web_server, config, updater_thread, gdriveutil
    kobo_sync_status, schedule
from .helper import check_valid_domain, send_test_mail, reset_password, generate_password_hash, check_email, \
    valid_email, check_username
from .embed_helper import get_calibre_binarypath
from .gdriveutils import is_gdrive_ready, gdrive_support
from .render_template import render_title_template, get_sidebar_config
from .services.worker import WorkerThread

@@ -102,10 +103,13 @@ def admin_required(f):

@admi.before_app_request
def before_request():
    if not ub.check_user_session(current_user.id,
                                 flask_session.get('_id')) and 'opds' not in request.path \
            and config.config_session == 1:
        logout_user()
    try:
        if not ub.check_user_session(current_user.id,
                                     flask_session.get('_id')) and 'opds' not in request.path \
                and config.config_session == 1:
            logout_user()
    except AttributeError:
        pass  # ? fails on requesting /ajax/emailstat during restart ?
    g.constants = constants
    g.google_site_verification = os.getenv('GOOGLE_SITE_VERIFICATION', '')
    g.allow_registration = config.config_public_reg

@@ -214,7 +218,7 @@ def admin():
        form_date += timedelta(hours=int(commit[20:22]), minutes=int(commit[23:]))
        commit = format_datetime(form_date - tz, format='short')
    else:
        commit = version['version']
        commit = version['version'].replace("b", " Beta")

    all_user = ub.session.query(ub.User).all()
    # email_settings = mail_config.get_mail_settings()

@@ -913,11 +917,15 @@ def list_restriction(res_type, user_id):

@admi.route("/ajax/fullsync", methods=["POST"])
@login_required
def ajax_fullsync():
    count = ub.session.query(ub.KoboSyncedBooks).filter(current_user.id == ub.KoboSyncedBooks.user_id).delete()
    message = _("{} sync entries deleted").format(count)
    ub.session_commit(message)
    return Response(json.dumps([{"type": "success", "message": message}]), mimetype='application/json')
def ajax_self_fullsync():
    return do_full_kobo_sync(current_user.id)


@admi.route("/ajax/fullsync/<int:userid>", methods=["POST"])
@login_required
@admin_required
def ajax_fullsync(userid):
    return do_full_kobo_sync(userid)


@admi.route("/ajax/pathchooser/")

@@ -927,6 +935,13 @@ def ajax_pathchooser():
    return pathchooser()


def do_full_kobo_sync(userid):
    count = ub.session.query(ub.KoboSyncedBooks).filter(userid == ub.KoboSyncedBooks.user_id).delete()
    message = _("{} sync entries deleted").format(count)
    ub.session_commit(message)
    return Response(json.dumps([{"type": "success", "message": message}]), mimetype='application/json')


def check_valid_read_column(column):
    if column != "0":
        if not calibre_db.session.query(db.CustomColumns).filter(db.CustomColumns.id == column) \

@@ -1725,6 +1740,9 @@ def _db_configuration_update_helper():
    calibre_db.update_config(config)
    if not os.access(os.path.join(config.config_calibre_dir, "metadata.db"), os.W_OK):
        flash(_("DB is not Writeable"), category="warning")
    _config_string(to_save, "config_calibre_split_dir")
    config.config_calibre_split = to_save.get('config_calibre_split', 0) == "on"
    calibre_db.update_config(config)
    config.save()
    return _db_configuration_result(None, gdrive_error)

@@ -1745,6 +1763,7 @@ def _configuration_update_helper():

    _config_checkbox_int(to_save, "config_uploading")
    _config_checkbox_int(to_save, "config_unicode_filename")
    _config_checkbox_int(to_save, "config_embed_metadata")
    # Reboot on config_anonbrowse with enabled ldap, as decorators are changed in this case
    reboot_required |= (_config_checkbox_int(to_save, "config_anonbrowse")
                        and config.config_login_type == constants.LOGIN_LDAP)

@@ -1761,8 +1780,14 @@ def _configuration_update_helper():
        constants.EXTENSIONS_UPLOAD = config.config_upload_formats.split(',')

    _config_string(to_save, "config_calibre")
    _config_string(to_save, "config_converterpath")
    _config_string(to_save, "config_binariesdir")
    _config_string(to_save, "config_kepubifypath")
    if "config_binariesdir" in to_save:
        calibre_status = helper.check_calibre(config.config_binariesdir)
        if calibre_status:
            return _configuration_result(calibre_status)
        to_save["config_converterpath"] = get_calibre_binarypath("ebook-convert")
        _config_string(to_save, "config_converterpath")

    reboot_required |= _config_int(to_save, "config_login_type")

@@ -1809,6 +1834,7 @@ def _configuration_update_helper():
    _config_checkbox(to_save, "config_password_number")
    _config_checkbox(to_save, "config_password_lower")
    _config_checkbox(to_save, "config_password_upper")
    _config_checkbox(to_save, "config_password_character")
    _config_checkbox(to_save, "config_password_special")
    if 0 < int(to_save.get("config_password_min_length", "0")) < 41:
        _config_int(to_save, "config_password_min_length")

@@ -1816,6 +1842,8 @@ def _configuration_update_helper():
        return _configuration_result(_('Password length has to be between 1 and 40'))
    reboot_required |= _config_int(to_save, "config_session")
    reboot_required |= _config_checkbox(to_save, "config_ratelimiter")
    reboot_required |= _config_string(to_save, "config_limiter_uri")
    reboot_required |= _config_string(to_save, "config_limiter_options")

    # Rarfile Content configuration
    _config_string(to_save, "config_rarfile_location")

@@ -29,8 +29,8 @@ from .constants import DEFAULT_SETTINGS_FILE, DEFAULT_GDRIVE_FILE

def version_info():
    if _NIGHTLY_VERSION[1].startswith('$Format'):
        return "Calibre-Web version: %s - unknown git-clone" % _STABLE_VERSION['version']
    return "Calibre-Web version: %s -%s" % (_STABLE_VERSION['version'], _NIGHTLY_VERSION[1])
        return "Calibre-Web version: %s - unknown git-clone" % _STABLE_VERSION['version'].replace("b", " Beta")
    return "Calibre-Web version: %s -%s" % (_STABLE_VERSION['version'].replace("b", " Beta"), _NIGHTLY_VERSION[1])


class CliParameter(object):

@@ -52,6 +52,7 @@ class CliParameter(object):
        parser.add_argument('-v', '--version', action='version', help='Shows version number and exits Calibre-Web',
                            version=version_info())
        parser.add_argument('-i', metavar='ip-address', help='Server IP-Address to listen')
        parser.add_argument('-m', action='store_true', help='Use Memory-backend as limiter backend, use this parameter in case of misconfigured backend')
        parser.add_argument('-s', metavar='user:pass',
                            help='Sets specific username to new password and exits Calibre-Web')
        parser.add_argument('-f', action='store_true', help='Flag is deprecated and will be removed in next version')

@@ -98,6 +99,8 @@ class CliParameter(object):
        if args.k == "":
            self.keyfilepath = ""

        # overwrite limiter backend
        self.memory_backend = args.m or None
        # dry run updater
        self.dry_run = args.d or None
        # enable reconnect endpoint for docker database reconnect

@@ -102,7 +102,7 @@ def _extract_cover_from_archive(original_file_extension, tmp_file_name, rar_exec
            extension = ext[1].lower()
            if extension in cover.COVER_EXTENSIONS:
                try:
                    cover_data = cf.read(name)[name].read()
                    cover_data = cf.read([name])[name].read()
                except (py7zr.Bad7zFile, OSError) as ex:
                    log.error('7Zip file failed with error: {}'.format(ex))
                break

@@ -34,6 +34,7 @@ except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

from . import constants, logger
from .subproc_wrapper import process_wait


log = logger.create()

@@ -69,6 +70,8 @@ class _Settings(_Base):

    config_calibre_dir = Column(String)
    config_calibre_uuid = Column(String)
    config_calibre_split = Column(Boolean, default=False)
    config_calibre_split_dir = Column(String)
    config_port = Column(Integer, default=constants.DEFAULT_PORT)
    config_external_port = Column(Integer, default=constants.DEFAULT_PORT)
    config_certfile = Column(String)

@@ -138,10 +141,12 @@ class _Settings(_Base):

    config_kepubifypath = Column(String, default=None)
    config_converterpath = Column(String, default=None)
    config_binariesdir = Column(String, default=None)
    config_calibre = Column(String)
    config_rarfile_location = Column(String, default=None)
    config_upload_formats = Column(String, default=','.join(constants.EXTENSIONS_UPLOAD))
    config_unicode_filename = Column(Boolean, default=False)
    config_embed_metadata = Column(Boolean, default=True)

    config_updatechannel = Column(Integer, default=constants.UPDATE_STABLE)

@@ -160,9 +165,12 @@ class _Settings(_Base):
    config_password_number = Column(Boolean, default=True)
    config_password_lower = Column(Boolean, default=True)
    config_password_upper = Column(Boolean, default=True)
    config_password_character = Column(Boolean, default=True)
    config_password_special = Column(Boolean, default=True)
    config_session = Column(Integer, default=1)
    config_ratelimiter = Column(Boolean, default=True)
    config_limiter_uri = Column(String, default="")
    config_limiter_options = Column(String, default="")

    def __repr__(self):
        return self.__class__.__name__

@@ -184,9 +192,11 @@ class ConfigSQL(object):
        self.load()

        change = False
        if self.config_converterpath == None:  # pylint: disable=access-member-before-definition
        if self.config_binariesdir == None:  # pylint: disable=access-member-before-definition
            change = True
            self.config_converterpath = autodetect_calibre_binary()
            self.config_binariesdir = autodetect_calibre_binaries()
            self.config_converterpath = autodetect_converter_binary(self.config_binariesdir)

        if self.config_kepubifypath == None:  # pylint: disable=access-member-before-definition
            change = True

@@ -389,6 +399,9 @@ class ConfigSQL(object):
        self.db_configured = False
        self.save()

    def get_book_path(self):
        return self.config_calibre_split_dir if self.config_calibre_split_dir else self.config_calibre_dir

    def store_calibre_uuid(self, calibre_db, Library_table):
        try:
            calibre_uuid = calibre_db.session.query(Library_table).one_or_none()

@@ -469,17 +482,33 @@ def _migrate_table(session, orm_class, secret_key=None):
        session.rollback()


def autodetect_calibre_binary():
def autodetect_calibre_binaries():
    if sys.platform == "win32":
        calibre_path = ["C:\\program files\\calibre\\ebook-convert.exe",
                        "C:\\program files(x86)\\calibre\\ebook-convert.exe",
                        "C:\\program files(x86)\\calibre2\\ebook-convert.exe",
                        "C:\\program files\\calibre2\\ebook-convert.exe"]
        calibre_path = ["C:\\program files\\calibre\\",
                        "C:\\program files(x86)\\calibre\\",
                        "C:\\program files(x86)\\calibre2\\",
                        "C:\\program files\\calibre2\\"]
    else:
        calibre_path = ["/opt/calibre/ebook-convert"]
        calibre_path = ["/opt/calibre/"]
    for element in calibre_path:
        if os.path.isfile(element) and os.access(element, os.X_OK):
            return element
        supported_binary_paths = [os.path.join(element, binary) for binary in constants.SUPPORTED_CALIBRE_BINARIES.values()]
        if all(os.path.isfile(binary_path) and os.access(binary_path, os.X_OK) for binary_path in supported_binary_paths):
            values = [process_wait([binary_path, "--version"],
                                   pattern=r'\(calibre (.*)\)') for binary_path in supported_binary_paths]
            if all(values):
                version = values[0].group(1)
                log.debug("calibre version %s", version)
                return element
    return ""


def autodetect_converter_binary(calibre_path):
    if sys.platform == "win32":
        converter_path = os.path.join(calibre_path, "ebook-convert.exe")
    else:
        converter_path = os.path.join(calibre_path, "ebook-convert")
    if calibre_path and os.path.isfile(converter_path) and os.access(converter_path, os.X_OK):
        return converter_path
    return ""

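The rewritten autodetection above only accepts a Calibre directory when every supported binary in it exists and is executable. A self-contained sketch of that check (the binary names are assumed from the diff's `SUPPORTED_CALIBRE_BINARIES`, without the Windows `.exe` suffix):

```python
import os

SUPPORTED_BINARIES = ["ebook-convert", "calibredb"]  # assumed names on POSIX

def dir_has_all_binaries(path):
    """Return True only if every supported binary in path is an executable file."""
    paths = [os.path.join(path, name) for name in SUPPORTED_BINARIES]
    return all(os.path.isfile(p) and os.access(p, os.X_OK) for p in paths)
```

The real function additionally runs each binary with `--version` and only returns the directory when all version probes succeed.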
@@ -34,6 +34,8 @@ UPDATER_AVAILABLE = True

# Base dir is parent of current file, necessary if called from different folder
BASE_DIR = os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), os.pardir))
# For a frozen executable, the files should be placed in the parent dir (parallel to the exe file)

STATIC_DIR = os.path.join(BASE_DIR, 'cps', 'static')
TEMPLATES_DIR = os.path.join(BASE_DIR, 'cps', 'templates')
TRANSLATIONS_DIR = os.path.join(BASE_DIR, 'cps', 'translations')

@@ -49,6 +51,9 @@ if HOME_CONFIG:
    CONFIG_DIR = os.environ.get('CALIBRE_DBPATH', home_dir)
else:
    CONFIG_DIR = os.environ.get('CALIBRE_DBPATH', BASE_DIR)
    if getattr(sys, 'frozen', False):
        CONFIG_DIR = os.path.abspath(os.path.join(CONFIG_DIR, os.pardir))


DEFAULT_SETTINGS_FILE = "app.db"
DEFAULT_GDRIVE_FILE = "gdrive.db"

@@ -144,13 +149,18 @@ del env_CALIBRE_PORT

EXTENSIONS_AUDIO = {'mp3', 'mp4', 'ogg', 'opus', 'wav', 'flac', 'm4a', 'm4b'}
EXTENSIONS_CONVERT_FROM = ['pdf', 'epub', 'mobi', 'azw3', 'docx', 'rtf', 'fb2', 'lit', 'lrf',
                           'txt', 'htmlz', 'rtf', 'odt', 'cbz', 'cbr']
                           'txt', 'htmlz', 'rtf', 'odt', 'cbz', 'cbr', 'prc']
EXTENSIONS_CONVERT_TO = ['pdf', 'epub', 'mobi', 'azw3', 'docx', 'rtf', 'fb2',
                         'lit', 'lrf', 'txt', 'htmlz', 'rtf', 'odt']
EXTENSIONS_UPLOAD = {'txt', 'pdf', 'epub', 'kepub', 'mobi', 'azw', 'azw3', 'cbr', 'cbz', 'cbt', 'cb7', 'djvu', 'djv',
                     'prc', 'doc', 'docx', 'fb2', 'html', 'rtf', 'lit', 'odt', 'mp3', 'mp4', 'ogg',
                     'opus', 'wav', 'flac', 'm4a', 'm4b'}

_extension = ""
if sys.platform == "win32":
    _extension = ".exe"
SUPPORTED_CALIBRE_BINARIES = {binary: binary + _extension for binary in ["ebook-convert", "calibredb"]}


def has_flag(value, bit_flag):
    return bit_flag == (bit_flag & (value or 0))

@@ -163,13 +173,12 @@ def selected_roles(dictionary):
BookMeta = namedtuple('BookMeta', 'file_path, extension, title, author, cover, description, tags, series, '
                      'series_id, languages, publisher, pubdate, identifiers')

STABLE_VERSION = {'version': '0.6.21'}
# python build process likes to have x.y.zbw -> b for beta and w a counting number
STABLE_VERSION = {'version': '0.6.22b'}

NIGHTLY_VERSION = dict()
NIGHTLY_VERSION[0] = '$Format:%H$'
NIGHTLY_VERSION[1] = '$Format:%cI$'
# NIGHTLY_VERSION[0] = 'bb7d2c6273ae4560e83950d36d64533343623a57'
# NIGHTLY_VERSION[1] = '2018-09-09T10:13:08+02:00'

# CACHE
CACHE_TYPE_THUMBNAILS = 'thumbnails'

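The new `SUPPORTED_CALIBRE_BINARIES` constant appends the platform executable suffix in a single dict comprehension. The same idea as a small standalone helper (the parameterized form is illustrative; the diff computes the suffix once at import time):

```python
import sys

def supported_binaries(names, platform=sys.platform):
    """Map each binary name to its platform-specific file name (.exe on Windows)."""
    extension = ".exe" if platform == "win32" else ""
    return {name: name + extension for name in names}
```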
@@ -839,8 +839,7 @@ class CalibreDB:
        entries = list()
        pagination = list()
        try:
            pagination = Pagination(page, pagesize,
                                    len(query.all()))
            pagination = Pagination(page, pagesize, query.count())
            entries = query.order_by(*order).offset(off).limit(pagesize).all()
        except Exception as ex:
            log.error_or_exception(ex)

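The change above replaces `len(query.all())`, which materializes every row just to count them, with `query.count()`, which lets the database do the counting. The pagination arithmetic itself is simple; a sketch under that assumption (the helper name is illustrative, not the `Pagination` class from the diff):

```python
import math

def page_window(page, pagesize, total):
    """Offset and limit for a 1-based page number, plus the total page count."""
    pages = max(1, math.ceil(total / pagesize))
    offset = (page - 1) * pagesize
    return offset, pagesize, pages
```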
@@ -61,7 +61,7 @@ def dependency_check(optional=False):
     deps = load_dependencies(optional)
     for dep in deps:
         try:
-            dep_version_int = [int(x) for x in dep[0].split('.')]
+            dep_version_int = [int(x) if x.isnumeric() else 0 for x in dep[0].split('.')]
             low_check = [int(x) for x in dep[3].split('.')]
             high_check = [int(x) for x in dep[5].split('.')]
         except AttributeError:

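The tolerant parser above maps non-numeric version segments (pre-release tags and similar) to 0 instead of raising `ValueError`. A quick sketch of the behavior:

```python
def version_tuple(version):
    # Non-numeric segments such as 'rc1' or 'b2' are treated as 0
    return [int(x) if x.isnumeric() else 0 for x in version.split('.')]

print(version_tuple("2.0.3"))    # [2, 0, 3]
print(version_tuple("2.0.rc1"))  # [2, 0, 0]
```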
@@ -27,11 +27,22 @@ from shutil import copyfile
 from uuid import uuid4
 from markupsafe import escape, Markup  # dependency of flask
 from functools import wraps
+from lxml.etree import ParserError

 try:
-    from lxml.html.clean import clean_html, Cleaner
+    # at least bleach 6.0 is needed -> incompatible change from list arguments to set arguments
+    from bleach import clean_text as clean_html
+    BLEACH = True
 except ImportError:
-    clean_html = None
+    try:
+        BLEACH = False
+        from nh3 import clean as clean_html
+    except ImportError:
+        try:
+            BLEACH = False
+            from lxml.html.clean import clean_html
+        except ImportError:
+            clean_html = None

 from flask import Blueprint, request, flash, redirect, url_for, abort, Response
 from flask_babel import gettext as _

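The bleach → nh3 → lxml cascade above is the standard optional-dependency pattern: try the preferred backend first, fall back in order of preference, and record which one won so callers can adapt their arguments. A generic stdlib sketch of the same shape (the module names here are examples, not the sanitizers used above):

```python
# Prefer a fast third-party JSON backend if present, fall back to the stdlib.
try:
    import ujson as json_backend  # optional dependency; may be absent
    FAST_JSON = True
except ImportError:
    FAST_JSON = False
    import json as json_backend

# Callers use json_backend uniformly and can branch on FAST_JSON if the
# backends differ in accepted keyword arguments (as bleach vs. lxml do above).
print(FAST_JSON, json_backend.loads('{"a": 1}'))
```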
@@ -49,6 +60,7 @@ from .tasks.upload import TaskUpload
 from .render_template import render_title_template
 from .usermanagement import login_required_if_no_ano
 from .kobo_sync_status import change_archived_books
+from .redirect import get_redirect_location


 editbook = Blueprint('edit-book', __name__)

@@ -85,7 +97,7 @@ def delete_book_from_details(book_id):
 @editbook.route("/delete/<int:book_id>/<string:book_format>", methods=["POST"])
 @login_required
 def delete_book_ajax(book_id, book_format):
-    return delete_book_from_table(book_id, book_format, False)
+    return delete_book_from_table(book_id, book_format, False, request.form.to_dict().get('location', ""))


 @editbook.route("/admin/book/<int:book_id>", methods=['GET'])

@@ -126,7 +138,7 @@ def edit_book(book_id):
             edited_books_id = book.id
             modify_date = True
             title_author_error = helper.update_dir_structure(edited_books_id,
-                                                             config.config_calibre_dir,
+                                                             config.get_book_path(),
                                                              input_authors[0],
                                                              renamed_author=renamed)
             if title_author_error:

@@ -271,7 +283,7 @@ def upload():
                                                 meta.extension.lower())
         else:
             error = helper.update_dir_structure(book_id,
-                                                config.config_calibre_dir,
+                                                config.get_book_path(),
                                                 input_authors[0],
                                                 meta.file_path,
                                                 title_dir + meta.extension.lower(),

@@ -321,7 +333,7 @@ def convert_bookformat(book_id):
         return redirect(url_for('edit-book.show_edit_book', book_id=book_id))

     log.info('converting: book id: %s from: %s to: %s', book_id, book_format_from, book_format_to)
-    rtn = helper.convert_book_format(book_id, config.config_calibre_dir, book_format_from.upper(),
+    rtn = helper.convert_book_format(book_id, config.get_book_path(), book_format_from.upper(),
                                      book_format_to.upper(), current_user.name)

     if rtn is None:

@@ -391,7 +403,7 @@ def edit_list_book(param):
     elif param == 'title':
         sort_param = book.sort
         if handle_title_on_edit(book, vals.get('value', "")):
-            rename_error = helper.update_dir_structure(book.id, config.config_calibre_dir)
+            rename_error = helper.update_dir_structure(book.id, config.get_book_path())
             if not rename_error:
                 ret = Response(json.dumps({'success': True, 'newValue': book.title}),
                                mimetype='application/json')

@@ -409,7 +421,7 @@ def edit_list_book(param):
                        mimetype='application/json')
     elif param == 'authors':
         input_authors, __, renamed = handle_author_on_edit(book, vals['value'], vals.get('checkA', None) == "true")
-        rename_error = helper.update_dir_structure(book.id, config.config_calibre_dir, input_authors[0],
+        rename_error = helper.update_dir_structure(book.id, config.get_book_path(), input_authors[0],
                                                    renamed_author=renamed)
         if not rename_error:
             ret = Response(json.dumps({

@@ -513,10 +525,10 @@ def merge_list_book():
             for element in from_book.data:
                 if element.format not in to_file:
                     # create new data entry with: book_id, book_format, uncompressed_size, name
-                    filepath_new = os.path.normpath(os.path.join(config.config_calibre_dir,
+                    filepath_new = os.path.normpath(os.path.join(config.get_book_path(),
                                                                  to_book.path,
                                                                  to_name + "." + element.format.lower()))
-                    filepath_old = os.path.normpath(os.path.join(config.config_calibre_dir,
+                    filepath_old = os.path.normpath(os.path.join(config.get_book_path(),
                                                                  from_book.path,
                                                                  element.name + "." + element.format.lower()))
                     copyfile(filepath_old, filepath_new)

@@ -556,7 +568,7 @@ def table_xchange_author_title():

         if edited_books_id:
             # toDo: Handle error
-            edit_error = helper.update_dir_structure(edited_books_id, config.config_calibre_dir, input_authors[0],
+            edit_error = helper.update_dir_structure(edited_books_id, config.get_book_path(), input_authors[0],
                                                      renamed_author=renamed)
         if modify_date:
             book.last_modified = datetime.utcnow()

@@ -753,7 +765,7 @@ def move_coverfile(meta, db_book):
         cover_file = meta.cover
     else:
         cover_file = os.path.join(constants.STATIC_DIR, 'generic_cover.jpg')
-    new_cover_path = os.path.join(config.config_calibre_dir, db_book.path)
+    new_cover_path = os.path.join(config.get_book_path(), db_book.path)
     try:
         os.makedirs(new_cover_path, exist_ok=True)
         copyfile(cover_file, os.path.join(new_cover_path, "cover.jpg"))

@@ -812,7 +824,7 @@ def delete_whole_book(book_id, book):
     calibre_db.session.query(db.Books).filter(db.Books.id == book_id).delete()


-def render_delete_book_result(book_format, json_response, warning, book_id):
+def render_delete_book_result(book_format, json_response, warning, book_id, location=""):
     if book_format:
         if json_response:
             return json.dumps([warning, {"location": url_for("edit-book.show_edit_book", book_id=book_id),

@@ -824,22 +836,22 @@ def render_delete_book_result(book_format, json_response, warning, book_id):
             return redirect(url_for('edit-book.show_edit_book', book_id=book_id))
     else:
         if json_response:
-            return json.dumps([warning, {"location": url_for('web.index'),
+            return json.dumps([warning, {"location": get_redirect_location(location, "web.index"),
                                          "type": "success",
                                          "format": book_format,
                                          "message": _('Book Successfully Deleted')}])
         else:
             flash(_('Book Successfully Deleted'), category="success")
-            return redirect(url_for('web.index'))
+            return redirect(get_redirect_location(location, "web.index"))


-def delete_book_from_table(book_id, book_format, json_response):
+def delete_book_from_table(book_id, book_format, json_response, location=""):
     warning = {}
     if current_user.role_delete_books():
         book = calibre_db.get_book(book_id)
         if book:
             try:
-                result, error = helper.delete_book(book, config.config_calibre_dir, book_format=book_format.upper())
+                result, error = helper.delete_book(book, config.get_book_path(), book_format=book_format.upper())
                 if not result:
                     if json_response:
                         return json.dumps([{"location": url_for("edit-book.show_edit_book", book_id=book_id),

@@ -880,7 +892,7 @@ def delete_book_from_table(book_id, book_format, json_response):
         else:
             # book not found
             log.error('Book with id "%s" could not be deleted: not found', book_id)
-        return render_delete_book_result(book_format, json_response, warning, book_id)
+        return render_delete_book_result(book_format, json_response, warning, book_id, location)
     message = _("You are missing permissions to delete books")
     if json_response:
         return json.dumps({"location": url_for("edit-book.show_edit_book", book_id=book_id),

@@ -992,7 +1004,17 @@ def edit_book_series_index(series_index, book):
 def edit_book_comments(comments, book):
     modify_date = False
     if comments:
-        comments = clean_html(comments)
+        try:
+            if BLEACH:
+                comments = clean_html(comments, tags=set(), attributes=set())
+            else:
+                comments = clean_html(comments)
+        except ParserError as e:
+            log.error("Comments of book {} are corrupted: {}".format(book.id, e))
+            comments = ""
+        except TypeError as e:
+            log.error("Comments can't be parsed, maybe 'lxml' is too new, try installing 'bleach': {}".format(e))
+            comments = ""
     if len(book.comments):
         if book.comments[0].text != comments:
             book.comments[0].text = comments

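With bleach, `clean_html(comments, tags=set(), attributes=set())` strips every tag and keeps only the text content. A rough stdlib approximation of that behavior using `html.parser` (a sketch only, not the bleach algorithm — bleach escapes disallowed markup rather than silently dropping malformed input):

```python
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    """Collect only character data, dropping every tag."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def strip_tags(html_text):
    parser = TextOnly()
    parser.feed(html_text)
    return "".join(parser.chunks)

print(strip_tags("<p>Great <b>book</b>!</p>"))  # Great book!
```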
@@ -1050,7 +1072,18 @@ def edit_cc_data_value(book_id, book, c, to_save, cc_db_value, cc_string):
     elif c.datatype == 'comments':
         to_save[cc_string] = Markup(to_save[cc_string]).unescape()
         if to_save[cc_string]:
-            to_save[cc_string] = clean_html(to_save[cc_string])
+            try:
+                if BLEACH:
+                    to_save[cc_string] = clean_html(to_save[cc_string], tags=set(), attributes=set())
+                else:
+                    to_save[cc_string] = clean_html(to_save[cc_string])
+            except ParserError as e:
+                log.error("Custom comments of book {} are corrupted: {}".format(book_id, e))
+                to_save[cc_string] = ""
+            except TypeError as e:
+                to_save[cc_string] = ""
+                log.error("Custom comments can't be parsed, maybe 'lxml' is too new, "
+                          "try installing 'bleach': {}".format(e))
     elif c.datatype == 'datetime':
         try:
             to_save[cc_string] = datetime.strptime(to_save[cc_string], "%Y-%m-%d")

@@ -1172,7 +1205,7 @@ def upload_single_file(file_request, book, book_id):
             return False

         file_name = book.path.rsplit('/', 1)[-1]
-        filepath = os.path.normpath(os.path.join(config.config_calibre_dir, book.path))
+        filepath = os.path.normpath(os.path.join(config.get_book_path(), book.path))
         saved_filename = os.path.join(filepath, file_name + '.' + file_ext)

         # check if file path exists, otherwise create it, copy file to calibre path and delete temp file

@@ -0,0 +1,63 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2024 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from uuid import uuid4
import os

from .file_helper import get_temp_dir
from .subproc_wrapper import process_open
from . import logger, config
from .constants import SUPPORTED_CALIBRE_BINARIES

log = logger.create()


def do_calibre_export(book_id, book_format):
    try:
        quotes = [3, 5, 7, 9]
        tmp_dir = get_temp_dir()
        calibredb_binarypath = get_calibre_binarypath("calibredb")
        temp_file_name = str(uuid4())
        my_env = os.environ.copy()
        if config.config_calibre_split:
            my_env['CALIBRE_OVERRIDE_DATABASE_PATH'] = os.path.join(config.config_calibre_dir, "metadata.db")
            library_path = config.config_calibre_split_dir
        else:
            library_path = config.config_calibre_dir
        opf_command = [calibredb_binarypath, 'export', '--dont-write-opf', '--with-library', library_path,
                       '--to-dir', tmp_dir, '--formats', book_format, "--template", "{}".format(temp_file_name),
                       str(book_id)]
        p = process_open(opf_command, quotes, my_env)
        _, err = p.communicate()
        if err:
            log.error('Metadata embedder encountered an error: %s', err)
        return tmp_dir, temp_file_name
    except OSError as ex:
        # ToDo real error handling
        log.error_or_exception(ex)
        return None, None


def get_calibre_binarypath(binary):
    binariesdir = config.config_binariesdir
    if binariesdir:
        try:
            return os.path.join(binariesdir, SUPPORTED_CALIBRE_BINARIES[binary])
        except KeyError:
            # log the requested name; indexing the dict again here would re-raise KeyError
            log.error("Binary not supported by Calibre-Web: %s", binary)
    return ""

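`do_calibre_export` above shells out to `calibredb export` with `--template` set to a throwaway UUID so the exported file gets a predictable name in the temp directory. Building the command and environment can be sketched without spawning the process (the function name and paths below are placeholders for illustration):

```python
import os
from uuid import uuid4

def build_export_command(calibredb_binarypath, library_path, tmp_dir, book_id, book_format,
                         split_library_db=None):
    """Return (argv, env, temp_file_name) for a calibredb export call; executes nothing."""
    temp_file_name = str(uuid4())
    env = os.environ.copy()
    if split_library_db:
        # Split-library setups point calibredb at the metadata.db separately
        env['CALIBRE_OVERRIDE_DATABASE_PATH'] = split_library_db
    argv = [calibredb_binarypath, 'export', '--dont-write-opf',
            '--with-library', library_path, '--to-dir', tmp_dir,
            '--formats', book_format, '--template', temp_file_name, str(book_id)]
    return argv, env, temp_file_name

argv, env, name = build_export_command('/usr/bin/calibredb', '/books', '/tmp/calibre_web', 42, 'epub')
print(argv[:2])  # ['/usr/bin/calibredb', 'export']
```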
cps/epub.py
@@ -23,10 +23,12 @@ from lxml import etree
 from . import isoLanguages, cover
 from . import config, logger
 from .helper import split_authors
 from .epub_helper import get_content_opf, default_ns
 from .constants import BookMeta

 log = logger.create()


 def _extract_cover(zip_file, cover_file, cover_path, tmp_file_name):
     if cover_file is None:
         return None

@@ -44,24 +46,15 @@ def _extract_cover(zip_file, cover_file, cover_path, tmp_file_name):
     return cover.cover_processing(tmp_file_name, cf, extension)


 def get_epub_layout(book, book_data):
-    ns = {
-        'n': 'urn:oasis:names:tc:opendocument:xmlns:container',
-        'pkg': 'http://www.idpf.org/2007/opf',
-    }
-    file_path = os.path.normpath(os.path.join(config.config_calibre_dir, book.path, book_data.name + "." + book_data.format.lower()))
+    file_path = os.path.normpath(os.path.join(config.get_book_path(),
+                                              book.path, book_data.name + "." + book_data.format.lower()))

     try:
-        epubZip = zipfile.ZipFile(file_path)
-        txt = epubZip.read('META-INF/container.xml')
-        tree = etree.fromstring(txt)
-        cfname = tree.xpath('n:rootfiles/n:rootfile/@full-path', namespaces=ns)[0]
-        cf = epubZip.read(cfname)
-        tree = etree.fromstring(cf)
-        p = tree.xpath('/pkg:package/pkg:metadata', namespaces=ns)[0]
-        layout = p.xpath('pkg:meta[@property="rendition:layout"]/text()', namespaces=ns)
-    except (etree.XMLSyntaxError, KeyError, IndexError) as e:
+        tree, __ = get_content_opf(file_path, default_ns)
+        p = tree.xpath('/pkg:package/pkg:metadata', namespaces=default_ns)[0]
+        layout = p.xpath('pkg:meta[@property="rendition:layout"]/text()', namespaces=default_ns)
+    except (etree.XMLSyntaxError, KeyError, IndexError, OSError) as e:
         log.error("Could not parse epub metadata of book {} during kobo sync: {}".format(book.id, e))
         layout = []

@@ -78,13 +71,7 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
         'dc': 'http://purl.org/dc/elements/1.1/'
     }

-    epub_zip = zipfile.ZipFile(tmp_file_path)
-
-    txt = epub_zip.read('META-INF/container.xml')
-    tree = etree.fromstring(txt)
-    cf_name = tree.xpath('n:rootfiles/n:rootfile/@full-path', namespaces=ns)[0]
-    cf = epub_zip.read(cf_name)
-    tree = etree.fromstring(cf)
+    tree, cf_name = get_content_opf(tmp_file_path, ns)

     cover_path = os.path.dirname(cf_name)

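`get_content_opf` centralizes the container.xml → content.opf lookup that both call sites previously inlined. The same walk can be sketched with the stdlib alone, building a toy epub in memory (real epubs are parsed with lxml as above; this is an illustrative approximation using `xml.etree`):

```python
import io
import zipfile
import xml.etree.ElementTree as ET

NS = {'n': 'urn:oasis:names:tc:opendocument:xmlns:container'}
CONTAINER = (
    '<?xml version="1.0"?>'
    '<container xmlns="urn:oasis:names:tc:opendocument:xmlns:container">'
    '<rootfiles><rootfile full-path="OEBPS/content.opf" '
    'media-type="application/oebps-package+xml"/></rootfiles></container>'
)

def opf_path(epub_bytes):
    """Return the content.opf path named in META-INF/container.xml."""
    with zipfile.ZipFile(io.BytesIO(epub_bytes)) as zf:
        root = ET.fromstring(zf.read('META-INF/container.xml'))
        return root.find('n:rootfiles/n:rootfile', NS).attrib['full-path']

# Build a minimal in-memory "epub" containing only the container manifest
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('META-INF/container.xml', CONTAINER)
print(opf_path(buf.getvalue()))  # OEBPS/content.opf
```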
@@ -102,7 +89,7 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
             elif s == 'date':
                 epub_metadata[s] = tmp[0][:10]
             else:
-                epub_metadata[s] = tmp[0]
+                epub_metadata[s] = tmp[0].strip()
         else:
             epub_metadata[s] = 'Unknown'

@@ -127,6 +114,7 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):

     epub_metadata = parse_epub_series(ns, tree, epub_metadata)

+    epub_zip = zipfile.ZipFile(tmp_file_path)
     cover_file = parse_epub_cover(ns, tree, epub_zip, cover_path, tmp_file_path)

     identifiers = []

@@ -0,0 +1,166 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018 lemmsh, Kennyl, Kyosfonica, matthazinski
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import zipfile
from lxml import etree

from . import isoLanguages

default_ns = {
    'n': 'urn:oasis:names:tc:opendocument:xmlns:container',
    'pkg': 'http://www.idpf.org/2007/opf',
}

OPF_NAMESPACE = "http://www.idpf.org/2007/opf"
PURL_NAMESPACE = "http://purl.org/dc/elements/1.1/"

OPF = "{%s}" % OPF_NAMESPACE
PURL = "{%s}" % PURL_NAMESPACE

etree.register_namespace("opf", OPF_NAMESPACE)
etree.register_namespace("dc", PURL_NAMESPACE)

OPF_NS = {None: OPF_NAMESPACE}  # the default namespace (no prefix)
NSMAP = {'dc': PURL_NAMESPACE, 'opf': OPF_NAMESPACE}


def updateEpub(src, dest, filename, data):
    # create a temp copy of the archive without filename
    with zipfile.ZipFile(src, 'r') as zin:
        with zipfile.ZipFile(dest, 'w') as zout:
            zout.comment = zin.comment  # preserve the comment
            for item in zin.infolist():
                if item.filename != filename:
                    zout.writestr(item, zin.read(item.filename))

    # now add filename with its new data
    with zipfile.ZipFile(dest, mode='a', compression=zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(filename, data)


def get_content_opf(file_path, ns=default_ns):
    epubZip = zipfile.ZipFile(file_path)
    txt = epubZip.read('META-INF/container.xml')
    tree = etree.fromstring(txt)
    cf_name = tree.xpath('n:rootfiles/n:rootfile/@full-path', namespaces=ns)[0]
    cf = epubZip.read(cf_name)

    return etree.fromstring(cf), cf_name


def create_new_metadata_backup(book, custom_columns, export_language, translated_cover_name, lang_type=3):
    # generate root package element
    package = etree.Element(OPF + "package", nsmap=OPF_NS)
    package.set("unique-identifier", "uuid_id")
    package.set("version", "2.0")

    # generate metadata element and all sub elements of it
    metadata = etree.SubElement(package, "metadata", nsmap=NSMAP)
    identifier = etree.SubElement(metadata, PURL + "identifier", id="calibre_id", nsmap=NSMAP)
    identifier.set(OPF + "scheme", "calibre")
    identifier.text = str(book.id)
    identifier2 = etree.SubElement(metadata, PURL + "identifier", id="uuid_id", nsmap=NSMAP)
    identifier2.set(OPF + "scheme", "uuid")
    identifier2.text = book.uuid
    for i in book.identifiers:
        identifier = etree.SubElement(metadata, PURL + "identifier", nsmap=NSMAP)
        identifier.set(OPF + "scheme", i.format_type())
        identifier.text = str(i.val)
    title = etree.SubElement(metadata, PURL + "title", nsmap=NSMAP)
    title.text = book.title
    for author in book.authors:
        creator = etree.SubElement(metadata, PURL + "creator", nsmap=NSMAP)
        creator.text = str(author.name)
        creator.set(OPF + "file-as", book.author_sort)  # ToDo Check
        creator.set(OPF + "role", "aut")
    contributor = etree.SubElement(metadata, PURL + "contributor", nsmap=NSMAP)
    contributor.text = "calibre (5.7.2) [https://calibre-ebook.com]"
    contributor.set(OPF + "file-as", "calibre")  # ToDo Check
    contributor.set(OPF + "role", "bkp")

    date = etree.SubElement(metadata, PURL + "date", nsmap=NSMAP)
    date.text = '{d.year:04}-{d.month:02}-{d.day:02}T{d.hour:02}:{d.minute:02}:{d.second:02}'.format(d=book.pubdate)
    if book.comments and book.comments[0].text:
        for b in book.comments:
            description = etree.SubElement(metadata, PURL + "description", nsmap=NSMAP)
            description.text = b.text
    for b in book.publishers:
        publisher = etree.SubElement(metadata, PURL + "publisher", nsmap=NSMAP)
        publisher.text = str(b.name)
    if not book.languages:
        language = etree.SubElement(metadata, PURL + "language", nsmap=NSMAP)
        language.text = export_language
    else:
        for b in book.languages:
            language = etree.SubElement(metadata, PURL + "language", nsmap=NSMAP)
            language.text = str(b.lang_code) if lang_type == 3 else isoLanguages.get(part3=b.lang_code).part1
    for b in book.tags:
        subject = etree.SubElement(metadata, PURL + "subject", nsmap=NSMAP)
        subject.text = str(b.name)
    etree.SubElement(metadata, "meta", name="calibre:author_link_map",
                     content="{" + ", ".join(['"' + str(a.name) + '": ""' for a in book.authors]) + "}",
                     nsmap=NSMAP)
    for b in book.series:
        etree.SubElement(metadata, "meta", name="calibre:series",
                         content=str(b.name),
                         nsmap=NSMAP)
    if book.series:
        etree.SubElement(metadata, "meta", name="calibre:series_index",
                         content=str(book.series_index),
                         nsmap=NSMAP)
    if len(book.ratings) and book.ratings[0].rating > 0:
        etree.SubElement(metadata, "meta", name="calibre:rating",
                         content=str(book.ratings[0].rating),
                         nsmap=NSMAP)
    etree.SubElement(metadata, "meta", name="calibre:timestamp",
                     content='{d.year:04}-{d.month:02}-{d.day:02}T{d.hour:02}:{d.minute:02}:{d.second:02}'.format(
                         d=book.timestamp),
                     nsmap=NSMAP)
    etree.SubElement(metadata, "meta", name="calibre:title_sort",
                     content=book.sort,
                     nsmap=NSMAP)
    sequence = 0
    for cc in custom_columns:
        value = None
        extra = None
        cc_entry = getattr(book, "custom_column_" + str(cc.id))
        if len(cc_entry):
            value = [c.value for c in cc_entry] if cc.is_multiple else cc_entry[0].value
            extra = cc_entry[0].extra if hasattr(cc_entry[0], "extra") else None
        etree.SubElement(metadata, "meta", name="calibre:user_metadata:#{}".format(cc.label),
                         content=cc.to_json(value, extra, sequence),
                         nsmap=NSMAP)
        sequence += 1

    # generate guide element and all sub elements of it
    # Title is translated from default export language
    guide = etree.SubElement(package, "guide")
    etree.SubElement(guide, "reference", type="cover", title=translated_cover_name, href="cover.jpg")

    return package


def replace_metadata(tree, package):
    rep_element = tree.xpath('/pkg:package/pkg:metadata', namespaces=default_ns)[0]
    new_element = package.xpath('//metadata', namespaces=default_ns)[0]
    tree.replace(rep_element, new_element)
    return etree.tostring(tree,
                          xml_declaration=True,
                          encoding='utf-8',
                          pretty_print=True).decode('utf-8')

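The backup writer above assembles an OPF 2.0 `<package>` with lxml, using Clark-notation qualified names (`{namespace}tag`) and registered prefixes. The namespace mechanics can be sketched with the stdlib `xml.etree.ElementTree`, which supports the same notation (a reduced sketch; the real code emits many more fields and uses lxml's `nsmap` argument, which `xml.etree` lacks):

```python
import xml.etree.ElementTree as ET

OPF_NS = "http://www.idpf.org/2007/opf"
DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("opf", OPF_NS)
ET.register_namespace("dc", DC_NS)

# Clark notation: "{namespace}tag" binds each element to its namespace
package = ET.Element("{%s}package" % OPF_NS,
                     {"unique-identifier": "uuid_id", "version": "2.0"})
metadata = ET.SubElement(package, "{%s}metadata" % OPF_NS)
title = ET.SubElement(metadata, "{%s}title" % DC_NS)
title.text = "Example Book"
ident = ET.SubElement(metadata, "{%s}identifier" % DC_NS, {"id": "calibre_id"})
ident.set("{%s}scheme" % OPF_NS, "calibre")
ident.text = "42"

print(ET.tostring(package, encoding="unicode"))
```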
@@ -0,0 +1,32 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2023 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from tempfile import gettempdir
import os
import shutil


def get_temp_dir():
    tmp_dir = os.path.join(gettempdir(), 'calibre_web')
    if not os.path.isdir(tmp_dir):
        os.mkdir(tmp_dir)
    return tmp_dir


def del_temp_dir():
    tmp_dir = os.path.join(gettempdir(), 'calibre_web')
    shutil.rmtree(tmp_dir)

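`get_temp_dir` above uses an `isdir` check followed by `mkdir`, which can race if two workers start at once. A variant sketch using `os.makedirs(..., exist_ok=True)`, the idiomatic way to make the check-then-create step safe (an alternative shape, not the code merged above):

```python
import os
import shutil
from tempfile import gettempdir

def get_temp_dir():
    # exist_ok avoids the check-then-create race between concurrent workers
    tmp_dir = os.path.join(gettempdir(), 'calibre_web')
    os.makedirs(tmp_dir, exist_ok=True)
    return tmp_dir

def del_temp_dir():
    # ignore_errors makes cleanup safe when the dir was never created
    shutil.rmtree(os.path.join(gettempdir(), 'calibre_web'), ignore_errors=True)

d = get_temp_dir()
print(os.path.isdir(d))  # True
```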
@@ -23,7 +23,6 @@
 import os
 import hashlib
 import json
-import tempfile
 from uuid import uuid4
 from time import time
 from shutil import move, copyfile

@@ -34,6 +33,7 @@ from flask_login import login_required

 from . import logger, gdriveutils, config, ub, calibre_db, csrf
 from .admin import admin_required
+from .file_helper import get_temp_dir

 gdrive = Blueprint('gdrive', __name__, url_prefix='/gdrive')
 log = logger.create()

@@ -139,9 +139,7 @@ try:
             dbpath = os.path.join(config.config_calibre_dir, "metadata.db").encode()
             if not response['deleted'] and response['file']['title'] == 'metadata.db' \
                     and response['file']['md5Checksum'] != hashlib.md5(dbpath):  # nosec
-                tmp_dir = os.path.join(tempfile.gettempdir(), 'calibre_web')
-                if not os.path.isdir(tmp_dir):
-                    os.mkdir(tmp_dir)
+                tmp_dir = get_temp_dir()

                 log.info('Database file updated')
                 copyfile(dbpath, os.path.join(tmp_dir, "metadata.db_" + str(current_milli_time())))

@@ -34,7 +34,6 @@ except ImportError:
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.exc import OperationalError, InvalidRequestError, IntegrityError
 from sqlalchemy.orm.exc import StaleDataError
 from sqlalchemy.sql.expression import text

 try:
     from httplib2 import __version__ as httplib2_version

cps/helper.py
@@ -22,12 +22,13 @@ import random
 import io
 import mimetypes
 import re
+import regex
 import shutil
 import socket
 from datetime import datetime, timedelta
-from tempfile import gettempdir
 import requests
 import unidecode
+from uuid import uuid4

 from flask import send_from_directory, make_response, redirect, abort, url_for
 from flask_babel import gettext as _

@@ -54,12 +55,16 @@ from . import calibre_db, cli_param
 from .tasks.convert import TaskConvert
 from . import logger, config, db, ub, fs
 from . import gdriveutils as gd
-from .constants import STATIC_DIR as _STATIC_DIR, CACHE_TYPE_THUMBNAILS, THUMBNAIL_TYPE_COVER, THUMBNAIL_TYPE_SERIES
+from .constants import (STATIC_DIR as _STATIC_DIR, CACHE_TYPE_THUMBNAILS, THUMBNAIL_TYPE_COVER, THUMBNAIL_TYPE_SERIES,
+                        SUPPORTED_CALIBRE_BINARIES)
 from .subproc_wrapper import process_wait
 from .services.worker import WorkerThread
 from .tasks.mail import TaskEmail
 from .tasks.thumbnail import TaskClearCoverThumbnailCache, TaskGenerateCoverThumbnails
 from .tasks.metadata_backup import TaskBackupMetadata
+from .file_helper import get_temp_dir
+from .epub_helper import get_content_opf, create_new_metadata_backup, updateEpub, replace_metadata
+from .embed_helper import do_calibre_export

 log = logger.create()

@@ -222,7 +227,7 @@ def send_mail(book_id, book_format, convert, ereader_mail, calibrepath, user_id)
         email_text = N_("%(book)s send to eReader", book=link)
         WorkerThread.add(user_id, TaskEmail(_("Send to eReader"), book.path, converted_file_name,
                                             config.get_mail_settings(), ereader_mail,
-                                            email_text, _('This Email has been sent via Calibre-Web.')))
+                                            email_text, _('This Email has been sent via Calibre-Web.'), book.id))
         return
     return _("The requested file could not be read. Maybe wrong permissions?")

@@ -689,16 +694,18 @@ def valid_password(check_password):
     if config.config_password_policy:
         verify = ""
         if config.config_password_min_length > 0:
-            verify += "^(?=.{" + str(config.config_password_min_length) + ",}$)"
+            verify += r"^(?=.{" + str(config.config_password_min_length) + ",}$)"
         if config.config_password_number:
-            verify += "(?=.*?\d)"
+            verify += r"(?=.*?\d)"
         if config.config_password_lower:
-            verify += "(?=.*?[a-z])"
+            verify += r"(?=.*?[\p{Ll}])"
         if config.config_password_upper:
-            verify += "(?=.*?[A-Z])"
+            verify += r"(?=.*?[\p{Lu}])"
+        if config.config_password_character:
+            verify += r"(?=.*?[\p{Letter}])"
         if config.config_password_special:
-            verify += "(?=.*?[^A-Za-z\s0-9])"
-        match = re.match(verify, check_password)
+            verify += r"(?=.*?[^\p{Letter}\s0-9])"
+        match = regex.match(verify, check_password)
         if not match:
             raise Exception(_("Password doesn't comply with password validation rules"))
     return check_password

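The policy rules above are composed as stacked lookaheads anchored at the start of the string, then matched once. The change switches to the third-party `regex` module precisely because Unicode property classes like `\p{Ll}` and `\p{Lu}` are not available in stdlib `re`. The same composition with `re` and ASCII-only classes, for illustration:

```python
import re

def build_policy(min_length=8, need_digit=True, need_lower=True, need_upper=True):
    verify = ""
    if min_length > 0:
        verify += r"^(?=.{" + str(min_length) + r",}$)"  # whole string at least min_length chars
    if need_digit:
        verify += r"(?=.*?\d)"
    if need_lower:
        verify += r"(?=.*?[a-z])"   # ASCII only; the regex module would use \p{Ll}
    if need_upper:
        verify += r"(?=.*?[A-Z])"   # ASCII only; the regex module would use \p{Lu}
    return verify

policy = build_policy()
print(bool(re.match(policy, "Secret123")))  # True
print(bool(re.match(policy, "short1A")))    # False: fails the length lookahead
```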
@@ -781,7 +788,7 @@ def get_book_cover_internal(book, resolution=None):

     # Send the book cover from the Calibre directory
     else:
-        cover_file_path = os.path.join(config.config_calibre_dir, book.path)
+        cover_file_path = os.path.join(config.get_book_path(), book.path)
         if os.path.isfile(os.path.join(cover_file_path, "cover.jpg")):
             return send_from_directory(cover_file_path, "cover.jpg")
         else:

@@ -921,10 +928,7 @@ def save_cover(img, book_path):
         return False, _("Only jpg/jpeg files are supported as coverfile")

     if config.config_use_google_drive:
-        tmp_dir = os.path.join(gettempdir(), 'calibre_web')
-
-        if not os.path.isdir(tmp_dir):
-            os.mkdir(tmp_dir)
+        tmp_dir = get_temp_dir()
         ret, message = save_cover_from_filestorage(tmp_dir, "uploaded_cover.jpg", img)
         if ret is True:
             gd.uploadFileToEbooksFolder(os.path.join(book_path, 'cover.jpg').replace("\\", "/"),

@@ -934,33 +938,72 @@ def save_cover(img, book_path):
        else:
            return False, message
    else:
-        return save_cover_from_filestorage(os.path.join(config.config_calibre_dir, book_path), "cover.jpg", img)
+        return save_cover_from_filestorage(os.path.join(config.get_book_path(), book_path), "cover.jpg", img)


def do_download_file(book, book_format, client, data, headers):
+    book_name = data.name
    if config.config_use_google_drive:
        # startTime = time.time()
-        df = gd.getFileFromEbooksFolder(book.path, data.name + "." + book_format)
+        df = gd.getFileFromEbooksFolder(book.path, book_name + "." + book_format)
        # log.debug('%s', time.time() - startTime)
        if df:
-            return gd.do_gdrive_download(df, headers)
+            if config.config_embed_metadata and (
+                    (book_format == "kepub" and config.config_kepubifypath) or
+                    (book_format != "kepub" and config.config_binariesdir)):
+                output_path = os.path.join(config.config_calibre_dir, book.path)
+                if not os.path.exists(output_path):
+                    os.makedirs(output_path)
+                output = os.path.join(config.config_calibre_dir, book.path, book_name + "." + book_format)
+                gd.downloadFile(book.path, book_name + "." + book_format, output)
+                if book_format == "kepub" and config.config_kepubifypath:
+                    filename, download_name = do_kepubify_metadata_replace(book, output)
+                elif book_format != "kepub" and config.config_binariesdir:
+                    filename, download_name = do_calibre_export(book.id, book_format)
+            else:
+                return gd.do_gdrive_download(df, headers)
        else:
            abort(404)
    else:
-        filename = os.path.join(config.config_calibre_dir, book.path)
+        filename = os.path.join(config.get_book_path(), book.path)
-        if not os.path.isfile(os.path.join(filename, data.name + "." + book_format)):
+        if not os.path.isfile(os.path.join(filename, book_name + "." + book_format)):
            # ToDo: improve error handling
-            log.error('File not found: %s', os.path.join(filename, data.name + "." + book_format))
+            log.error('File not found: %s', os.path.join(filename, book_name + "." + book_format))

        if client == "kobo" and book_format == "kepub":
            headers["Content-Disposition"] = headers["Content-Disposition"].replace(".kepub", ".kepub.epub")

-        response = make_response(send_from_directory(filename, data.name + "." + book_format))
-        # ToDo Check headers parameter
-        for element in headers:
-            response.headers[element[0]] = element[1]
-        log.info('Downloading file: {}'.format(os.path.join(filename, data.name + "." + book_format)))
-        return response
+        if book_format == "kepub" and config.config_kepubifypath and config.config_embed_metadata:
+            filename, download_name = do_kepubify_metadata_replace(book, os.path.join(filename,
+                                                                                      book_name + "." + book_format))
+        elif book_format != "kepub" and config.config_binariesdir and config.config_embed_metadata:
+            filename, download_name = do_calibre_export(book.id, book_format)
+        else:
+            download_name = book_name
+
+        response = make_response(send_from_directory(filename, download_name + "." + book_format))
+        # ToDo Check headers parameter
+        for element in headers:
+            response.headers[element[0]] = element[1]
+        log.info('Downloading file: {}'.format(os.path.join(filename, book_name + "." + book_format)))
+        return response
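The Kobo branch above renames kepub downloads so the device stores them as `.kepub.epub`. A minimal sketch of that header rewrite (the standalone function name is illustrative, not part of the diff):

```python
def kobo_content_disposition(header_value):
    # Kobo devices expect kepub downloads to end in ".kepub.epub";
    # rewrite the Content-Disposition filename accordingly.
    return header_value.replace(".kepub", ".kepub.epub")

print(kobo_content_disposition("attachment; filename=book.kepub"))
# attachment; filename=book.kepub.epub
```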
+def do_kepubify_metadata_replace(book, file_path):
+    custom_columns = (calibre_db.session.query(db.CustomColumns)
+                      .filter(db.CustomColumns.mark_for_delete == 0)
+                      .filter(db.CustomColumns.datatype.notin_(db.cc_exceptions))
+                      .order_by(db.CustomColumns.label).all())
+
+    tree, cf_name = get_content_opf(file_path)
+    package = create_new_metadata_backup(book, custom_columns, current_user.locale, _("Cover"), lang_type=2)
+    content = replace_metadata(tree, package)
+    tmp_dir = get_temp_dir()
+    temp_file_name = str(uuid4())
+    # open zipfile and replace metadata block in content.opf
+    updateEpub(file_path, os.path.join(tmp_dir, temp_file_name + ".kepub"), cf_name, content)
+    return tmp_dir, temp_file_name


##################################
@@ -984,6 +1027,47 @@ def check_unrar(unrar_location):
        return _('Error executing UnRar')


+def check_calibre(calibre_location):
+    if not calibre_location:
+        return
+
+    if not os.path.exists(calibre_location):
+        return _('Could not find the specified directory')
+
+    if not os.path.isdir(calibre_location):
+        return _('Please specify a directory, not a file')
+
+    try:
+        supported_binary_paths = [os.path.join(calibre_location, binary)
+                                  for binary in SUPPORTED_CALIBRE_BINARIES.values()]
+        binaries_available = [os.path.isfile(binary_path) for binary_path in supported_binary_paths]
+        binaries_executable = [os.access(binary_path, os.X_OK) for binary_path in supported_binary_paths]
+        if all(binaries_available) and all(binaries_executable):
+            values = [process_wait([binary_path, "--version"], pattern=r'\(calibre (.*)\)')
+                      for binary_path in supported_binary_paths]
+            if all(values):
+                version = values[0].group(1)
+                log.debug("calibre version %s", version)
+            else:
+                return _('Calibre binaries not viable')
+        else:
+            ret_val = []
+            missing_binaries = [path for path, available in
+                                zip(SUPPORTED_CALIBRE_BINARIES.values(), binaries_available) if not available]
+
+            missing_perms = [path for path, available in
+                             zip(SUPPORTED_CALIBRE_BINARIES.values(), binaries_executable) if not available]
+            if missing_binaries:
+                ret_val.append(_('Missing calibre binaries: %(missing)s', missing=", ".join(missing_binaries)))
+            if missing_perms:
+                ret_val.append(_('Missing executable permissions: %(missing)s', missing=", ".join(missing_perms)))
+            return ", ".join(ret_val)
+
+    except (OSError, UnicodeDecodeError) as err:
+        log.error_or_exception(err)
+        return _('Error executing Calibre')
def json_serial(obj):
    """JSON serializer for objects not serializable by default json code"""
@@ -1008,43 +1092,38 @@ def tags_filters():


# checks if domain is in database (including wildcards)
# example SELECT * FROM @TABLE WHERE 'abcdefg' LIKE Name;
# from https://code.luasoftware.com/tutorials/flask/execute-raw-sql-in-flask-sqlalchemy/
# in all calls the email address is checked for validity
def check_valid_domain(domain_text):
    sql = "SELECT * FROM registration WHERE (:domain LIKE domain and allow = 1);"
-    result = ub.session.query(ub.Registration).from_statement(text(sql)).params(domain=domain_text).all()
-    if not len(result):
+    if not len(ub.session.query(ub.Registration).from_statement(text(sql)).params(domain=domain_text).all()):
        return False
    sql = "SELECT * FROM registration WHERE (:domain LIKE domain and allow = 0);"
-    result = ub.session.query(ub.Registration).from_statement(text(sql)).params(domain=domain_text).all()
-    return not len(result)
+    return not len(ub.session.query(ub.Registration).from_statement(text(sql)).params(domain=domain_text).all())


def get_download_link(book_id, book_format, client):
    book_format = book_format.split(".")[0]
    book = calibre_db.get_filtered_book(book_id, allow_show_archived=True)
+    data1 = ""
    if book:
        data1 = calibre_db.get_book_format(book.id, book_format.upper())
-        if data1:
-            # collect downloaded books only for registered user and not for anonymous user
-            if current_user.is_authenticated:
-                ub.update_download(book_id, int(current_user.id))
-            file_name = book.title
-            if len(book.authors) > 0:
-                file_name = file_name + ' - ' + book.authors[0].name
-            file_name = get_valid_filename(file_name, replace_whitespace=False)
-            headers = Headers()
-            headers["Content-Type"] = mimetypes.types_map.get('.' + book_format, "application/octet-stream")
-            headers["Content-Disposition"] = "attachment; filename=%s.%s; filename*=UTF-8''%s.%s" % (
-                quote(file_name), book_format, quote(file_name), book_format)
-            return do_download_file(book, book_format, client, data1, headers)
    else:
        log.error("Book id {} not found for downloading".format(book_id))
        abort(404)
+    if data1:
+        # collect downloaded books only for registered user and not for anonymous user
+        if current_user.is_authenticated:
+            ub.update_download(book_id, int(current_user.id))
+        file_name = book.title
+        if len(book.authors) > 0:
+            file_name = file_name + ' - ' + book.authors[0].name
+        file_name = get_valid_filename(file_name, replace_whitespace=False)
+        headers = Headers()
+        headers["Content-Type"] = mimetypes.types_map.get('.' + book_format, "application/octet-stream")
+        headers["Content-Disposition"] = "attachment; filename=%s.%s; filename*=UTF-8''%s.%s" % (
+            quote(file_name), book_format, quote(file_name), book_format)
+        return do_download_file(book, book_format, client, data1, headers)
-        else:
-            abort(404)
+    abort(404)


def clear_cover_thumbnail_cache(book_id):
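`check_valid_domain` above relies on SQL `LIKE` with the pattern stored in the table, so `%` wildcards in the `domain` column match whole mail domains while exact rows can still deny individual addresses. A self-contained sketch of the same two-query logic against an in-memory SQLite table (schema simplified from the `registration` table):

```python
import sqlite3

# In-memory stand-in for the registration table; '%' wildcards in the
# stored domain pattern make LIKE act as the wildcard matcher.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE registration (domain TEXT, allow INTEGER)")
con.execute("INSERT INTO registration VALUES ('%@example.com', 1)")
con.execute("INSERT INTO registration VALUES ('spam@example.com', 0)")

def check_valid_domain(email):
    # Allowed if any allow=1 pattern matches and no allow=0 pattern does.
    allowed = con.execute(
        "SELECT 1 FROM registration WHERE :d LIKE domain AND allow = 1",
        {"d": email}).fetchone()
    denied = con.execute(
        "SELECT 1 FROM registration WHERE :d LIKE domain AND allow = 0",
        {"d": email}).fetchone()
    return bool(allowed) and not denied

print(check_valid_domain("user@example.com"))  # True
print(check_valid_domain("spam@example.com"))  # False
```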
@@ -7760,6 +7760,384 @@ LANGUAGE_NAMES = {
        "zxx": "Нет языкового содержимого",
        "zza": "Зазаки"
    },
    "sk": {
        "abk": "Abkhazian",
        "ace": "Achinese",
        "ach": "Acoli",
        "ada": "Adangme",
        "ady": "Adyghe",
        "aar": "Afar",
        "afh": "Afrihili",
        "afr": "Afrikánsky",
        "ain": "Ainu (Japan)",
        "aka": "Akan",
        "akk": "Akkadian",
        "sqi": "Albanian",
        "ale": "Aleut",
        "amh": "Amharic",
        "anp": "Angika",
        "ara": "Arabská",
        "arg": "Aragonese",
        "arp": "Arapaho",
        "arw": "Arawak",
        "hye": "Arménčina",
        "asm": "Assamese",
        "ast": "Asturian",
        "ava": "Avaric",
        "ave": "Avestan",
        "awa": "Awadhi",
        "aym": "Aymara",
        "aze": "Ázerbajdžánsky",
        "ban": "Balinese",
        "bal": "Baluchi",
        "bam": "Bambara",
        "bas": "Basa (Cameroon)",
        "bak": "Bashkir",
        "eus": "Baskitský",
        "bej": "Beja",
        "bel": "Belarusian",
        "bem": "Bemba (Zambia)",
        "ben": "Bengali",
        "bit": "Berinomo",
        "bho": "Bhojpuri",
        "bik": "Bikol",
        "byn": "Bilin",
        "bin": "Bini",
        "bis": "Bislama",
        "zbl": "Blissymbols",
        "bos": "Bosnian",
        "bra": "Braj",
        "bre": "Bretónsky",
        "bug": "Buginese",
        "bul": "Bulharský",
        "bua": "Buriat",
        "mya": "Burmese",
        "cad": "Caddo",
        "cat": "Katalánsky",
        "ceb": "Cebuano",
        "chg": "Chagatai",
        "cha": "Chamorro",
        "che": "Chechen",
        "chr": "Cherokee",
        "chy": "Cheyenne",
        "chb": "Chibcha",
        "zho": "Čínsky",
        "chn": "Chinook jargon",
        "chp": "Chipewyan",
        "cho": "Choctaw",
        "cht": "Cholón",
        "chk": "Chuukese",
        "chv": "Chuvash",
        "cop": "Coptic",
        "cor": "Cornish",
        "cos": "Corsican",
        "cre": "Cree",
        "mus": "Creek",
        "hrv": "Chorvátsky",
        "ces": "Český",
        "dak": "Dakota",
        "dan": "Dánsky",
        "dar": "Dargwa",
        "del": "Delaware",
        "div": "Dhivehi",
        "din": "Dinka",
        "doi": "Dogri (macrolanguage)",
        "dgr": "Dogrib",
        "dua": "Duala",
        "nld": "Holandský",
        "dse": "Dutch Sign Language",
        "dyu": "Dyula",
        "dzo": "Dzongkha",
        "efi": "Efik",
        "egy": "Egyptian (Ancient)",
        "eka": "Ekajuk",
        "elx": "Elamite",
        "eng": "Angličtina",
        "enu": "Enu",
        "myv": "Erzya",
        "epo": "Esperanto",
        "est": "Estónsky",
        "ewe": "Ewe",
        "ewo": "Ewondo",
        "fan": "Fang (Equatorial Guinea)",
        "fat": "Fanti",
        "fao": "Faroese",
        "fij": "Fijian",
        "fil": "Filipino",
        "fin": "Fínsky",
        "fon": "Fon",
        "fra": "Francúzsky",
        "fur": "Friulian",
        "ful": "Fulah",
        "gaa": "Ga",
        "glg": "Galician",
        "lug": "Ganda",
        "gay": "Gayo",
        "gba": "Gbaya (Central African Republic)",
        "hmj": "Ge",
        "gez": "Geez",
        "kat": "Georgian",
        "deu": "Nemecký",
        "gil": "Gilbertese",
        "gon": "Gondi",
        "gor": "Gorontalo",
        "got": "Gothic",
        "grb": "Grebo",
        "grn": "Guarani",
        "guj": "Gujarati",
        "gwi": "Gwichʼin",
        "hai": "Haida",
        "hau": "Hausa",
        "haw": "Hawaiian",
        "heb": "Hebrejský",
        "her": "Herero",
"hil": "Hiligaynon",
|
||||
"hin": "Hindi",
|
||||
"hmo": "Hiri Motu",
|
||||
"hit": "Hittite",
|
||||
"hmn": "Hmong",
|
||||
"hun": "Maďarský",
|
||||
"hup": "Hupa",
|
||||
"iba": "Iban",
|
||||
"isl": "Islandský",
|
||||
"ido": "Ido",
|
||||
"ibo": "Igbo",
|
||||
"ilo": "Iloko",
|
||||
"ind": "Indonézsky",
|
||||
"inh": "Ingush",
|
||||
"ina": "Interlingua (International Auxiliary Language Association)",
|
||||
"ile": "Interlingue",
|
||||
"iku": "Inuktitut",
|
||||
"ipk": "Inupiaq",
|
||||
"gle": "Írsky",
|
||||
"ita": "Taliansky",
|
||||
"jpn": "Japonský",
|
||||
"jav": "Javanese",
|
||||
"jrb": "Judeo-Arabic",
|
||||
"jpr": "Judeo-Persian",
|
||||
"kbd": "Kabardian",
|
||||
"kab": "Kabyle",
|
||||
"kac": "Kachin",
|
||||
"kal": "Kalaallisut",
|
||||
"xal": "Kalmyk",
|
||||
"kam": "Kamba (Kenya)",
|
||||
"kan": "Kannada",
|
||||
"kau": "Kanuri",
|
||||
"kaa": "Kara-Kalpak",
|
||||
"krc": "Karachay-Balkar",
|
||||
"krl": "Karelian",
|
||||
"kas": "Kashmiri",
|
||||
"csb": "Kashubian",
|
||||
"kaw": "Kawi",
|
||||
"kaz": "Kazakh",
|
||||
"kha": "Khasi",
|
||||
"kho": "Khotanese",
|
||||
"kik": "Kikuyu",
|
||||
"kmb": "Kimbundu",
|
||||
"kin": "Kinyarwanda",
|
||||
"kir": "Kirghiz",
|
||||
"tlh": "Klingon",
|
||||
"kom": "Komi",
|
||||
"kon": "Kongo",
|
||||
"kok": "Konkani (macrolanguage)",
|
||||
"kor": "Kórejský",
|
||||
"kos": "Kosraean",
|
||||
"kpe": "Kpelle",
|
||||
"kua": "Kuanyama",
|
||||
"kum": "Kumyk",
|
||||
"kur": "Kurdský",
|
||||
"kru": "Kurukh",
|
||||
"kut": "Kutenai",
|
||||
"lad": "Ladino",
|
||||
"lah": "Lahnda",
|
||||
"lam": "Lamba",
|
||||
"lao": "Lao",
|
||||
"lat": "Latin",
|
||||
"lav": "Latvian",
|
||||
"lez": "Lezghian",
|
||||
"lim": "Limburgan",
|
||||
"lin": "Lingala",
|
||||
"lit": "Lotyšský",
|
||||
"jbo": "Lojban",
|
||||
"loz": "Lozi",
|
||||
"lub": "Luba-Katanga",
|
||||
"lua": "Luba-Lulua",
|
||||
"lui": "Luiseno",
|
||||
"smj": "Lule Sami",
|
||||
"lun": "Lunda",
|
||||
"luo": "Luo (Kenya and Tanzania)",
|
||||
"lus": "Lushai",
|
||||
"ltz": "Luxembourgish",
|
||||
"mkd": "Macedónsky",
|
||||
"mad": "Madurese",
|
||||
"mag": "Magahi",
|
||||
"mai": "Maithili",
|
||||
"mak": "Makasar",
|
||||
"mlg": "Malagasy",
|
||||
"msa": "Malay (macrolanguage)",
|
||||
"mal": "Malayalam",
|
||||
"mlt": "Maltézsky",
|
||||
"mnc": "Manchu",
|
||||
"mdr": "Mandar",
|
||||
"man": "Mandingo",
|
||||
"mni": "Manipuri",
|
||||
"glv": "Manx",
|
||||
"mri": "Maori",
|
||||
"arn": "Mapudungun",
|
||||
"mar": "Marathi",
|
||||
"chm": "Mari (Russia)",
|
||||
"mah": "Marshallese",
|
||||
"mwr": "Marwari",
|
||||
"mas": "Masai",
|
||||
"men": "Mende (Sierra Leone)",
|
||||
"mic": "Mi'kmaq",
|
||||
"min": "Minangkabau",
|
||||
"mwl": "Mirandese",
|
||||
"moh": "Mohawk",
|
||||
"mdf": "Moksha",
|
||||
"lol": "Mongo",
|
||||
"mon": "Mongolian",
|
||||
"mos": "Mossi",
|
||||
"mul": "Multiple languages",
|
||||
"nqo": "N'Ko",
|
||||
"nau": "Nauru",
|
||||
"nav": "Navajo",
|
||||
"ndo": "Ndonga",
|
||||
"nap": "Neapolitan",
|
||||
"nia": "Nias",
|
||||
"niu": "Niuean",
|
||||
"zxx": "No linguistic content",
|
||||
"nog": "Nogai",
|
||||
"nor": "Norwegian",
|
||||
"nob": "Norwegian Bokmål",
|
||||
"nno": "Norwegian Nynorsk",
|
||||
"nym": "Nyamwezi",
|
||||
"nya": "Nyanja",
|
||||
"nyn": "Nyankole",
|
||||
"nyo": "Nyoro",
|
||||
"nzi": "Nzima",
|
||||
"oci": "Occitan (post 1500)",
|
||||
"oji": "Ojibwa",
|
||||
"orm": "Oromo",
|
||||
"osa": "Osage",
|
||||
"oss": "Ossetian",
|
||||
"pal": "Pahlavi",
|
||||
"pau": "Palauan",
|
||||
"pli": "Pali",
|
||||
"pam": "Pampanga",
|
||||
"pag": "Pangasinan",
|
||||
"pan": "Panjabi",
|
||||
"pap": "Papiamento",
|
||||
"fas": "Persian",
|
||||
"phn": "Phoenician",
|
||||
"pon": "Pohnpeian",
|
||||
"pol": "Poľský",
|
||||
"por": "Portugalský",
|
||||
"pus": "Pashto",
|
||||
"que": "Quechua",
|
||||
"raj": "Rajasthani",
|
||||
"rap": "Rapanui",
|
||||
"ron": "Rumunský",
|
||||
"roh": "Romansh",
|
||||
"rom": "Romany",
|
||||
"run": "Rundi",
|
||||
"rus": "Ruský",
|
||||
"smo": "Samoan",
|
||||
"sad": "Sandawe",
|
||||
"sag": "Sango",
|
||||
"san": "Sanskrit",
|
||||
"sat": "Santali",
|
||||
"srd": "Sardinian",
|
||||
"sas": "Sasak",
|
||||
"sco": "Scots",
|
||||
"sel": "Selkup",
|
||||
"srp": "Srbský",
|
||||
"srr": "Serer",
|
||||
"shn": "Shan",
|
||||
"sna": "Shona",
|
||||
"scn": "Sicilian",
|
||||
"sid": "Sidamo",
|
||||
"bla": "Siksika",
|
||||
"snd": "Sindhi",
|
||||
"sin": "Sinhala",
|
||||
"den": "Slave (Athapascan)",
|
||||
"slk": "Slovenský",
|
||||
"slv": "Slovinský",
|
||||
"sog": "Sogdian",
|
||||
"som": "Somali",
|
||||
"snk": "Soninke",
|
||||
"spa": "Španielsky",
|
||||
"srn": "Sranan Tongo",
|
||||
"suk": "Sukuma",
|
||||
"sux": "Sumerian",
|
||||
"sun": "Sundanese",
|
||||
"sus": "Susu",
|
||||
"swa": "Swahili (macrolanguage)",
|
||||
"ssw": "Swati",
|
||||
"swe": "Švédsky",
|
||||
"syr": "Syriac",
|
||||
"tgl": "Tagalog",
|
||||
"tah": "Tahitian",
|
||||
"tgk": "Tajik",
|
||||
"tmh": "Tamashek",
|
||||
"tam": "Tamilský",
|
||||
"tat": "Tatar",
|
||||
"tel": "Telugu",
|
||||
"ter": "Tereno",
|
||||
"tet": "Tetum",
|
||||
"tha": "Thajský",
|
||||
"bod": "Tibetan",
|
||||
"tig": "Tigre",
|
||||
"tir": "Tigrinya",
|
||||
"tem": "Timne",
|
||||
"tiv": "Tiv",
|
||||
"tli": "Tlingit",
|
||||
"tpi": "Tok Pisin",
|
||||
"tkl": "Tokelau",
|
||||
"tog": "Tonga (Nyasa)",
|
||||
"ton": "Tonga (Tonga Islands)",
|
||||
"tsi": "Tsimshian",
|
||||
"tso": "Tsonga",
|
||||
"tsn": "Tswana",
|
||||
"tum": "Tumbuka",
|
||||
"tur": "Turecký",
|
||||
"tuk": "Turkmen",
|
||||
"tvl": "Tuvalu",
|
||||
"tyv": "Tuvinian",
|
||||
"twi": "Twi",
|
||||
"udm": "Udmurt",
|
||||
"uga": "Ugaritic",
|
||||
"uig": "Uighur",
|
||||
"ukr": "Ukrainian",
|
||||
"umb": "Umbundu",
|
||||
"mis": "Uncoded languages",
|
||||
"und": "Undetermined",
|
||||
"urd": "Urdu",
|
||||
"uzb": "Uzbek",
|
||||
"vai": "Vai",
|
||||
"ven": "Venda",
|
||||
"vie": "Vietnamský",
|
||||
"vol": "Volapük",
|
||||
"vot": "Votic",
|
||||
"wln": "Vallónsky",
|
||||
"war": "Waray (Philippines)",
|
||||
"was": "Washo",
|
||||
"cym": "Welšský",
|
||||
"wal": "Wolaytta",
|
||||
"wol": "Wolof",
|
||||
"xho": "Xhosa",
|
||||
"sah": "Yakut",
|
||||
"yao": "Yao",
|
||||
"yap": "Yapese",
|
||||
"yid": "Yiddish",
|
||||
"yor": "Yoruba",
|
||||
"zap": "Zapotec",
|
||||
"zza": "Zaza",
|
||||
"zen": "Zenaga",
|
||||
"zha": "Zhuang",
|
||||
"zul": "Zulu",
|
||||
"zun": "Zuni"
|
||||
},
|
||||
"sv": {
|
||||
"aar": "Afar",
|
||||
"abk": "Abchaziska",
|
||||
|
|
|
@@ -124,7 +124,7 @@ def formatseriesindex_filter(series_index):
            return int(series_index)
        else:
            return series_index
-    except ValueError:
+    except (ValueError, TypeError):
        return series_index
    return 0
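The widened `except (ValueError, TypeError)` above guards the series-index filter against `None` and other non-numeric input, not just unparsable strings. A simplified stand-in (not the filter's exact body) showing the behavior the change protects:

```python
def format_series_index(series_index):
    # Render whole-number indices without the trailing ".0", keep real
    # fractions as-is, and fall back to 0 for None or unparsable input.
    try:
        if int(series_index) - series_index == 0:
            return int(series_index)
        return series_index
    except (ValueError, TypeError):
        return 0

print(format_series_index(2.0))   # 2
print(format_series_index(1.5))   # 1.5
print(format_series_index(None))  # 0
```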
cps/kobo.py
@@ -56,7 +56,7 @@ from .kobo_auth import requires_kobo_auth, get_auth_token

KOBO_FORMATS = {"KEPUB": ["KEPUB"], "EPUB": ["EPUB3", "EPUB"]}
KOBO_STOREAPI_URL = "https://storeapi.kobo.com"
-KOBO_IMAGEHOST_URL = "https://kbimages1-a.akamaihd.net"
+KOBO_IMAGEHOST_URL = "https://cdn.kobo.com/book-images"

SYNC_ITEM_LIMIT = 100
@@ -137,10 +137,13 @@ def convert_to_kobo_timestamp_string(timestamp):

@kobo.route("/v1/library/sync")
@requires_kobo_auth
-@download_required
+# @download_required
def HandleSyncRequest():
+    if not current_user.role_download():
+        log.info("Users need download permissions for syncing library to Kobo reader")
+        return abort(403)
    sync_token = SyncToken.SyncToken.from_headers(request.headers)
-    log.info("Kobo library sync request received.")
+    log.info("Kobo library sync request received")
    log.debug("SyncToken: {}".format(sync_token))
    log.debug("Download link format {}".format(get_download_url_for_book('[bookid]', '[bookformat]')))
    if not current_app.wsgi_app.is_proxied:
@@ -205,7 +208,7 @@ def HandleSyncRequest():
    for book in books:
        formats = [data.format for data in book.Books.data]
        if 'KEPUB' not in formats and config.config_kepubifypath and 'EPUB' in formats:
-            helper.convert_book_format(book.Books.id, config.config_calibre_dir, 'EPUB', 'KEPUB', current_user.name)
+            helper.convert_book_format(book.Books.id, config.get_book_path(), 'EPUB', 'KEPUB', current_user.name)

        kobo_reading_state = get_or_create_reading_state(book.Books.id)
        entitlement = {
@@ -315,7 +318,7 @@ def generate_sync_response(sync_token, sync_results, set_cont=False):
            extra_headers["x-kobo-recent-reads"] = store_response.headers.get("x-kobo-recent-reads")

    except Exception as ex:
-        log.error("Failed to receive or parse response from Kobo's sync endpoint: {}".format(ex))
+        log.error_or_exception("Failed to receive or parse response from Kobo's sync endpoint: {}".format(ex))
    if set_cont:
        extra_headers["x-kobo-sync"] = "continue"
    sync_token.to_headers(extra_headers)
@@ -959,6 +962,7 @@ def HandleUnimplementedRequest(dummy=None):
@kobo.route("/v1/user/wishlist", methods=["GET", "POST"])
@kobo.route("/v1/user/recommendations", methods=["GET", "POST"])
@kobo.route("/v1/analytics/<dummy>", methods=["GET", "POST"])
+@kobo.route("/v1/assets", methods=["GET"])
def HandleUserRequest(dummy=None):
    log.debug("Unimplemented User Request received: %s (request is forwarded to kobo if configured)", request.base_url)
    return redirect_or_proxy_request()
@@ -156,6 +156,9 @@ def requires_kobo_auth(f):
            limiter.check()
        except RateLimitExceeded:
            return abort(429)
+        except (ConnectionError, Exception) as e:
+            log.error("Connection error to limiter backend: %s", e)
+            return abort(429)
        user = (
            ub.session.query(ub.User)
            .join(ub.RemoteAuthToken)
@@ -150,7 +150,7 @@ def setup(log_file, log_level=None):
    else:
        try:
            file_handler = RotatingFileHandler(log_file, maxBytes=100000, backupCount=2, encoding='utf-8')
-        except IOError:
+        except (IOError, PermissionError):
            if log_file == DEFAULT_LOG_FILE:
                raise
            file_handler = RotatingFileHandler(DEFAULT_LOG_FILE, maxBytes=100000, backupCount=2, encoding='utf-8')

@@ -177,7 +177,7 @@ def create_access_log(log_file, log_name, formatter):
    access_log.setLevel(logging.INFO)
    try:
        file_handler = RotatingFileHandler(log_file, maxBytes=50000, backupCount=2, encoding='utf-8')
-    except IOError:
+    except (IOError, PermissionError):
        if log_file == DEFAULT_ACCESS_LOG:
            raise
        file_handler = RotatingFileHandler(DEFAULT_ACCESS_LOG, maxBytes=50000, backupCount=2, encoding='utf-8')
@@ -169,7 +169,8 @@ class Douban(Metadata):
                ),
            )

-            html = etree.HTML(r.content.decode("utf8"))
+            decode_content = r.content.decode("utf8")
+            html = etree.HTML(decode_content)

            match.title = html.xpath(self.TITTLE_XPATH)[0].text
            match.cover = html.xpath(

@@ -184,7 +185,7 @@ class Douban(Metadata):
        if len(tag_elements):
            match.tags = [tag_element.text for tag_element in tag_elements]
        else:
-            match.tags = self._get_tags(html.text)
+            match.tags = self._get_tags(decode_content)

        description_element = html.xpath(self.DESCRIPTION_XPATH)
        if len(description_element):
@@ -502,7 +502,7 @@ def render_element_index(database_column, linked_table, folder):
        entries = entries.join(linked_table).join(db.Books)
    entries = entries.filter(calibre_db.common_filters()).group_by(func.upper(func.substr(database_column, 1, 1))).all()
    elements = []
-    if off == 0:
+    if off == 0 and entries:
        elements.append({'id': "00", 'name': _("All")})
        shift = 1
    for entry in entries[
@@ -29,7 +29,7 @@

from urllib.parse import urlparse, urljoin

-from flask import request, url_for, redirect
+from flask import request, url_for, redirect, current_app


def is_safe_url(target):

@@ -38,16 +38,15 @@ def is_safe_url(target):
    return test_url.scheme in ('http', 'https') and ref_url.netloc == test_url.netloc


-def get_redirect_target():
-    for target in request.values.get('next'), request.referrer:
-        if not target:
-            continue
-        if is_safe_url(target):
-            return target
+def remove_prefix(text, prefix):
+    if text.startswith(prefix):
+        return text[len(prefix):]
+    return ""


-def redirect_back(endpoint, **values):
-    target = request.form['next']
-    if not target or not is_safe_url(target):
+def get_redirect_location(next, endpoint, **values):
+    target = next or url_for(endpoint, **values)
+    adapter = current_app.url_map.bind(urlparse(request.host_url).netloc)
+    if not len(adapter.allowed_methods(remove_prefix(target, request.environ.get('HTTP_X_SCRIPT_NAME', "")))):
        target = url_for(endpoint, **values)
-    return redirect(target)
+    return target
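`get_redirect_location` validates the `next` target by stripping the reverse-proxy prefix (`HTTP_X_SCRIPT_NAME`) before matching it against the app's `url_map`. The `remove_prefix` helper from the hunk is small enough to run standalone; note that an unknown prefix yields an empty string, which then fails the route match and falls back to the default endpoint:

```python
def remove_prefix(text, prefix):
    # Strip a reverse-proxy script-name prefix so the remaining path can
    # be matched against the application's url_map; anything that does
    # not start with the prefix is rejected by returning "".
    if text.startswith(prefix):
        return text[len(prefix):]
    return ""

print(remove_prefix("/calibre-web/shelf/1", "/calibre-web"))  # /shelf/1
print(remove_prefix("/shelf/1", ""))                          # /shelf/1
```

When `HTTP_X_SCRIPT_NAME` is unset the prefix is the empty string, so every target starts with it and passes through unchanged.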
@@ -21,6 +21,7 @@ import datetime

from . import config, constants
from .services.background_scheduler import BackgroundScheduler, CronTrigger, use_APScheduler
from .tasks.database import TaskReconnectDatabase
+from .tasks.tempFolder import TaskDeleteTempFolder
from .tasks.thumbnail import TaskGenerateCoverThumbnails, TaskGenerateSeriesThumbnails, TaskClearCoverThumbnailCache
from .services.worker import WorkerThread
from .tasks.metadata_backup import TaskBackupMetadata

@@ -31,6 +32,9 @@ def get_scheduled_tasks(reconnect=True):
    if reconnect:
        tasks.append([lambda: TaskReconnectDatabase(), 'reconnect', False])

+    # Delete temp folder
+    tasks.append([lambda: TaskDeleteTempFolder(), 'delete temp', True])
+
    # Generate metadata.opf file for each changed book
    if config.schedule_metadata_backup:
        tasks.append([lambda: TaskBackupMetadata("en"), 'backup metadata', False])

@@ -86,6 +90,8 @@ def register_startup_tasks():
        # Ignore tasks that should currently be running, as these will be added when registering scheduled tasks
        if constants.APP_MODE in ['development', 'test'] and not should_task_be_running(start, duration):
            scheduler.schedule_tasks_immediately(tasks=get_scheduled_tasks(False))
+        else:
+            scheduler.schedule_tasks_immediately(tasks=[[lambda: TaskDeleteTempFolder(), 'delete temp', True]])


def should_task_be_running(start, duration):
@@ -217,8 +217,8 @@ def extend_search_term(searchterm,
        searchterm.extend([_("Rating <= %(rating)s", rating=rating_high)])
    if rating_low:
        searchterm.extend([_("Rating >= %(rating)s", rating=rating_low)])
-    if read_status:
-        searchterm.extend([_("Read Status = %(status)s", status=read_status)])
+    if read_status != "Any":
+        searchterm.extend([_("Read Status = '%(status)s'", status=read_status)])
    searchterm.extend(ext for ext in tags['include_extension'])
    searchterm.extend(ext for ext in tags['exclude_extension'])
    # handle custom columns

@@ -283,7 +283,7 @@ def render_adv_search_results(term, offset=None, order=None, limit=None):
            cc_present = True

    if any(tags.values()) or author_name or book_title or publisher or pub_start or pub_end or rating_low \
-            or rating_high or description or cc_present or read_status:
+            or rating_high or description or cc_present or read_status != "Any":
        search_term, pub_start, pub_end = extend_search_term(search_term,
                                                             author_name,
                                                             book_title,

@@ -302,7 +302,8 @@ def render_adv_search_results(term, offset=None, order=None, limit=None):
        q = q.filter(func.datetime(db.Books.pubdate) > func.datetime(pub_start))
    if pub_end:
        q = q.filter(func.datetime(db.Books.pubdate) < func.datetime(pub_end))
-    q = q.filter(adv_search_read_status(read_status))
+    if read_status != "Any":
+        q = q.filter(adv_search_read_status(read_status))
    if publisher:
        q = q.filter(db.Books.publishers.any(func.lower(db.Publishers.name).ilike("%" + publisher + "%")))
    q = adv_search_tag(q, tags['include_tag'], tags['exclude_tag'])
@@ -21,12 +21,13 @@ import os
import errno
import signal
import socket
import subprocess  # nosec
+import asyncio

try:
    from gevent.pywsgi import WSGIServer
    from .gevent_wsgi import MyWSGIHandler
    from gevent.pool import Pool
+    from gevent.socket import socket as GeventSocket
    from gevent import __version__ as _version
    from greenlet import GreenletExit
    import ssl

@@ -36,6 +37,7 @@ except ImportError:
    from .tornado_wsgi import MyWSGIContainer
    from tornado.httpserver import HTTPServer
    from tornado.ioloop import IOLoop
+    from tornado import netutil
    from tornado import version as _version
    VERSION = 'Tornado ' + _version
    _GEVENT = False
@ -95,7 +97,12 @@ class WebServer(object):
|
|||
log.warning('Cert path: %s', certfile_path)
|
||||
log.warning('Key path: %s', keyfile_path)
|
||||
|
||||
def _make_gevent_unix_socket(self, socket_file):
|
||||
def _make_gevent_socket_activated(self):
|
||||
# Reuse an already open socket on fd=SD_LISTEN_FDS_START
|
||||
SD_LISTEN_FDS_START = 3
|
||||
return GeventSocket(fileno=SD_LISTEN_FDS_START)
|
||||
|
||||
def _prepare_unix_socket(self, socket_file):
|
||||
# the socket file must not exist prior to bind()
|
||||
if os.path.exists(socket_file):
|
||||
# avoid nuking regular files and symbolic links (could be a mistype or security issue)
|
||||
|
@ -103,35 +110,41 @@ class WebServer(object):
|
|||
raise OSError(errno.EEXIST, os.strerror(errno.EEXIST), socket_file)
os.remove(socket_file)
unix_sock = WSGIServer.get_listener(socket_file, family=socket.AF_UNIX)
self.unix_socket_file = socket_file
# ensure current user and group have r/w permissions, no permissions for other users
# this way the socket can be shared in a semi-secure manner
# between the user running calibre-web and the user running the fronting webserver
os.chmod(socket_file, 0o660)
return unix_sock

def _make_gevent_socket(self):
def _make_gevent_listener(self):
if os.name != 'nt':
socket_activated = os.environ.get("LISTEN_FDS")
if socket_activated:
sock = self._make_gevent_socket_activated()
sock_info = sock.getsockname()
return sock, "systemd-socket:" + _readable_listen_address(sock_info[0], sock_info[1])
unix_socket_file = os.environ.get("CALIBRE_UNIX_SOCKET")
if unix_socket_file:
return self._make_gevent_unix_socket(unix_socket_file), "unix:" + unix_socket_file
self._prepare_unix_socket(unix_socket_file)
unix_sock = WSGIServer.get_listener(unix_socket_file, family=socket.AF_UNIX)
# ensure current user and group have r/w permissions, no permissions for other users
# this way the socket can be shared in a semi-secure manner
# between the user running calibre-web and the user running the fronting webserver
os.chmod(unix_socket_file, 0o660)
return unix_sock, "unix:" + unix_socket_file

if self.listen_address:
return (self.listen_address, self.listen_port), None
return ((self.listen_address, self.listen_port),
_readable_listen_address(self.listen_address, self.listen_port))

if os.name == 'nt':
self.listen_address = '0.0.0.0'
return (self.listen_address, self.listen_port), None
return ((self.listen_address, self.listen_port),
_readable_listen_address(self.listen_address, self.listen_port))

try:
address = ('::', self.listen_port)
sock = WSGIServer.get_listener(address, family=socket.AF_INET6)
except socket.error as ex:
log.error('%s', ex)
log.warning('Unable to listen on "", trying on IPv4 only...')
log.warning('Unable to listen on {}, trying on IPv4 only...'.format(address))
address = ('', self.listen_port)
sock = WSGIServer.get_listener(address, family=socket.AF_INET)
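The listener code above detects systemd socket activation through the LISTEN_FDS environment variable and reuses file descriptor 3. A minimal standalone sketch of that detection (the helper name is mine, not from the patch; a full implementation would also verify LISTEN_PID):

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # systemd passes inherited sockets starting at fd 3


def inherited_sockets():
    # LISTEN_FDS holds the number of sockets systemd handed to this process;
    # wrap each inherited fd in a socket object without creating a new one.
    count = int(os.environ.get("LISTEN_FDS", 0))
    return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(count)]
```

When the process is not socket-activated, LISTEN_FDS is unset and the list is empty, so the caller can fall back to binding its own address.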
@@ -201,9 +214,7 @@ class WebServer(object):
ssl_args = self.ssl_args or {}

try:
sock, output = self._make_gevent_socket()
if output is None:
output = _readable_listen_address(self.listen_address, self.listen_port)
sock, output = self._make_gevent_listener()
log.info('Starting Gevent server on %s', output)
self.wsgiserver = WSGIServer(sock, self.app, log=self.access_logger, handler_class=MyWSGIHandler,
error_log=log,
@@ -228,17 +239,42 @@ class WebServer(object):
if os.name == 'nt' and sys.version_info > (3, 7):
import asyncio
asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
log.info('Starting Tornado server on %s', _readable_listen_address(self.listen_address, self.listen_port))
try:
# Max Buffersize set to 200MB
http_server = HTTPServer(MyWSGIContainer(self.app),
max_buffer_size=209700000,
ssl_options=self.ssl_args)

# Max Buffersize set to 200MB
http_server = HTTPServer(MyWSGIContainer(self.app),
max_buffer_size=209700000,
ssl_options=self.ssl_args)
http_server.listen(self.listen_port, self.listen_address)
self.wsgiserver = IOLoop.current()
self.wsgiserver.start()
# wait for stop signal
self.wsgiserver.close(True)
unix_socket_file = os.environ.get("CALIBRE_UNIX_SOCKET")
if os.environ.get("LISTEN_FDS") and os.name != 'nt':
SD_LISTEN_FDS_START = 3
sock = socket.socket(fileno=SD_LISTEN_FDS_START)
http_server.add_socket(sock)
sock.setblocking(0)
socket_name = sock.getsockname()
output = "systemd-socket:" + _readable_listen_address(socket_name[0], socket_name[1])
elif unix_socket_file and os.name != 'nt':
self._prepare_unix_socket(unix_socket_file)
output = "unix:" + unix_socket_file
unix_socket = netutil.bind_unix_socket(self.unix_socket_file)
http_server.add_socket(unix_socket)
# ensure current user and group have r/w permissions, no permissions for other users
# this way the socket can be shared in a semi-secure manner
# between the user running calibre-web and the user running the fronting webserver
os.chmod(self.unix_socket_file, 0o660)
else:
output = _readable_listen_address(self.listen_address, self.listen_port)
http_server.listen(self.listen_port, self.listen_address)
log.info('Starting Tornado server on %s', output)

self.wsgiserver = IOLoop.current()
self.wsgiserver.start()
# wait for stop signal
self.wsgiserver.close(True)
finally:
if self.unix_socket_file:
os.remove(self.unix_socket_file)
self.unix_socket_file = None

def start(self):
try:
@@ -291,4 +327,5 @@ class WebServer(object):
if restart:
self.wsgiserver.call_later(1.0, self.wsgiserver.stop)
else:
self.wsgiserver.add_callback_from_signal(self.wsgiserver.stop)
self.wsgiserver.asyncio_loop.call_soon_threadsafe(self.wsgiserver.stop)
@@ -33,7 +33,7 @@ def b64encode_json(json_data):
return b64encode(json.dumps(json_data).encode())

# Python3 has a timestamp() method we could be calling, however it's not avaiable in python2.
# Python3 has a timestamp() method we could be calling, however it's not available in python2.
def to_epoch_timestamp(datetime_object):
return (datetime_object - datetime(1970, 1, 1)).total_seconds()
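The epoch computation in to_epoch_timestamp is a plain subtraction from the Unix epoch; for a naive datetime interpreted as UTC it agrees with Python 3's built-in timestamp(). A quick self-check:

```python
from datetime import datetime, timezone


def to_epoch_timestamp(datetime_object):
    # Manual seconds-since-epoch, kept for Python 2 compatibility where
    # datetime.timestamp() does not exist.
    return (datetime_object - datetime(1970, 1, 1)).total_seconds()


# One day after the epoch is exactly 86400 seconds.
assert to_epoch_timestamp(datetime(1970, 1, 2)) == 86400.0

# Matches the Python 3 built-in when the naive datetime is treated as UTC.
dt = datetime(2020, 5, 17, 12, 0, 0)
assert to_epoch_timestamp(dt) == dt.replace(tzinfo=timezone.utc).timestamp()
```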
@@ -47,7 +47,7 @@ def get_datetime_from_json(json_object, field_name):

class SyncToken:
""" The SyncToken is used to persist state accross requests.
""" The SyncToken is used to persist state across requests.
When serialized over the response headers, the Kobo device will propagate the token onto following
requests to the service. As an example use-case, the SyncToken is used to detect books that have been added
to the library since the last time the device synced to the server.
@@ -266,3 +266,6 @@ class CalibreTask:
def _handleSuccess(self):
self.stat = STAT_FINISH_SUCCESS
self.progress = 1

def __str__(self):
return self.name
@@ -71,6 +71,14 @@ def add_to_shelf(shelf_id, book_id):
else:
maxOrder = maxOrder[0]

if not calibre_db.session.query(db.Books).filter(db.Books.id == book_id).one_or_none():
log.error("Invalid Book Id: %s. Could not be added to shelf %s", book_id, shelf.name)
if not xhr:
flash(_("%(book_id)s is a invalid Book Id. Could not be added to Shelf", book_id=book_id),
category="error")
return redirect(url_for('web.index'))
return "%s is a invalid Book Id. Could not be added to Shelf" % book_id, 400

shelf.books.append(ub.BookShelf(shelf=shelf.id, book_id=book_id, order=maxOrder + 1))
shelf.last_modified = datetime.utcnow()
try:
@@ -3296,6 +3296,7 @@ div.btn-group[role=group][aria-label="Download, send to Kindle, reading"] .dropd
left: 0 !important;
}
#add-to-shelves {
min-height: 48px;
max-height: calc(100% - 120px);
overflow-y: auto;
}
@@ -4812,8 +4813,14 @@ body.advsearch:not(.blur) > div.container-fluid > div.row-fluid > div.col-sm-10
z-index: 999999999999999999999999999999999999
}

.search #shelf-actions, body.login .home-btn {
display: none
body.search #shelf-actions button#add-to-shelf {
height: 40px;
}
@media screen and (max-width: 767px) {
body.search .discover, body.advsearch .discover {
display: flex;
flex-direction: column;
}
}

body.read:not(.blur) a[href*=readbooks] {
@@ -5134,7 +5141,7 @@ body.login > div.navbar.navbar-default.navbar-static-top > div > div.navbar-head
right: 5px
}

#shelf-actions > .btn-group.open, .downloadBtn.open, .profileDrop[aria-expanded=true] {
body:not(.search) #shelf-actions > .btn-group.open, .downloadBtn.open, .profileDrop[aria-expanded=true] {
pointer-events: none
}
@@ -5151,7 +5158,7 @@ body.login > div.navbar.navbar-default.navbar-static-top > div > div.navbar-head
color: var(--color-primary)
}

#shelf-actions, #shelf-actions > .btn-group, #shelf-actions > .btn-group > .empty-ul {
body:not(.search) #shelf-actions, body:not(.search) #shelf-actions > .btn-group, body:not(.search) #shelf-actions > .btn-group > .empty-ul {
pointer-events: none
}
Binary file not shown.
Before Width: | Height: | Size: 60 KiB After Width: | Height: | Size: 8.9 KiB |
Binary file not shown.
After Width: | Height: | Size: 27 KiB |
@@ -0,0 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg xmlns="http://www.w3.org/2000/svg" version="1.1" width="140px" height="140px" style="shape-rendering:geometricPrecision; text-rendering:geometricPrecision; image-rendering:optimizeQuality; fill-rule:evenodd; clip-rule:evenodd" xmlns:xlink="http://www.w3.org/1999/xlink">
<g><path style="opacity:1" fill="#45b29d" d="M 70.5,5.5 C 87.7691,3.12603 97.4358,10.4594 99.5,27.5C 95.637,46.6972 84.3037,59.1972 65.5,65C 60.9053,66.3929 56.2387,66.7262 51.5,66C 50.0692,65.5348 48.9025,64.7014 48,63.5C 47.3333,60.5 47.3333,57.5 48,54.5C 62.2513,56.0484 73.5846,50.715 82,38.5C 85.0332,33.8945 86.0332,28.8945 85,23.5C 83.0488,19.2854 79.7155,17.2854 75,17.5C 65.5257,19.0759 57.859,23.7425 52,31.5C 38.306,51.6368 33.9727,73.6368 39,97.5C 44.5639,116.532 56.7306,122.699 75.5,116C 80.6017,113.385 85.2684,110.218 89.5,106.5C 95.1927,108.891 96.6927,112.891 94,118.5C 78.4211,132.151 61.2544,134.651 42.5,126C 31.5182,117.21 25.3516,105.71 24,91.5C 20.9978,65.8515 27.3311,42.8515 43,22.5C 50.6154,14.1193 59.7821,8.45258 70.5,5.5 Z"/></g>
</svg>
After Width: | Height: | Size: 1.2 KiB |
@@ -81,56 +81,6 @@ if ($("body.book").length > 0) {
$(".rating").insertBefore(".hr");
$("#remove-from-shelves").insertAfter(".hr");
$(description).appendTo(".bookinfo")
/* if book description is not in html format, Remove extra line breaks
Remove blank lines/unnecessary spaces, split by line break to array
Push array into .description div. If there is still a wall of text,
find sentences and split wall into groups of three sentence paragraphs.
If the book format is in html format, Keep html, but strip away inline
styles and empty elements */

// If text is sitting in div as text node
if ($(".comments:has(p)").length === 0) {
newdesc = description.text()
.replace(/^(?=\n)$|^\s*|\s*$|\n\n+/gm, "").split(/\n/);
$(".comments").empty();
$.each(newdesc, function (i, val) {
$("div.comments").append("<p>" + newdesc[i] + "</p>");
});
$(".comments").fadeIn(100);
} //If still a wall of text create 3 sentence paragraphs.
if ($(".comments p").length === 1) {
if (description.context != undefined) {
newdesc = description.text()
.replace(/^(?=\n)$|^\s*|\s*$|\n\n+/gm, "").split(/\n/);
} else {
newdesc = description.text();
}
doc = nlp(newdesc.toString());
sentences = doc.map((m) => m.out("text"));
sentences[0] = sentences[0].replace(",", "");
$(".comments p").remove();
let size = 3;
let sentenceChunks = [];
for (var i = 0; i < sentences.length; i += size) {
sentenceChunks.push(sentences.slice(i, i + size));
}
let output = '';
$.each(sentenceChunks, function (i, val) {
let preOutput = '';
$.each(val, function (i, val) {
preOutput += val;
});
output += "<p>" + preOutput + "</p>";
});
$("div.comments").append(output);
} else {
$.each(description, function (i, val) {
// $( description[i].outerHTML ).appendTo( ".comments" );
$("div.comments :empty").remove();
$("div.comments ").attr("style", "");
});
$("div.comments").fadeIn(100);
}

// Sexy blurred backgrounds
cover = $(".cover img").attr("src");
@@ -369,6 +319,13 @@ $("div.comments").readmore({
// End of Global Work //
///////////////////////////////

// Search Results
if($("body.search").length > 0) {
$('div[aria-label="Add to shelves"]').click(function () {
$("#add-to-shelves").toggle();
});
}

// Advanced Search Results
if($("body.advsearch").length > 0) {
$("#loader + .container-fluid")
@@ -179,26 +179,24 @@ kthoom.ImageFile = function(file) {
};

function updateDirectionButtons(){
$("#right").show();
$("#left").show();
if (currentImage == 0 ) {
var left = 1;
var right = 1;
if (currentImage <= 0 ) {
if (settings.direction === 0) {
$("#right").show();
$("#left").hide();
left = 0;
} else {
$("#left").show();
$("#right").hide();
right = 0;
}
}
if ((currentImage + 1) >= Math.max(totalImages, imageFiles.length)) {
if (settings.direction === 0) {
$("#left").show();
$("#right").hide();
right = 0;
} else {
$("#right").show();
$("#left").hide();
}
left = 0;
}
}
left === 1 ? $("#left").show() : $("#left").hide();
right === 1 ? $("#right").show() : $("#right").hide();
}
function initProgressClick() {
$("#progress").click(function(e) {
cps/static/js/libs/bootstrap-datepicker/locales/bootstrap-datepicker.sk.min.js (vendored, new file)
@@ -0,0 +1 @@
!function(a){a.fn.datepicker.dates.sk={days:["Nedeľa","Pondelok","Utorok","Streda","Štvrtok","Piatok","Sobota"],daysShort:["Ned","Pon","Uto","Str","Štv","Pia","Sob"],daysMin:["Ne","Po","Ut","St","Št","Pia","So"],months:["Január","Február","Marec","Apríl","Máj","Jún","Júl","August","September","Október","November","December"],monthsShort:["Jan","Feb","Mar","Apr","Máj","Jún","Júl","Aug","Sep","Okt","Nov","Dec"],today:"Dnes",clear:"Vymazať",weekStart:1,format:"d.m.yyyy"}}(jQuery);
@@ -9,6 +9,7 @@
"wordSequences": "Das Passwort enthält Buchstabensequenzen",
"wordLowercase": "Bitte mindestens einen Kleinbuchstaben verwenden",
"wordUppercase": "Bitte mindestens einen Großbuchstaben verwenden",
"word": "Bitte mindestens einen Buchstaben verwenden",
"wordOneNumber": "Bitte mindestens eine Ziffern verwenden",
"wordOneSpecialChar": "Bitte mindestens ein Sonderzeichen verwenden",
"errorList": "Fehler:",
@@ -8,6 +8,7 @@
"wordRepetitions": "Too many repetitions",
"wordSequences": "Your password contains sequences",
"wordLowercase": "Use at least one lowercase character",
"word": "Use at least one character",
"wordUppercase": "Use at least one uppercase character",
"wordOneNumber": "Use at least one number",
"wordOneSpecialChar": "Use at least one special character",
@@ -144,13 +144,13 @@ try {

validation.wordTwoCharacterClasses = function(options, word, score) {
var specialCharRE = new RegExp(
'(.' + options.rules.specialCharClass + ')'
'(.' + options.rules.specialCharClass + ')', 'u'
);

if (
word.match(/([a-z].*[A-Z])|([A-Z].*[a-z])/) ||
(word.match(/([a-zA-Z])/) && word.match(/([0-9])/)) ||
(word.match(specialCharRE) && word.match(/[a-zA-Z0-9_]/))
word.match(/(\p{Ll}.*\p{Lu})|(\p{Lu}.*\p{Ll})/u) ||
(word.match(/(\p{Letter})/u) && word.match(/([0-9])/)) ||
(word.match(specialCharRE) && word.match(/[\p{Letter}0-9_]/u))
) {
return score;
}
@@ -202,11 +202,15 @@ try {
};

validation.wordLowercase = function(options, word, score) {
return word.match(/[a-z]/) && score;
return word.match(/\p{Ll}/u) && score;
};

validation.wordUppercase = function(options, word, score) {
return word.match(/[A-Z]/) && score;
return word.match(/\p{Lu}/u) && score;
};

validation.word = function(options, word, score) {
return word.match(/\p{Letter}/u) && score;
};

validation.wordOneNumber = function(options, word, score) {
@@ -218,7 +222,7 @@ try {
};

validation.wordOneSpecialChar = function(options, word, score) {
var specialCharRE = new RegExp(options.rules.specialCharClass);
var specialCharRE = new RegExp(options.rules.specialCharClass, 'u');
return word.match(specialCharRE) && score;
};
@@ -228,27 +232,27 @@ try {
options.rules.specialCharClass +
'.*' +
options.rules.specialCharClass +
')'
')', 'u'
);

return word.match(twoSpecialCharRE) && score;
};

validation.wordUpperLowerCombo = function(options, word, score) {
return word.match(/([a-z].*[A-Z])|([A-Z].*[a-z])/) && score;
return word.match(/(\p{Ll}.*\p{Lu})|(\p{Lu}.*\p{Ll})/u) && score;
};

validation.wordLetterNumberCombo = function(options, word, score) {
return word.match(/([a-zA-Z])/) && word.match(/([0-9])/) && score;
return word.match(/([\p{Letter}])/u) && word.match(/([0-9])/) && score;
};

validation.wordLetterNumberCharCombo = function(options, word, score) {
var letterNumberCharComboRE = new RegExp(
'([a-zA-Z0-9].*' +
'([\p{Letter}0-9].*' +
options.rules.specialCharClass +
')|(' +
options.rules.specialCharClass +
'.*[a-zA-Z0-9])'
'.*[\p{Letter}0-9])', 'u'
);

return word.match(letterNumberCharComboRE) && score;
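The password-strength changes above replace ASCII classes like [a-z] and [A-Z] with Unicode property escapes (\p{Ll}, \p{Lu}, \p{Letter}) plus the regex u flag, so passwords in non-Latin scripts score correctly. The same idea expressed in Python, using str methods instead of \p escapes (which the stdlib re module does not support) — helper names are mine, for illustration:

```python
def has_lowercase(word):
    # Unicode-aware analogue of /\p{Ll}/u: true for any cased lowercase
    # character such as 'ü' or 'я', not just a-z.
    return any(ch.islower() for ch in word)


def has_uppercase(word):
    # Analogue of /\p{Lu}/u.
    return any(ch.isupper() for ch in word)


def has_letter(word):
    # Analogue of /\p{Letter}/u; also true for uncased scripts like CJK.
    return any(ch.isalpha() for ch in word)
```

With ASCII-only classes, a password like "Пароль1!" would fail the lowercase and uppercase checks despite containing both cases.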
@@ -341,6 +345,7 @@ defaultOptions.rules.scores = {
wordTwoCharacterClasses: 2,
wordRepetitions: -25,
wordLowercase: 1,
word: 1,
wordUppercase: 3,
wordOneNumber: 3,
wordThreeNumbers: 5,
@@ -361,6 +366,7 @@ defaultOptions.rules.activated = {
wordTwoCharacterClasses: true,
wordRepetitions: true,
wordLowercase: true,
word: true,
wordUppercase: true,
wordOneNumber: true,
wordThreeNumbers: true,
@@ -372,7 +378,7 @@ defaultOptions.rules.activated = {
wordIsACommonPassword: true
};
defaultOptions.rules.raisePower = 1.4;
defaultOptions.rules.specialCharClass = "(?=.*?[^A-Za-z\s0-9])"; //'[!,@,#,$,%,^,&,*,?,_,~]';
defaultOptions.rules.specialCharClass = "(?=.*?[^\\p{Letter}\\s0-9])"; //'[!,@,#,$,%,^,&,*,?,_,~]';
// List taken from https://github.com/danielmiessler/SecLists (MIT License)
defaultOptions.rules.commonPasswords = [
'123456',
File diff suppressed because one or more lines are too long
@@ -20,7 +20,7 @@ function getPath() {
return jsFileLocation.substr(0, jsFileLocation.search("/static/js/libs/jquery.min.js")); // the js folder path
}

function postButton(event, action){
function postButton(event, action, location=""){
event.preventDefault();
var newForm = jQuery('<form>', {
"action": action,
@@ -30,7 +30,14 @@ function postButton(event, action){
'name': 'csrf_token',
'value': $("input[name=\'csrf_token\']").val(),
'type': 'hidden'
})).appendTo('body');
})).appendTo('body')
if(location !== "") {
newForm.append(jQuery('<input>', {
'name': 'location',
'value': location,
'type': 'hidden'
})).appendTo('body');
}
newForm.submit();
}
@@ -212,17 +219,20 @@ $("#delete_confirm").click(function(event) {
$( ".navbar" ).after( '<div class="row-fluid text-center" >' +
'<div id="flash_'+item.type+'" class="alert alert-'+item.type+'">'+item.message+'</div>' +
'</div>');

}
});
$("#books-table").bootstrapTable("refresh");
}
});
} else {
postButton(event, getPath() + "/delete/" + deleteId);
var loc = sessionStorage.getItem("back");
if (!loc) {
loc = $(this).data("back");
}
sessionStorage.removeItem("back");
postButton(event, getPath() + "/delete/" + deleteId, location=loc);
}
}

});

//triggered when modal is about to be shown
@@ -541,6 +551,7 @@ $(function() {
$.get(e.relatedTarget.href).done(function(content) {
$modalBody.html(content);
preFilters.remove(useCache);
$("#back").remove();
});
})
.on("hidden.bs.modal", function() {
@@ -621,8 +632,12 @@ $(function() {
"btnfullsync",
"GeneralDeleteModal",
$(this).data('value'),
function(value){
path = getPath() + "/ajax/fullsync"
function(userid) {
if (userid) {
path = getPath() + "/ajax/fullsync/" + userid
} else {
path = getPath() + "/ajax/fullsync"
}
$.ajax({
method:"post",
url: path,
@@ -24,7 +24,7 @@ $(document).ready(function() {
},

}, function () {
if ($('#password').data("verify")) {
if ($('#password').data("verify") === "True") {
// Initialized and ready to go
var options = {};
options.common = {
@@ -38,22 +38,20 @@ $(document).ready(function() {
showVerdicts: false,
}
options.rules= {
specialCharClass: "(?=.*?[^A-Za-z\\s0-9])",
specialCharClass: "(?=.*?[^\\p{Letter}\\s0-9])",
activated: {
wordNotEmail: false,
wordMinLength: $('#password').data("min"),
// wordMaxLength: false,
// wordInvalidChar: true,
wordSimilarToUsername: false,
wordSequences: false,
wordTwoCharacterClasses: false,
wordRepetitions: false,
wordLowercase: $('#password').data("lower") === "True" ? true : false,
wordUppercase: $('#password').data("upper") === "True" ? true : false,
word: $('#password').data("word") === "True" ? true : false,
wordOneNumber: $('#password').data("number") === "True" ? true : false,
wordThreeNumbers: false,
wordOneSpecialChar: $('#password').data("special") === "True" ? true : false,
// wordTwoSpecialChar: true,
wordUpperLowerCombo: false,
wordLetterNumberCombo: false,
wordLetterNumberCharCombo: false
@@ -19,8 +19,10 @@
import os
import re
from glob import glob
from shutil import copyfile
from shutil import copyfile, copyfileobj
from markupsafe import escape
from time import time
from uuid import uuid4

from sqlalchemy.exc import SQLAlchemyError
from flask_babel import lazy_gettext as N_
@@ -32,13 +34,15 @@ from cps.subproc_wrapper import process_open
from flask_babel import gettext as _
from cps.kobo_sync_status import remove_synced_book
from cps.ub import init_db_thread
from cps.file_helper import get_temp_dir

from cps.tasks.mail import TaskEmail
from cps import gdriveutils

from cps import gdriveutils, helper
from cps.constants import SUPPORTED_CALIBRE_BINARIES

log = logger.create()

current_milli_time = lambda: int(round(time() * 1000))

class TaskConvert(CalibreTask):
def __init__(self, file_path, book_id, task_message, settings, ereader_mail, user=None):
@@ -61,24 +65,33 @@ class TaskConvert(CalibreTask):
data = worker_db.get_book_format(self.book_id, self.settings['old_book_format'])
df = gdriveutils.getFileFromEbooksFolder(cur_book.path,
data.name + "." + self.settings['old_book_format'].lower())
df_cover = gdriveutils.getFileFromEbooksFolder(cur_book.path, "cover.jpg")
if df:
datafile = os.path.join(config.config_calibre_dir,
datafile = os.path.join(config.get_book_path(),
cur_book.path,
data.name + "." + self.settings['old_book_format'].lower())
if not os.path.exists(os.path.join(config.config_calibre_dir, cur_book.path)):
os.makedirs(os.path.join(config.config_calibre_dir, cur_book.path))
if df_cover:
datafile_cover = os.path.join(config.get_book_path(),
cur_book.path, "cover.jpg")
if not os.path.exists(os.path.join(config.get_book_path(), cur_book.path)):
os.makedirs(os.path.join(config.get_book_path(), cur_book.path))
df.GetContentFile(datafile)
if df_cover:
df_cover.GetContentFile(datafile_cover)
worker_db.session.close()
else:
# ToDo Include cover in error handling
error_message = _("%(format)s not found on Google Drive: %(fn)s",
format=self.settings['old_book_format'],
fn=data.name + "." + self.settings['old_book_format'].lower())
worker_db.session.close()
return error_message
return self._handleError(self, error_message)

filename = self._convert_ebook_format()
if config.config_use_google_drive:
os.remove(self.file_path + '.' + self.settings['old_book_format'].lower())
if df_cover:
os.remove(os.path.join(config.config_calibre_dir, cur_book.path, "cover.jpg"))

if filename:
if config.config_use_google_drive:
@@ -97,6 +110,7 @@ class TaskConvert(CalibreTask):
self.ereader_mail,
EmailText,
self.settings['body'],
id=self.book_id,
internal=True)
)
except Exception as ex:
@@ -112,7 +126,7 @@ class TaskConvert(CalibreTask):

# check to see if destination format already exists - or if book is in database
# if it does - mark the conversion task as complete and return a success
# this will allow send to E-Reader workflow to continue to work
# this will allow to send to E-Reader workflow to continue to work
if os.path.isfile(file_path + format_new_ext) or\
local_db.get_book_format(self.book_id, self.settings['new_book_format']):
log.info("Book id %d already converted to %s", book_id, format_new_ext)
@@ -152,7 +166,8 @@ class TaskConvert(CalibreTask):
if not os.path.exists(config.config_converterpath):
self._handleError(N_("Calibre ebook-convert %(tool)s not found", tool=config.config_converterpath))
return
check, error_message = self._convert_calibre(file_path, format_old_ext, format_new_ext)
has_cover = local_db.get_book(book_id).has_cover
check, error_message = self._convert_calibre(file_path, format_old_ext, format_new_ext, has_cover)

if check == 0:
cur_book = local_db.get_book(book_id)
@@ -194,8 +209,15 @@ class TaskConvert(CalibreTask):
return

def _convert_kepubify(self, file_path, format_old_ext, format_new_ext):
if config.config_embed_metadata and config.config_binariesdir:
tmp_dir, temp_file_name = helper.do_calibre_export(self.book_id, format_old_ext[1:])
filename = os.path.join(tmp_dir, temp_file_name + format_old_ext)
temp_file_path = tmp_dir
else:
filename = file_path + format_old_ext
temp_file_path = os.path.dirname(file_path)
quotes = [1, 3]
command = [config.config_kepubifypath, (file_path + format_old_ext), '-o', os.path.dirname(file_path)]
command = [config.config_kepubifypath, filename, '-o', temp_file_path, '-i']
try:
p = process_open(command, quotes)
except OSError as e:
@@ -209,13 +231,12 @@ class TaskConvert(CalibreTask):
if p.poll() is not None:
break

# ToD Handle
# process returncode
check = p.returncode

# move file
if check == 0:
converted_file = glob(os.path.join(os.path.dirname(file_path), "*.kepub.epub"))
converted_file = glob(os.path.splitext(filename)[0] + "*.kepub.epub")
if len(converted_file) == 1:
copyfile(converted_file[0], (file_path + format_new_ext))
os.unlink(converted_file[0])
@@ -224,16 +245,35 @@ class TaskConvert(CalibreTask):
folder=os.path.dirname(file_path))
return check, None

def _convert_calibre(self, file_path, format_old_ext, format_new_ext):
def _convert_calibre(self, file_path, format_old_ext, format_new_ext, has_cover):
try:
# Linux py2.7 encode as list without quotes no empty element for parameters
# linux py3.x no encode and as list without quotes no empty element for parameters
# windows py2.7 encode as string with quotes empty element for parameters is okay
# windows py 3.x no encode and as string with quotes empty element for parameters is okay
# separate handling for windows and linux
quotes = [1, 2]
# path_tmp_opf = self._embed_metadata()
if config.config_embed_metadata:
quotes = [3, 5]
tmp_dir = get_temp_dir()
calibredb_binarypath = os.path.join(config.config_binariesdir, SUPPORTED_CALIBRE_BINARIES["calibredb"])
my_env = os.environ.copy()
if config.config_calibre_split:
my_env['CALIBRE_OVERRIDE_DATABASE_PATH'] = os.path.join(config.config_calibre_dir, "metadata.db")
library_path = config.config_calibre_split_dir
else:
library_path = config.config_calibre_dir

opf_command = [calibredb_binarypath, 'show_metadata', '--as-opf', str(self.book_id),
'--with-library', library_path]
p = process_open(opf_command, quotes, my_env)
p.wait()
path_tmp_opf = os.path.join(tmp_dir, "metadata_" + str(uuid4()) + ".opf")
with open(path_tmp_opf, 'w') as fd:
copyfileobj(p.stdout, fd)

quotes = [1, 2, 4, 6]
command = [config.config_converterpath, (file_path + format_old_ext),
(file_path + format_new_ext)]
if config.config_embed_metadata:
command.extend(['--from-opf', path_tmp_opf])
if has_cover:
command.extend(['--cover', os.path.join(os.path.dirname(file_path), 'cover.jpg')])
quotes_index = 3
if config.config_calibre:
parameters = config.config_calibre.split(" ")
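The metadata-embedding branch above shells out to calibredb show_metadata --as-opf and captures its stdout into a temporary .opf file that is later handed to ebook-convert via --from-opf. A standalone sketch of the capture step (the binary and library paths are parameters here; only the calibredb flags come from the patch, and the patch additionally sets CALIBRE_OVERRIDE_DATABASE_PATH for split libraries):

```python
import os
import subprocess
import tempfile
import uuid


def export_opf(calibredb_binary, book_id, library_path):
    # Dump one book's metadata as an OPF document so a converter can
    # re-embed it later with ebook-convert's --from-opf option.
    opf_path = os.path.join(tempfile.mkdtemp(),
                            "metadata_%s.opf" % uuid.uuid4())
    with open(opf_path, "wb") as fd:
        subprocess.run(
            [calibredb_binary, "show_metadata", "--as-opf", str(book_id),
             "--with-library", library_path],
            stdout=fd, check=True)
    return opf_path
```

Writing stdout straight into the file mirrors the copyfileobj(p.stdout, fd) call in the diff, avoiding buffering the whole OPF in memory.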
@@ -276,9 +316,9 @@ class TaskConvert(CalibreTask):

def __str__(self):
if self.ereader_mail:
return "Convert {} {}".format(self.book_id, self.ereader_mail)
return "Convert Book {} and mail it to {}".format(self.book_id, self.ereader_mail)
else:
return "Convert {}".format(self.book_id)
return "Convert Book {}".format(self.book_id)

@property
def is_cancellable(self):
@@ -28,12 +28,11 @@ from email.message import EmailMessage
from email.utils import formatdate, parseaddr
from email.generator import Generator
from flask_babel import lazy_gettext as N_
from email.utils import formatdate

from cps.services.worker import CalibreTask
from cps.services import gmail
from cps.embed_helper import do_calibre_export
from cps import logger, config

from cps import gdriveutils
import uuid
@@ -110,7 +109,7 @@ class EmailSSL(EmailBase, smtplib.SMTP_SSL):

class TaskEmail(CalibreTask):
def __init__(self, subject, filepath, attachment, settings, recipient, task_message, text, internal=False):
def __init__(self, subject, filepath, attachment, settings, recipient, task_message, text, id=0, internal=False):
super(TaskEmail, self).__init__(task_message)
self.subject = subject
self.attachment = attachment
@@ -119,6 +118,7 @@ class TaskEmail(CalibreTask):
self.recipient = recipient
self.text = text
self.asyncSMTP = None
self.book_id = id
self.results = dict()

# from calibre code:
@@ -141,7 +141,7 @@ class TaskEmail(CalibreTask):
message['To'] = self.recipient
message['Subject'] = self.subject
message['Date'] = formatdate(localtime=True)
message['Message-Id'] = "{}@{}".format(uuid.uuid4(), self.get_msgid_domain()) # f"<{uuid.uuid4()}@{get_msgid_domain(from_)}>" # make_msgid('calibre-web')
message['Message-Id'] = "{}@{}".format(uuid.uuid4(), self.get_msgid_domain())
message.set_content(self.text.encode('UTF-8'), "text", "plain")
if self.attachment:
data = self._get_attachment(self.filepath, self.attachment)
@@ -161,6 +161,8 @@ class TaskEmail(CalibreTask):
try:
# create MIME message
msg = self.prepare_message()
if not msg:
return
if self.settings['mail_server_type'] == 0:
self.send_standard_email(msg)
else:
@ -236,10 +238,10 @@ class TaskEmail(CalibreTask):
|
|||
self.asyncSMTP = None
|
||||
self._progress = x
|
||||
|
||||
@classmethod
|
||||
def _get_attachment(cls, book_path, filename):
|
||||
def _get_attachment(self, book_path, filename):
|
||||
"""Get file as MIMEBase message"""
|
||||
calibre_path = config.config_calibre_dir
|
||||
calibre_path = config.get_book_path()
|
||||
extension = os.path.splitext(filename)[1][1:]
|
||||
if config.config_use_google_drive:
|
||||
df = gdriveutils.getFileFromEbooksFolder(book_path, filename)
|
||||
if df:
|
||||
|
@ -249,15 +251,22 @@ class TaskEmail(CalibreTask):
|
|||
df.GetContentFile(datafile)
|
||||
else:
|
||||
return None
|
||||
file_ = open(datafile, 'rb')
|
||||
data = file_.read()
|
||||
file_.close()
|
||||
if config.config_binariesdir and config.config_embed_metadata:
|
||||
data_path, data_file = do_calibre_export(self.book_id, extension)
|
||||
datafile = os.path.join(data_path, data_file + "." + extension)
|
||||
with open(datafile, 'rb') as file_:
|
||||
data = file_.read()
|
||||
os.remove(datafile)
|
||||
else:
|
||||
datafile = os.path.join(calibre_path, book_path, filename)
|
||||
try:
|
||||
file_ = open(os.path.join(calibre_path, book_path, filename), 'rb')
|
||||
data = file_.read()
|
||||
file_.close()
|
||||
if config.config_binariesdir and config.config_embed_metadata:
|
||||
data_path, data_file = do_calibre_export(self.book_id, extension)
|
||||
datafile = os.path.join(data_path, data_file + "." + extension)
|
||||
with open(datafile, 'rb') as file_:
|
||||
data = file_.read()
|
||||
if config.config_binariesdir and config.config_embed_metadata:
|
||||
os.remove(datafile)
|
||||
except IOError as e:
|
||||
log.error_or_exception(e, stacklevel=3)
|
||||
log.error('The requested file could not be read. Maybe wrong permissions?')
|
||||
|
|
|
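The attachment hunk above replaces manual `open()`/`close()` calls with a context manager and removes the temporary Calibre-exported copy after reading it. The read-then-remove pattern in isolation, as a minimal sketch (helper and file names here are illustrative, not Calibre-Web code):

```python
import os
import tempfile

def read_bytes(path, remove_after=False):
    # Read inside a context manager so the handle is closed even if
    # read() raises; optionally delete the file afterwards, as the new
    # code does for temporary exported copies.
    with open(path, 'rb') as fh:
        data = fh.read()
    if remove_after:
        os.remove(path)
    return data

# demo on a throwaway temp file
fd, demo = tempfile.mkstemp()
with os.fdopen(fd, 'wb') as fh:
    fh.write(b"ebook bytes")
payload = read_bytes(demo, remove_after=True)
```

Compared with the old `file_ = open(...)` / `file_.close()` sequence, an exception between open and close can no longer leak the file handle.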
@@ -17,26 +17,13 @@
 # along with this program. If not, see <http://www.gnu.org/licenses/>.

 import os
-from urllib.request import urlopen
 from lxml import etree

 from cps import config, db, gdriveutils, logger
 from cps.services.worker import CalibreTask
 from flask_babel import lazy_gettext as N_

-OPF_NAMESPACE = "http://www.idpf.org/2007/opf"
-PURL_NAMESPACE = "http://purl.org/dc/elements/1.1/"
-
-OPF = "{%s}" % OPF_NAMESPACE
-PURL = "{%s}" % PURL_NAMESPACE
-
-etree.register_namespace("opf", OPF_NAMESPACE)
-etree.register_namespace("dc", PURL_NAMESPACE)
-
-OPF_NS = {None: OPF_NAMESPACE}  # the default namespace (no prefix)
-NSMAP = {'dc': PURL_NAMESPACE, 'opf': OPF_NAMESPACE}
+from ..epub_helper import create_new_metadata_backup


 class TaskBackupMetadata(CalibreTask):

@@ -101,7 +88,8 @@ class TaskBackupMetadata(CalibreTask):
             self.calibre_db.session.close()

     def open_metadata(self, book, custom_columns):
-        package = self.create_new_metadata_backup(book, custom_columns)
+        # package = self.create_new_metadata_backup(book, custom_columns)
+        package = create_new_metadata_backup(book, custom_columns, self.export_language, self.translated_title)
         if config.config_use_google_drive:
             if not gdriveutils.is_gdrive_ready():
                 raise Exception('Google Drive is configured but not ready')

@@ -114,7 +102,7 @@ class TaskBackupMetadata(CalibreTask):
                                        True)
         else:
             # ToDo: Handle book folder not found or not readable
-            book_metadata_filepath = os.path.join(config.config_calibre_dir, book.path, 'metadata.opf')
+            book_metadata_filepath = os.path.join(config.get_book_path(), book.path, 'metadata.opf')
             # prepare finalize everything and output
             doc = etree.ElementTree(package)
             try:

@@ -123,93 +111,6 @@ class TaskBackupMetadata(CalibreTask):
         except Exception as ex:
             raise Exception('Writing Metadata failed with error: {} '.format(ex))

-    def create_new_metadata_backup(self, book, custom_columns):
-        # generate root package element
-        package = etree.Element(OPF + "package", nsmap=OPF_NS)
-        package.set("unique-identifier", "uuid_id")
-        package.set("version", "2.0")
-
-        # generate metadata element and all sub elements of it
-        metadata = etree.SubElement(package, "metadata", nsmap=NSMAP)
-        identifier = etree.SubElement(metadata, PURL + "identifier", id="calibre_id", nsmap=NSMAP)
-        identifier.set(OPF + "scheme", "calibre")
-        identifier.text = str(book.id)
-        identifier2 = etree.SubElement(metadata, PURL + "identifier", id="uuid_id", nsmap=NSMAP)
-        identifier2.set(OPF + "scheme", "uuid")
-        identifier2.text = book.uuid
-        title = etree.SubElement(metadata, PURL + "title", nsmap=NSMAP)
-        title.text = book.title
-        for author in book.authors:
-            creator = etree.SubElement(metadata, PURL + "creator", nsmap=NSMAP)
-            creator.text = str(author.name)
-            creator.set(OPF + "file-as", book.author_sort)  # ToDo Check
-            creator.set(OPF + "role", "aut")
-        contributor = etree.SubElement(metadata, PURL + "contributor", nsmap=NSMAP)
-        contributor.text = "calibre (5.7.2) [https://calibre-ebook.com]"
-        contributor.set(OPF + "file-as", "calibre")  # ToDo Check
-        contributor.set(OPF + "role", "bkp")
-
-        date = etree.SubElement(metadata, PURL + "date", nsmap=NSMAP)
-        date.text = '{d.year:04}-{d.month:02}-{d.day:02}T{d.hour:02}:{d.minute:02}:{d.second:02}'.format(d=book.pubdate)
-        if book.comments and book.comments[0].text:
-            for b in book.comments:
-                description = etree.SubElement(metadata, PURL + "description", nsmap=NSMAP)
-                description.text = b.text
-        for b in book.publishers:
-            publisher = etree.SubElement(metadata, PURL + "publisher", nsmap=NSMAP)
-            publisher.text = str(b.name)
-        if not book.languages:
-            language = etree.SubElement(metadata, PURL + "language", nsmap=NSMAP)
-            language.text = self.export_language
-        else:
-            for b in book.languages:
-                language = etree.SubElement(metadata, PURL + "language", nsmap=NSMAP)
-                language.text = str(b.lang_code)
-        for b in book.tags:
-            subject = etree.SubElement(metadata, PURL + "subject", nsmap=NSMAP)
-            subject.text = str(b.name)
-        etree.SubElement(metadata, "meta", name="calibre:author_link_map",
-                         content="{" + ", ".join(['"' + str(a.name) + '": ""' for a in book.authors]) + "}",
-                         nsmap=NSMAP)
-        for b in book.series:
-            etree.SubElement(metadata, "meta", name="calibre:series",
-                             content=str(str(b.name)),
-                             nsmap=NSMAP)
-        if book.series:
-            etree.SubElement(metadata, "meta", name="calibre:series_index",
-                             content=str(book.series_index),
-                             nsmap=NSMAP)
-        if len(book.ratings) and book.ratings[0].rating > 0:
-            etree.SubElement(metadata, "meta", name="calibre:rating",
-                             content=str(book.ratings[0].rating),
-                             nsmap=NSMAP)
-        etree.SubElement(metadata, "meta", name="calibre:timestamp",
-                         content='{d.year:04}-{d.month:02}-{d.day:02}T{d.hour:02}:{d.minute:02}:{d.second:02}'.format(
-                             d=book.timestamp),
-                         nsmap=NSMAP)
-        etree.SubElement(metadata, "meta", name="calibre:title_sort",
-                         content=book.sort,
-                         nsmap=NSMAP)
-        sequence = 0
-        for cc in custom_columns:
-            value = None
-            extra = None
-            cc_entry = getattr(book, "custom_column_" + str(cc.id))
-            if cc_entry.__len__():
-                value = [c.value for c in cc_entry] if cc.is_multiple else cc_entry[0].value
-                extra = cc_entry[0].extra if hasattr(cc_entry[0], "extra") else None
-            etree.SubElement(metadata, "meta", name="calibre:user_metadata:#{}".format(cc.label),
-                             content=cc.to_json(value, extra, sequence),
-                             nsmap=NSMAP)
-            sequence += 1
-
-        # generate guide element and all sub elements of it
-        # Title is translated from default export language
-        guide = etree.SubElement(package, "guide")
-        etree.SubElement(guide, "reference", type="cover", title=self.translated_title, href="cover.jpg")
-
-        return package
-
     @property
     def name(self):
         return "Metadata backup"

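The hunk above moves OPF generation out of the task into `create_new_metadata_backup` in `epub_helper`. The core technique — building a namespaced `<package>`/`<metadata>` tree and serializing it — can be sketched with the standard library's `xml.etree.ElementTree` (the project itself uses lxml; the sample values below are invented):

```python
import xml.etree.ElementTree as ET

OPF_NS = "http://www.idpf.org/2007/opf"
DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("opf", OPF_NS)
ET.register_namespace("dc", DC_NS)

def minimal_opf(book_id, uuid_, title):
    # Root <package> with the same unique-identifier/version attributes
    # the deleted method set.
    package = ET.Element("{%s}package" % OPF_NS,
                         {"unique-identifier": "uuid_id", "version": "2.0"})
    metadata = ET.SubElement(package, "{%s}metadata" % OPF_NS)
    ident = ET.SubElement(metadata, "{%s}identifier" % DC_NS, {"id": "calibre_id"})
    ident.set("{%s}scheme" % OPF_NS, "calibre")
    ident.text = str(book_id)
    ident2 = ET.SubElement(metadata, "{%s}identifier" % DC_NS, {"id": "uuid_id"})
    ident2.set("{%s}scheme" % OPF_NS, "uuid")
    ident2.text = uuid_
    t = ET.SubElement(metadata, "{%s}title" % DC_NS)
    t.text = title
    return ET.tostring(package, encoding="unicode")

opf = minimal_opf(42, "2853dacf-ed79-42f5-8e8a-a7bb3d1ae6a2", "Example Book")
```

`register_namespace` maps the URIs to the `opf:`/`dc:` prefixes on serialization, mirroring the `etree.register_namespace` calls the hunk removes from this file.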
@@ -0,0 +1,47 @@
+# -*- coding: utf-8 -*-
+
+# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
+# Copyright (C) 2023 OzzieIsaacs
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+from urllib.request import urlopen
+
+from flask_babel import lazy_gettext as N_
+
+from cps import logger, file_helper
+from cps.services.worker import CalibreTask
+
+
+class TaskDeleteTempFolder(CalibreTask):
+    def __init__(self, task_message=N_('Delete temp folder contents')):
+        super(TaskDeleteTempFolder, self).__init__(task_message)
+        self.log = logger.create()
+
+    def run(self, worker_thread):
+        try:
+            file_helper.del_temp_dir()
+        except FileNotFoundError:
+            pass
+        except (PermissionError, OSError) as e:
+            self.log.error("Error deleting temp folder: {}".format(e))
+        self._handleSuccess()
+
+    @property
+    def name(self):
+        return "Delete Temp Folder"
+
+    @property
+    def is_cancellable(self):
+        return False

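The new `TaskDeleteTempFolder.run()` above treats a missing folder as success and only logs permission problems, so the background worker always marks the task done. The same best-effort pattern in isolation (the helper name and demo folder are illustrative, not the project's `file_helper`):

```python
import os
import shutil
import tempfile

def delete_temp_contents(path):
    # Missing folder: nothing to do. Permission/OS errors: report but
    # do not raise, so the caller can still record the task as finished.
    try:
        shutil.rmtree(path)
    except FileNotFoundError:
        pass
    except (PermissionError, OSError) as e:
        print("Error deleting temp folder: {}".format(e))

demo = tempfile.mkdtemp()
open(os.path.join(demo, "leftover.tmp"), "w").close()
delete_temp_contents(demo)   # removes folder and contents
delete_temp_contents(demo)   # second call: FileNotFoundError swallowed
```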
@@ -209,7 +209,7 @@ class TaskGenerateCoverThumbnails(CalibreTask):
                 if stream is not None:
                     stream.close()
         else:
-            book_cover_filepath = os.path.join(config.config_calibre_dir, book.path, 'cover.jpg')
+            book_cover_filepath = os.path.join(config.get_book_path(), book.path, 'cover.jpg')
             if not os.path.isfile(book_cover_filepath):
                 raise Exception('Book cover file not found')

@@ -404,7 +404,7 @@ class TaskGenerateSeriesThumbnails(CalibreTask):
                 if stream is not None:
                     stream.close()

-        book_cover_filepath = os.path.join(config.config_calibre_dir, book.path, 'cover.jpg')
+        book_cover_filepath = os.path.join(config.get_book_path(), book.path, 'cover.jpg')
         if not os.path.isfile(book_cover_filepath):
             raise Exception('Book cover file not found')

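The repeated `config.config_calibre_dir` → `config.get_book_path()` replacement across these tasks lines up with the new "Separate Book Files from Library" option added in the config template below. A simplified sketch of what such an accessor could look like — this is an assumption about the real method, using a hypothetical stand-in class, not Calibre-Web's actual config object:

```python
class LibraryConfig:
    # Simplified stand-in; the real get_book_path() may differ in detail.
    def __init__(self, calibre_dir, split_dir=None):
        self.config_calibre_dir = calibre_dir          # folder with metadata.db
        self.config_calibre_split_dir = split_dir      # optional split book folder

    def get_book_path(self):
        # Prefer the split book folder when one is configured,
        # otherwise fall back to the library folder.
        return self.config_calibre_split_dir or self.config_calibre_dir

plain = LibraryConfig("/library")
split = LibraryConfig("/library", "/books")
```

Centralizing the lookup means every caller (mail, metadata backup, thumbnails) picks up the split-library setting without repeating the fallback logic.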
@@ -32,7 +32,7 @@
     </div>
     <div class="row display-flex">
       {% for entry in entries %}
-        <div id="books" class="col-sm-3 col-lg-2 col-xs-6 book">
+        <div id="books" class="col-sm-3 col-lg-2 col-xs-6 book session">
           <div class="cover">
             <a href="{{ url_for('web.show_book', book_id=entry.Books.id) }}" {% if simple==false %}data-toggle="modal" data-target="#bookDetailsModal" data-remote="false"{% endif %}>
               <span class="img" title="{{entry.Books.title}}">

@@ -99,7 +99,7 @@
     <h3>{{_("More by")}} {{ author.name.replace('|',',') }}</h3>
     <div class="row">
       {% for entry in other_books %}
-        <div class="col-sm-3 col-lg-2 col-xs-6 book">
+        <div class="col-sm-3 col-lg-2 col-xs-6 book session">
           <div class="cover">
             <a href="https://www.goodreads.com/book/show/{{ entry.gid['#text'] }}" target="_blank" rel="noopener">
               <img title="{{entry.title}}" src="{{ entry.image_url }}" />

@@ -16,6 +16,18 @@
               <button type="button" data-toggle="modal" id="calibre_modal_path" data-link="config_calibre_dir" data-filefilter="metadata.db" data-target="#fileModal" id="library_path" class="btn btn-default"><span class="glyphicon glyphicon-folder-open"></span></button>
             </span>
           </div>
+          <div class="form-group required">
+            <input type="checkbox" id="config_calibre_split" name="config_calibre_split" data-control="split_settings" data-t ="{{ config.config_calibre_split_dir }}" {% if config.config_calibre_split %}checked{% endif %} >
+            <label for="config_calibre_split">{{_('Separate Book Files from Library')}}</label>
+          </div>
+          <div data-related="split_settings">
+            <div class="form-group required input-group">
+              <input type="text" class="form-control" id="config_calibre_split_dir" name="config_calibre_split_dir" value="{% if config.config_calibre_split_dir != None %}{{ config.config_calibre_split_dir }}{% endif %}" autocomplete="off">
+              <span class="input-group-btn">
+                <button type="button" data-toggle="modal" id="calibre_modal_split_path" data-link="config_calibre_split_dir" data-filefilter="" data-target="#fileModal" id="book_path" class="btn btn-default"><span class="glyphicon glyphicon-folder-open"></span></button>
+              </span>
+            </div>
+          </div>
           {% if feature_support['gdrive'] %}
             <div class="form-group required">
               <input type="checkbox" id="config_use_google_drive" name="config_use_google_drive" data-control="gdrive_settings" {% if config.config_use_google_drive %}checked{% endif %} >

@@ -103,6 +103,10 @@
           <input type="checkbox" id="config_unicode_filename" name="config_unicode_filename" {% if config.config_unicode_filename %}checked{% endif %}>
           <label for="config_unicode_filename">{{_('Convert non-English characters in title and author while saving to disk')}}</label>
         </div>
+        <div class="form-group">
+          <input type="checkbox" id="config_embed_metadata" name="config_embed_metadata" {% if config.config_embed_metadata %}checked{% endif %}>
+          <label for="config_embed_metadata">{{_('Embed Metadata to Ebook File on Download/Conversion/e-mail (needs Calibre/Kepubify binaries)')}}</label>
+        </div>
         <div class="form-group">
           <input type="checkbox" id="config_uploading" data-control="upload_settings" name="config_uploading" {% if config.config_uploading %}checked{% endif %}>
           <label for="config_uploading">{{_('Enable Uploads')}} {{_('(Please ensure that users also have upload permissions)')}}</label>

@@ -323,12 +327,12 @@
         </div>
         <div id="collapsefive" class="panel-collapse collapse">
           <div class="panel-body">
-            <label for="config_converterpath">{{_('Path to Calibre E-Book Converter')}}</label>
+            <label for="config_binariesdir">{{_('Path to Calibre Binaries')}}</label>
             <div class="form-group input-group">
-              <input type="text" class="form-control" id="config_converterpath" name="config_converterpath" value="{% if config.config_converterpath != None %}{{ config.config_converterpath }}{% endif %}" autocomplete="off">
-              <span class="input-group-btn">
-                <button type="button" data-toggle="modal" id="converter_modal_path" data-link="config_converterpath" data-target="#fileModal" class="btn btn-default"><span class="glyphicon glyphicon-folder-open"></span></button>
-              </span>
+              <input type="text" class="form-control" id="config_binariesdir" name="config_binariesdir" value="{% if config.config_binariesdir != None %}{{ config.config_binariesdir }}{% endif %}" autocomplete="off">
+              <span class="input-group-btn">
+                <button type="button" data-toggle="modal" id="binaries_modal_path" data-link="config_binariesdir" data-folderonly="true" data-target="#fileModal" class="btn btn-default"><span class="glyphicon glyphicon-folder-open"></span></button>
+              </span>
             </div>
             <div class="form-group">
               <label for="config_calibre">{{_('Calibre E-Book Converter Settings')}}</label>

@@ -368,6 +372,16 @@
           <input type="checkbox" id="config_ratelimiter" name="config_ratelimiter" {% if config.config_ratelimiter %}checked{% endif %}>
           <label for="config_ratelimiter">{{_('Limit failed login attempts')}}</label>
         </div>
+        <div data-related="ratelimiter_settings">
+          <div class="form-group" style="margin-left:10px;">
+            <label for="config_calibre">{{_('Configure Backend for Limiter')}}</label>
+            <input type="text" class="form-control" id="config_limiter_uri" name="config_limiter_uri" value="{% if config.config_limiter_uri != None %}{{ config.config_limiter_uri }}{% endif %}" autocomplete="off">
+          </div>
+          <div class="form-group" style="margin-left:10px;">
+            <label for="config_calibre">{{_('Options for Limiter')}}</label>
+            <input type="text" class="form-control" id="config_limiter_options" name="config_limiter_options" value="{% if config.config_limiter_options != None %}{{ config.config_limiter_options }}{% endif %}" autocomplete="off">
+          </div>
+        </div>
         <div class="form-group">
           <label for="config_session">{{_('Session protection')}}</label>
           <select name="config_session" id="config_session" class="form-control">

@@ -396,6 +410,10 @@
           <input type="checkbox" id="config_password_upper" name="config_password_upper" {% if config.config_password_upper %}checked{% endif %}>
           <label for="config_password_upper">{{_('Enforce uppercase characters')}}</label>
         </div>
+        <div class="form-group" style="margin-left:10px;">
+          <input type="checkbox" id="config_password_character" name="config_password_character" {% if config.config_password_character %}checked{% endif %}>
+          <label for="config_password_lower">{{_('Enforce characters (needed For Chinese/Japanese/Korean Characters)')}}</label>
+        </div>
         <div class="form-group" style="margin-left:10px;">
           <input type="checkbox" id="config_password_special" name="config_password_special" {% if config.config_password_special %}checked{% endif %}>
           <label for="config_password_special">{{_('Enforce special characters')}}</label>

@@ -205,8 +205,8 @@

         {% for c in cc %}
-          <div class="real_custom_columns">
-            {% if entry['custom_column_' ~ c.id]|length > 0 %}
+          {% if entry['custom_column_' ~ c.id]|length > 0 %}
+            <div class="real_custom_columns">
               {{ c.name }}:
               {% for column in entry['custom_column_' ~ c.id] %}
                 {% if c.datatype == 'rating' %}

@@ -235,8 +235,9 @@
                 {% endif %}
               {% endif %}
             {% endfor %}
-            {% endif %}
-          </div>
+            </div>
+
+          {% endif %}
         {% endfor %}
       {% endif %}
       {% if not current_user.is_anonymous %}

@@ -332,15 +333,15 @@

       {% endif %}
       {% if current_user.role_edit() %}
         <div class="btn-toolbar" role="toolbar">
           <div class="col-sm-12">
             <div class="btn-group" role="group" aria-label="Edit/Delete book">
               <a href="{{ url_for('edit-book.show_edit_book', book_id=entry.id) }}"
                  class="btn btn-sm btn-primary" id="edit_book" role="button"><span
                  class="glyphicon glyphicon-edit"></span> {{ _('Edit Metadata') }}</a>
             </div>
           </div>
           <div class="btn btn-default" data-back="{{ url_for('web.index') }}" id="back">{{_('Cancel')}}</div>
         </div>
       {% endif %}
     </div>
   </div>
 </div>
</div>

@@ -366,4 +367,3 @@
 </script>

 {% endblock %}

@@ -1,5 +1,6 @@
 <?xml version="1.0" encoding="UTF-8"?>
 <feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/terms/" xmlns:dcterms="http://purl.org/dc/terms/">
+  <icon>{{ url_for('static', filename='favicon.ico') }}</icon>
   <id>urn:uuid:2853dacf-ed79-42f5-8e8a-a7bb3d1ae6a2</id>
   <updated>{{ current_time }}</updated>
   <link rel="self"

@@ -6,7 +6,7 @@
   <h2 class="random-books">{{_('Discover (Random Books)')}}</h2>
   <div class="row display-flex">
     {% for entry in random %}
-      <div class="col-sm-3 col-lg-2 col-xs-6 book" id="books_rand">
+      <div class="col-sm-3 col-lg-2 col-xs-6 book session" id="books_rand">
        <div class="cover">
          <a href="{{ url_for('web.show_book', book_id=entry.Books.id) }}" {% if simple==false %}data-toggle="modal" data-target="#bookDetailsModal" data-remote="false"{% endif %}>
            <span class="img" title="{{ entry.Books.title }}">

@@ -89,7 +89,7 @@
   <div class="row display-flex">
     {% if entries[0] %}
       {% for entry in entries %}
-        <div class="col-sm-3 col-lg-2 col-xs-6 book" id="books">
+        <div class="col-sm-3 col-lg-2 col-xs-6 book session" id="books">
          <div class="cover">
            <a href="{{ url_for('web.show_book', book_id=entry.Books.id) }}" {% if simple==false %}data-toggle="modal" data-target="#bookDetailsModal" data-remote="false"{% endif %}>
              <span class="img" title="{{ entry.Books.title }}">

@@ -1,5 +1,6 @@
 <?xml version="1.0" encoding="UTF-8"?>
 <feed xmlns="http://www.w3.org/2005/Atom">
+  <icon>{{ url_for('static', filename='favicon.ico') }}</icon>
   <id>urn:uuid:2853dacf-ed79-42f5-8e8a-a7bb3d1ae6a2</id>
   <updated>{{ current_time }}</updated>
   <link rel="self" href="{{url_for('opds.feed_index')}}" type="application/atom+xml;profile=opds-catalog;kind=navigation"/>

@@ -41,7 +41,7 @@

   <div class="row display-flex">
     {% for entry in entries %}
-      <div class="col-sm-3 col-lg-2 col-xs-6 book">
+      <div class="col-sm-3 col-lg-2 col-xs-6 book session">
        <div class="cover">
          {% if entry.Books.has_cover is defined %}
            <a href="{{ url_for('web.show_book', book_id=entry.Books.id) }}" {% if simple==false %}data-toggle="modal" data-target="#bookDetailsModal" data-remote="false"{% endif %}>

@@ -41,7 +41,8 @@
   <div class="form-group">
     <label for="read_status">{{_('Read Status')}}</label>
     <select name="read_status" id="read_status" class="form-control">
-      <option value="" selected></option>
+      <option value="Any" selected>{{_('Any')}}</option>
+      <option value="">{{_('Empty')}}</option>
       <option value="True" >{{_('Yes')}}</option>
       <option value="False" >{{_('No')}}</option>
     </select>

@@ -31,7 +31,7 @@
   {% endif %}
   <div class="row display-flex">
     {% for entry in entries %}
-      <div class="col-sm-3 col-lg-2 col-xs-6 book">
+      <div class="col-sm-3 col-lg-2 col-xs-6 book session">
        <div class="cover">
          <a href="{{ url_for('web.show_book', book_id=entry.Books.id) }}" {% if simple==false %}data-toggle="modal" data-target="#bookDetailsModal" data-remote="false"{% endif %}>
          <span class="img" title="{{entry.Books.title}}" >

@@ -19,13 +19,6 @@
     <link href="{{ url_for('static', filename='css/caliBlur.css') }}" rel="stylesheet" media="screen">
     <link href="{{ url_for('static', filename='css/caliBlur_override.css') }}" rel="stylesheet" media="screen">
   {% endif %}
-  <!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
-  <!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
-  <!--[if lt IE 9]>
-  <script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
-  <script src="https://oss.maxcdn.com/libs/respond.js/1.3.0/respond.min.js"></script>
-  <![endif]-->
-
   {% block header %}{% endblock %}
   </head>
   <body class="{{ page }} shelf-down">

@@ -21,7 +21,7 @@
   {% endif %}
   <div class="form-group">
     <label for="password">{{_('Password')}}</label>
-    <input type="password" class="form-control" name="password" id="password" data-lang="{{ current_user.locale }}" data-verify="{{ config.config_password_policy }}" {% if config.config_password_policy %} data-min={{ config.config_password_min_length }} data-special={{ config.config_password_special }} data-upper={{ config.config_password_upper }} data-lower={{ config.config_password_lower }} data-number={{ config.config_password_number }}{% endif %} value="" autocomplete="off">
+    <input type="password" class="form-control" name="password" id="password" data-lang="{{ current_user.locale }}" data-verify="{{ config.config_password_policy }}" {% if config.config_password_policy %} data-min={{ config.config_password_min_length }} data-word={{ config.config_password_character }} data-special={{ config.config_password_special }} data-upper={{ config.config_password_upper }} data-lower={{ config.config_password_lower }} data-number={{ config.config_password_number }}{% endif %} value="" autocomplete="off">
   </div>
 {% endif %}
 <div class="form-group">

@@ -67,7 +67,7 @@
   <div class="btn btn-danger" id="config_delete_kobo_token" data-value="{{ content.id }}" data-remote="false" {% if not content.remote_auth_token.first() %} style="display: none;" {% endif %}>{{_('Delete')}}</div>
 </div>
 <div class="form-group col">
-  <div class="btn btn-default" id="kobo_full_sync" data-value="{{ content.id }}" {% if not content.remote_auth_token.first() %} style="display: none;" {% endif %}>{{_('Force full kobo sync')}}</div>
+  <div class="btn btn-default" id="kobo_full_sync" data-value="{% if current_user.role_admin() %}{{ content.id }}{% else %}0{% endif %}" {% if not content.remote_auth_token.first() %} style="display: none;" {% endif %}>{{_('Force full kobo sync')}}</div>
 </div>
 {% endif %}
 <div class="col-sm-6">

@@ -177,7 +177,7 @@
 <script src="{{ url_for('static', filename='js/libs/bootstrap-table/bootstrap-editable.min.js') }}"></script>
 <script src="{{ url_for('static', filename='js/libs/pwstrength/i18next.min.js') }}"></script>
 <script src="{{ url_for('static', filename='js/libs/pwstrength/i18nextHttpBackend.min.js') }}"></script>
-<script src="{{ url_for('static', filename='js/libs/pwstrength/pwstrength-bootstrap.min.js') }}"></script>
+<script src="{{ url_for('static', filename='js/libs/pwstrength/pwstrength-bootstrap.js') }}"></script>
 <script src="{{ url_for('static', filename='js/password.js') }}"></script>
 <script src="{{ url_for('static', filename='js/table.js') }}"></script>
 {% endblock %}

Binary file not shown.
File diff suppressed because it is too large.
Some files were not shown because too many files have changed in this diff.