Compare commits
2868 Commits
| Author | SHA1 | Date |
| --- | --- | --- |
aptalca | 7aa122e6cd | |
Ozzie Isaacs | cfdc07e6b2 | |
Ozzie Isaacs | 7295ebaee1 | |
Ozzie Isaacs | d74ea7dcf2 | |
Ozzie Isaacs | c73ec52842 | |
Ozzie Isaacs | 6022f48040 | |
Ozzie Isaacs | e0be168779 | |
Ozzie Isaacs | d3118c0aa9 | |
mapi68 | 04370944b9 | |
yunimoo | d6aa29b095 | |
Ozzie Isaacs | ab11919c0b | |
Ozzie Isaacs | 014a247847 | |
Ozzie Isaacs | 58c269881f | |
Ozzie Isaacs | e91dd5729a | |
Ozzie Isaacs | 6760d6971c | |
Kreeblah | ad05534ed2 | |
Ozzie Isaacs | ee451fb236 | |
Ozzie Isaacs | ab13fcf60c | |
Ozzie Isaacs | c7fa2ac71a | |
Ozzie Isaacs | 6f60ec7b99 | |
Ozzie Isaacs | 93f1ccbea9 | |
Ozzie Isaacs | 894fd9d30a | |
Ozzie Isaacs | 2b1efdb50e | |
Ozzie Isaacs | fc9a9cb9ac | |
Ozzie Isaacs | e99be72ff7 | |
Ozzie Isaacs | f1ceff2b52 | |
Ozzie Isaacs | 7e85894b3a | |
Ozzie Isaacs | 5c49c8cdd7 | |
Ozzie Isaacs | c8c3b3cba3 | |
Ozzie Isaacs | 4911843146 | |
Ozzie Isaacs | 2c37546598 | |
Ozzie Isaacs | 25a875b628 | |
Ozzie Isaacs | 506f0a33cf | |
Ozzie Isaacs | 921caf6716 | |
Ozzie Isaacs | 60ed1904f5 | |
Ozzie Isaacs | cb62d36e44 | |
Ozzie Isaacs | 737d758362 | |
Ozzie Isaacs | 8e27912ff5 | |
Ozzie Isaacs | 3a603cec22 | |
eggy | b1d7badef4 | |
Ozzie Isaacs | e591211b57 | |
Ozzie Isaacs | a305c35de4 | |
growfrow | 51d306b11d | |
mapi68 | abb418fe86 | |
Ozzie Isaacs | 0925f34557 | |
Ozzie Isaacs | 15952a764c | |
Daniel Edwards | 28c810b514 | |
Daniel Edwards | cfe61a26d0 | |
Ozzie Isaacs | fcc95bd895 | |
Ghighi Eftimie | 964e7de920 | |
Ozzie Isaacs | 14b578dd3a | |
Ozzie Isaacs | becb84a73d | |
Ozzie Isaacs | c901ccbb01 | |
Ozzie Isaacs | f987fb0aba | |
Ozzie Isaacs | c30460d76b | |
Ozzie Isaacs | 97380b4b3f | |
Ozzie Isaacs | 4fbd064b85 | |
Ozzie Isaacs | abbd9a5888 | |
Ozzie Isaacs | e860b4e097 | |
Ozzie Isaacs | 23a8a4657d | |
Ozzie Isaacs | b38a1b2298 | |
Ozzie Isaacs | 0ebfba8d05 | |
Ozzie Isaacs | 990ad8d72d | |
Ozzie Isaacs | c3fc125501 | |
Ozzie Isaacs | 3c4ed0de1a | |
Ozzie Isaacs | 117c92233d | |
Ozzie Isaacs | 2ba14acf4f | |
Ozzie Isaacs | 80a2d07009 | |
Ozzie Isaacs | ff9e1ed7c8 | |
Ozzie Isaacs | 8e5bee5352 | |
Ozzie Isaacs | d659430116 | |
Ozzie Isaacs | 859dac462b | |
Ozzie Isaacs | 2bea4dbd06 | |
Ozzie Isaacs | 0180b4b6b5 | |
Ozzie Isaacs | 2bfb02c448 | |
Ozzie Isaacs | 4864254e37 | |
Ozzie Isaacs | 09dce28a0e | |
Ozzie Isaacs | e55d09d8bb | |
Ozzie Isaacs | 92c162b2fd | |
Ozzie Isaacs | 57fb5001e2 | |
Ozzie Isaacs | 64e5314148 | |
Ozzie Isaacs | 873602a5c9 | |
Ozzie Isaacs | 09e966e18a | |
Ozzie Isaacs | f7718cae0c | |
Ozzie Isaacs | 90e728516c | |
Ozzie Isaacs | 7c04b68c88 | |
Ozzie Isaacs | 8549689a0f | |
Ozzie Isaacs | d8f5c17518 | |
mapi68 | 05367d2df5 | |
Webysther Sperandio | eb6fbfc90c | |
Ozzie Isaacs | c2267b6902 | |
Ozzie Isaacs | 0e5520a261 | |
Ozzie Isaacs | 6f5e9f167e | |
Ozzie Isaacs | ce83fb6816 | |
Ozzie Isaacs | fbfb7adef6 | |
Ozzie Isaacs | cc52ad5d27 | |
Ozzie Isaacs | 706b9c4013 | |
Ozzie Isaacs | 6972c1b841 | |
Ozzie Isaacs | b9c329535d | |
Ozzie Isaacs | 8fdf7a94ab | |
Ozzie Isaacs | 31a344b410 | |
Ozzie Isaacs | 3814fbf08f | |
Ozzie Isaacs | ffc13a5565 | |
Ozzie Isaacs | 74c61d9685 | |
Ozzie Isaacs | b8031cd53f | |
Ozzie Isaacs | 898e76fc37 | |
Ozzie Isaacs | af71a1a2ed | |
Ozzie Isaacs | e0327db08f | |
Ozzie Isaacs | bf2ac97c47 | |
Ozzie Isaacs | 902fa254b0 | |
Ozzie Isaacs | f0cc93abd3 | |
Ozzie Isaacs | 977f07364b | |
Ozzie Isaacs | 00acd745f4 | |
Ozzie Isaacs | d272f43424 | |
Whatever Cloud | 7a8d8375d0 | |
Johannes H | 3aa75ef4a7 | |
Ozzie Isaacs | 25fb8d934f | |
GONCALVES Nelson (T0025615) | f08c8faaff | |
Michiel Cornelissen | bc0ebdb78d | |
Ozzie Isaacs | 2a4b3cb7af | |
Ozzie Isaacs | 4401cf66d1 | |
Ozzie Isaacs | d353c9b6d3 | |
Ozzie Isaacs | 0aba96c032 | |
Ozzie Isaacs | c60b7e9192 | |
Ozzie Isaacs | 23033255b8 | |
Ozzie Isaacs | 31c8909dea | |
Ozzie Isaacs | 9ef89dbcc3 | |
Ozzie Isaacs | 1086296d1d | |
Ozzie Isaacs | d341faf204 | |
Ozzie Isaacs | 2334e8f9c9 | |
Ozzie Isaacs | 90ad570578 | |
Ozzie Isaacs | fd90d6e375 | |
Ozzie Isaacs | 7fbbb85f47 | |
Ozzie Isaacs | 52c7557878 | |
Ozzie Isaacs | 794cd354ca | |
Ghighi Eftimie | 389e3f09f5 | |
Ghighi Eftimie | 285979b68d | |
Ozzie Isaacs | 3a012c900e | |
Ozzie Isaacs | ec45de3212 | |
Ozzie Isaacs | f644a2a136 | |
Russell | 01108aac42 | |
Russell Troxel | 400c745692 | |
ye | 9841a4d068 | |
Ozzie Isaacs | 7fd1d10fca | |
Ozzie Isaacs | 4f6bbfa8b8 | |
Ozzie Isaacs | cf6810db87 | |
Ozzie Isaacs | 5afff2231e | |
Ozzie Isaacs | d611582b78 | |
Ozzie Isaacs | 3bbd8ee27e | |
Ozzie Isaacs | f78e0ff938 | |
Ozzie Isaacs | bd71391bfb | |
Ozzie Isaacs | 20b2936cc1 | |
Ozzie Isaacs | 19825a635a | |
Ozzie Isaacs | 0d611d35de | |
Ozzie Isaacs | effd026fe2 | |
Ozzie Isaacs | d68e57c4fc | |
Ozzie Isaacs | 184ce23351 | |
Ozzie Isaacs | 2fbc3da451 | |
Ozzie Isaacs | fad6550ff1 | |
Ozzie Isaacs | b7aaa0f24d | |
Ozzie Isaacs | 5040bb762c | |
Ozzie Isaacs | 55deca1ec8 | |
Ozzie Isaacs | 40a16f4717 | |
Ozzie Isaacs | d26e60724a | |
Ozzie Isaacs | d877fa1c68 | |
Ozzie Isaacs | d55bafdfa9 | |
Ozzie Isaacs | a2a431802a | |
Ozzie Isaacs | b2e4907165 | |
Ozzie Isaacs | 6c2e40f544 | |
Ozzie Isaacs | 5e3d0ec2ad | |
Ozzie Isaacs | c550d6c90d | |
bacpd | 3b1d0b4013 | |
Ozzie Isaacs | 3d07efbb4f | |
mapi68 | c0ae5bb381 | |
Ozzie Isaacs | 6e755a26f9 | |
Ozzie Isaacs | c45188beb2 | |
Ozzie Isaacs | 0736c53d7b | |
Ozzie Isaacs | f0f8011d24 | |
Ozzie Isaacs | 65f3ecb924 | |
Ozzie Isaacs | 87b3999ec8 | |
Ozzie Isaacs | e32312b54a | |
Ozzie Isaacs | d7ea569e5d | |
Ozzie Isaacs | 96958e7266 | |
Ozzie Isaacs | 2c339ed10c | |
Ozzie Isaacs | dc2c30f508 | |
Ozzie Isaacs | 0c43d80163 | |
Ozzie Isaacs | df71a86f94 | |
Ozzie Isaacs | 7ed56b4397 | |
Ozzie Isaacs | 5ceb2b6d83 | |
Ozzie Isaacs | 8abea1ddd0 | |
Ozzie Isaacs | 11816d3405 | |
Ozzie Isaacs | 198bff928f | |
lawsssscat | cac200ba61 | |
David K | 8cc36ab081 | |
databoy2k | b3d1558df8 | |
byword77 | a045b6f467 | |
Ozzie Isaacs | 7a961c9011 | |
Ozzie Isaacs | 444ac181f8 | |
Ozzie Isaacs | 4bbcec21e4 | |
Ozzie Isaacs | 5509d4598b | |
Ozzie Isaacs | fab35e69ec | |
Ozzie Isaacs | 4f0f5b1495 | |
Ozzie Isaacs | cfa309f0d1 | |
Ozzie Isaacs | 885d914f18 | |
Ozzie Isaacs | b580f418f7 | |
Ozzie Isaacs | 6a14e2cf68 | |
Ozzie Isaacs | 8535bb5821 | |
Ozzie Isaacs | b2a26a421c | |
Ozzie Isaacs | 7aea7fc0bb | |
Ozzie Isaacs | 52172044e6 | |
Ozzie Isaacs | 9b99427c84 | |
Ozzie Isaacs | 0499e578cd | |
Ozzie Isaacs | b3a85ffcbb | |
PhracturedBlue | 074e611705 | |
Ozzie Isaacs | a1899bf582 | |
Ozzie Isaacs | f7ff3e7cba | |
Ozzie Isaacs | 3a08b91ffa | |
Ozzie Isaacs | d253804a50 | |
Ozzie Isaacs | ba0e5399d6 | |
Ozzie Isaacs | 7818c4a7b0 | |
Ozzie Isaacs | 3f6a12898b | |
Ozzie Isaacs | caf69669cb | |
Ozzie Isaacs | de59181be7 | |
Ozzie Isaacs | 966c9236b9 | |
Ozzie Isaacs | 34c6010ad0 | |
Ozzie Isaacs | 60e904967b | |
Ozzie Isaacs | 3efcbcc679 | |
Ozzie Isaacs | 7bb4bc934c | |
Ozzie Isaacs | dcb8a0f77b | |
Ozzie Isaacs | 2f12b2e315 | |
Ozzie Isaacs | 279f0569e4 | |
Ozzie Isaacs | 6723369d65 | |
Ozzie Isaacs | 4b93ac034f | |
Ozzie Isaacs | fda62dde1d | |
Ozzie Isaacs | df74fdb4d1 | |
Ozzie Isaacs | cce538d5a7 | |
Ozzie Isaacs | e8b0051b31 | |
Ozzie Isaacs | fe55958ecc | |
Ozzie Isaacs | 7b321d63c1 | |
Ozzie Isaacs | 986eaf9f02 | |
Ozzie Isaacs | caf8ed77d7 | |
Ghighi Eftimie | ee5cfa1f36 | |
Horus68 | 5eef476135 | |
Horus68 | b5e4a88357 | |
Horus68 | 256f4bb428 | |
Horus68 | a4d45512ee | |
Horus68 | 074687c330 | |
archont | 2f7b175dda | |
Ozzie Isaacs | a256bd5260 | |
Ozzie Isaacs | fdd1410b06 | |
Ozzie Isaacs | 3f5583017f | |
Ozzie Isaacs | 63b7d70f33 | |
Ozzie Isaacs | 500758050c | |
Ozzie Isaacs | 4b4c0daab0 | |
Ozzie Isaacs | 709a4e51ba | |
Ozzie Isaacs | eff0750d77 | |
Ozzie Isaacs | e63a04093c | |
Ozzie Isaacs | 07d97d18d0 | |
Ozzie Isaacs | d8f30983d5 | |
Ozzie Isaacs | 062efc4e78 | |
boosh | 4e6c9c2703 | |
Ozzie Isaacs | 4dc5885723 | |
quarz12 | 39638d3c9c | |
Ozzie Isaacs | 3ef34c8f15 | |
Ozzie Isaacs | 932abbf090 | |
Ozzie Isaacs | 860443079d | |
Ozzie Isaacs | bd4b7ffaba | |
Ozzie Isaacs | 33e35eeb52 | |
Ozzie Isaacs | ed09814460 | |
Ozzie Isaacs | e52eb74121 | |
Daniel | dc7fbce4f7 | |
Ozzie Isaacs | dad0fd5a1c | |
Ozzie Isaacs | 8a87c152b4 | |
Ozzie Isaacs | 16baa306c5 | |
Daniel | 2eb334fb3d | |
Ozzie Isaacs | cb7356a04d | |
Ozzie Isaacs | 63a561bf9b | |
Ozzie Isaacs | cc733454b2 | |
whilenot | 940544577a | |
xlivevil | 9e0fc320cb | |
xlivevil | bf3ca20fb2 | |
Ozzie Isaacs | c7e1736ade | |
Ozzie Isaacs | bc6a50550e | |
Ozzie Isaacs | fe4dc1bb8f | |
Ozzie Isaacs | f2369609e8 | |
Ozzie Isaacs | de4d6ec7df | |
mapi68 | 7754f4aa5d | |
Ozzie Isaacs | 524751ea51 | |
Ozzie Isaacs | 8111d0dd51 | |
Ozzie Isaacs | fad5929253 | |
Ozzie Isaacs | 9f28144779 | |
Ozzie Isaacs | 42fd6973a0 | |
driz | b2e20ff50c | |
Ozzie Isaacs | 6075b3dd1d | |
driz | 37871ea8cb | |
Ozzie Isaacs | e2785c3985 | |
Ozzie Isaacs | dba83a2900 | |
Ozzie Isaacs | 33c19b20f4 | |
Ozzie Isaacs | d2f39d3dce | |
Ozzie Isaacs | 1c8bc78b48 | |
Ozzie Isaacs | 6c6841f8b0 | |
Ozzie Isaacs | 592216588c | |
Wladimir Kirianov | f4db0f04d2 | |
Wladimir Kirianov | b16e3a6e2c | |
Ozzie Isaacs | 13c0d30a8f | |
Ozzie Isaacs | b9c942befc | |
Ozzie Isaacs | a68a0dd037 | |
Thomas de Ruiter | a952c36ab7 | |
Thomas de Ruiter | 5f0c7737fe | |
Ozzie Isaacs | 38484624e9 | |
Ozzie Isaacs | 1451a67912 | |
Ozzie Isaacs | a72f0a160b | |
Ozzie Isaacs | 253386b0a5 | |
Ozzie Isaacs | 6c8ffb3e7e | |
Ozzie Isaacs | 7d26e6fc85 | |
Ozzie Isaacs | 085a6b88a3 | |
Ozzie Isaacs | bde36e3cd4 | |
Ozzie Isaacs | 9646b6e2dd | |
Ozzie Isaacs | d35e781d41 | |
Ozzie Isaacs | 321db4d712 | |
Ozzie Isaacs | 2b9f920454 | |
Ozzie Isaacs | 45acd3febe | |
Ozzie Isaacs | cbd7ca2f3e | |
Ozzie Isaacs | ba7fee3918 | |
Ozzie Isaacs | 1210ccb43f | |
Jerry Vonau | 04f1f6493b | |
Ozzie Isaacs | 46d2d217ee | |
Ozzie Isaacs | e3fffa8a8f | |
Ozzie Isaacs | dfb49bfca9 | |
Ozzie Isaacs | 224777f5e3 | |
Ozzie Isaacs | 7ade4615a4 | |
Ozzie Isaacs | cbd679eb24 | |
Ozzie Isaacs | b277ed3359 | |
Ozzie Isaacs | fa95b07a95 | |
Ozzie Isaacs | 87bc8c6d96 | |
Ozzie Isaacs | db2bc6a2c2 | |
Ozzie Isaacs | cf850c6ed5 | |
Ozzie Isaacs | a6b54e398b | |
Ozzie Isaacs | 28eeb9eec3 | |
Ozzie Isaacs | 7d76f2ae33 | |
Ozzie Isaacs | 49e4f540c9 | |
Ozzie Isaacs | 64e9b13311 | |
Ozzie Isaacs | 3cf778b591 | |
Ozzie Isaacs | 942bcff5c4 | |
Ozzie Isaacs | 5c5db34a52 | |
Ozzie Isaacs | ae850172a3 | |
Ozzie Isaacs | 7ff4747f63 | |
Ozzie Isaacs | 76b0411c33 | |
Ozzie Isaacs | a414db0243 | |
Ozzie Isaacs | 162ac73bee | |
Ozzie Isaacs | fc31132f4e | |
Ozzie Isaacs | 856dce8add | |
Ozzie Isaacs | 3debe4aa4b | |
Ozzie Isaacs | b28a2cc58c | |
Ozzie Isaacs | 5fd0e4c046 | |
Ozzie Isaacs | 595f01e7a3 | |
Ozzie Isaacs | c79aa75f00 | |
Ozzie Isaacs | 0177a8bcca | |
Ozzie Isaacs | 38c601bb10 | |
Ozzie Isaacs | e7a6fe0bec | |
Ozzie Isaacs | 6119eb3681 | |
Ozzie Isaacs | 6b2ca9537d | |
Ozzie Isaacs | 3cb9a9b04a | |
Ozzie Isaacs | 3d8256b6a6 | |
Ozzie Isaacs | 660d1fb1ff | |
Ozzie Isaacs | fa3fe47059 | |
Ozzie Isaacs | 89bc72958e | |
Ozzie Isaacs | 73ea18b8ce | |
Ozzie Isaacs | 7ca07f06ce | |
Ozzie Isaacs | 8ee34bf428 | |
Ozzie Isaacs | 66d5b5a697 | |
Ozzie Isaacs | ce48e06c45 | |
Ozzie Isaacs | f4ecfe4aca | |
Ozzie Isaacs | dda20eb912 | |
GarcaMan | c4326c9495 | |
Ozzie Isaacs | 63a3edd429 | |
Ozzie Isaacs | 3b45234beb | |
Ozzie Isaacs | 8d0a699078 | |
Ozzie Isaacs | 5b5146a793 | |
Ozzie Isaacs | 7a4e6fbdfb | |
Ozzie Isaacs | 14d14637cd | |
Ozzie Isaacs | fb42f6bfff | |
Ozzie Isaacs | 4b7a0f3662 | |
Ozzie Isaacs | 275675b48a | |
Ozzie Isaacs | 907606295d | |
Ozzie Isaacs | 794c6ba254 | |
Ozzie Isaacs | ac13f6042a | |
Ozzie Isaacs | f8fbc807f1 | |
Ozzie Isaacs | 98da7dd5b0 | |
Ozzie Isaacs | 1c3b69c710 | |
mapi68 | 1dd638a786 | |
_Fervor_ | 6da7d05c6c | |
_Fervor_ | 3f72c3fffe | |
Ozzie Isaacs | cf9a7d538f | |
Ozzie Isaacs | ea9e8d4384 | |
Ozzie Isaacs | b9769a0975 | |
Ozzie Isaacs | 3a262661b5 | |
Ozzie Isaacs | d2056ceb51 | |
Ozzie Isaacs | e71a3452e1 | |
Ozzie Isaacs | 189da65fac | |
Ozzie Isaacs | 0e6b7f96d3 | |
Ozzie Isaacs | 1babb566fb | |
Ozzie Isaacs | c4e4acfc26 | |
Ozzie Isaacs | 6afb429185 | |
Ozzie Isaacs | f241b260d7 | |
Ozzie Isaacs | 260a694834 | |
Ozzie Isaacs | 508e2b4d0a | |
Ozzie Isaacs | 9701a97a57 | |
Ozzie Isaacs | 4913f06e0d | |
Petipopotam | d545ea9e6f | |
Petipopotam | 1ad8dc102a | |
Ozzie Isaacs | 36cb454d1c | |
Ozzie Isaacs | 1899cda8d1 | |
Ozzie Isaacs | 8dd4d0be1b | |
Ozzie Isaacs | d48d6880af | |
Ozzie Isaacs | 94a6931d48 | |
Ozzie Isaacs | c21a870b8e | |
Ozzie Isaacs | 791bc9621a | |
Ozzie Isaacs | 2d6fe483ba | |
Ozzie Isaacs | ad43f07dab | |
Ozzie Isaacs | 77637d81dd | |
Ozzie Isaacs | a2bf6dfb7b | |
Ozzie Isaacs | 1cd05d614c | |
Ozzie Isaacs | d75f681247 | |
Ozzie Isaacs | 2be2920833 | |
Ozzie Isaacs | d6184619f5 | |
Ozzie Isaacs | 43ee85fbb5 | |
Ozzie Isaacs | 8022b1bb36 | |
Ozzie Isaacs | 9e75c65af8 | |
Ozzie Isaacs | 7881950e66 | |
Ozzie Isaacs | 031658ae94 | |
Arief Hidayat | 48c2c7b543 | |
Petipopotam | beb619c2c2 | |
Petipopotam | ed22209e6c | |
blitzmann | 364c48edd8 | |
Ozzie Isaacs | e178efb58c | |
Vegard Fladby | 4105c64320 | |
Josh O'Brien | b3335f6733 | |
Benedikt McMullin | fba95956de | |
Ozzie Isaacs | ce0b3d8d10 | |
Ozzie Isaacs | 9545aa2a0b | |
jvoisin | 4629eec774 | |
Jeroen Kroese | 4977381b1c | |
Ozzie Isaacs | 6c1631acba | |
Ozzie Isaacs | 1489228649 | |
Ozzie Isaacs | 74efa52f26 | |
Ozzie Isaacs | 1ca1281346 | |
jvoisin | 631496775e | |
jvoisin | c5e539bbcd | |
jvoisin | 02ec853e3b | |
Ozzie Isaacs | d0411fd9c7 | |
Ozzie Isaacs | 567cb2e097 | |
Ozzie Isaacs | a635e136be | |
Ozzie Isaacs | 5ffb3e917f | |
Ozzie Isaacs | 5dc3385ae5 | |
Ozzie Isaacs | 66e0a81d23 | |
Ozzie Isaacs | 1f6eb2def6 | |
Ozzie Isaacs | 7d3af5bbd0 | |
Ozzie Isaacs | 043a612d1a | |
Ozzie Isaacs | 928e24fd1a | |
Ozzie Isaacs | 3361c41c6d | |
Ozzie Isaacs | 85a6616606 | |
Ozzie Isaacs | c15b603fef | |
Ozzie Isaacs | b12e47d0e5 | |
Ozzie Isaacs | 389263f5e7 | |
Ozzie Isaacs | 307b4526f6 | |
jvoisin | 7d023ce741 | |
Julien Voisin | 2ddbaa2150 | |
jvoisin | 29fef4a314 | |
JonathanHerrewijnen | 9450084d6e | |
Vijay Pillai | b52c7aac53 | |
Feige-cn | e8c461b14f | |
Ghighi Eftimie | 9409b9db9c | |
Ghighi Eftimie | a992aafc13 | |
Ghighi Eftimie | b663f1ce83 | |
Olivier | b45d69ef2d | |
Olivier | a80735d7d3 | |
Olivier | adfbd447ed | |
xlivevil | 73567db4fb | |
Ozzieisaacs | 3d59a78c9f | |
Ozzieisaacs | 8ba23ac3ee | |
Ghighi Eftimie | 397cd987cb | |
Ozzie Isaacs | 7eef44f73c | |
Ozzie Isaacs | e22ecda137 | |
ElQuimm | a003cd9758 | |
Ozzie Isaacs | 44f6655dd2 | |
Ozzie Isaacs | bd52f08a30 | |
Ozzie Isaacs | edc9703716 | |
Ozzie Isaacs | 56d697122c | |
Ozzie Isaacs | d39a43e838 | |
ElQuimm | 9df3a2558d | |
xlivevil | 7339c804a3 | |
xlivevil | 4d61c5535e | |
xlivevil | 09e1ec3d08 | |
Ozzie Isaacs | 8421a017f4 | |
Ozzie Isaacs | 27eb514ca4 | |
Ozzie Isaacs | b4d9e400d9 | |
Ozzie Isaacs | 67bc23ee0c | |
Ozzie Isaacs | b898b37e29 | |
Ozzie Isaacs | 10dcf39d50 | |
Ozzie Isaacs | e676e1685b | |
Ozzie Isaacs | 59a5ccd05c | |
Ozzie Isaacs | 04908e22fe | |
Ozzie Isaacs | 0f67e57be4 | |
Ozzie Isaacs | 071d19b8b3 | |
halink0803 | 1ffa190938 | |
Ozzie Isaacs | c10708ed07 | |
Ozzie Isaacs | b4851e1d70 | |
Ozzie Isaacs | 26be5ee237 | |
Ozzieisaacs | 241aa77d41 | |
Ozzieisaacs | ca0ee5d391 | |
Ozzieisaacs | 110d283a50 | |
Giulio De Pasquale | f6a9030c33 | |
Giulio De Pasquale | 452093db47 | |
Ozzieisaacs | 9fa56a2323 | |
Ozzieisaacs | 3a133901e4 | |
Ozzieisaacs | 7750ebde0f | |
Ozzieisaacs | 2472e03a69 | |
Ozzieisaacs | 6598c4d259 | |
Ozzie Isaacs | a9b20ca136 | |
Ozzie Isaacs | bf0375d51d | |
Ozzie Isaacs | 89d226e36b | |
Ozzie Isaacs | ec8844c7d4 | |
Ozzie Isaacs | e5c8a7ce50 | |
Ozzie Isaacs | dc3cafd23d | |
Ozzie Isaacs | 9de474e665 | |
Martin Brodbeck | cd143b7ef4 | |
Martin Brodbeck | 8a5112502d | |
Ozzie Isaacs | b5d5660d04 | |
Ozzie Isaacs | fc9c641e55 | |
Ozzie Isaacs | 68e21e1098 | |
Ozzie Isaacs | 828be29a80 | |
viljasenville | 46e5305f23 | |
Ozzie Isaacs | a3f7dc2a5a | |
Thore Schillmann | 9bcbe523d7 | |
Ozzie Isaacs | ae3e3559b8 | |
Ozzie Isaacs | a72f16fd3a | |
Ozzie Isaacs | c2545315e1 | |
Ozzie Isaacs | 61a0c72f8e | |
Ozzie Isaacs | 1e44cb3b6c | |
Ozzie Isaacs | 462aa47ed6 | |
Thore Schillmann | e176d63ca6 | |
Thore Schillmann | 80b0e88650 | |
Thore Schillmann | 0b4731913e | |
Thore Schillmann | fc7ce8da2d | |
Thore Schillmann | c89bc12c9b | |
Thore Schillmann | 4913673e8f | |
Thore Schillmann | fc004f4f0c | |
Ozzie Isaacs | 7344ef353c | |
Ozzie Isaacs | 3bde8a5d95 | |
Thore Schillmann | c5c3874243 | |
Kian-Meng Ang | c4104ddaf4 | |
Thore Schillmann | 0d34f41a48 | |
xlivevil | b47c1d2431 | |
Thore Schillmann | a77aef83c6 | |
Thore Schillmann | e39c6130c3 | |
subdiox | 12071f3e64 | |
subdiox | 92b6dbf26f | |
subdiox | 98b554a3a0 | |
Thore Schillmann | 03359599ed | |
Thore Schillmann | 3c4330ba51 | |
Thore Schillmann | 8c781ad4a4 | |
Thore Schillmann | 5e9ec706c5 | |
Ozzie Isaacs | 07c67b09db | |
Ozzie Isaacs | b1c70d5b4a | |
Ozzieisaacs | c5fc30a1be | |
Ozzie Isaacs | 29fd4ae4a2 | |
Ozzieisaacs | 4ef8c35fb7 | |
Ozzieisaacs | 04326af2da | |
Ozzieisaacs | d6a31e5db8 | |
Ozzie Isaacs | 73d48e4ac1 | |
Ozzie Isaacs | b206b7a5d8 | |
Ozzie Isaacs | 02e1be09df | |
Ozzie Isaacs | f85b587d0a | |
Ozzie Isaacs | 89d522e389 | |
Thore Schillmann | 2816a75c3e | |
Ozzie Isaacs | 78b45f716a | |
Ozzie Isaacs | 91df265d40 | |
Illia Maier | 7e7f54cfa7 | |
Illia Maier | 80bc14c0cf | |
Illia Maier | 7685818b16 | |
Ozzie Isaacs | f44d42f834 | |
GarcaMan | bf12542df5 | |
Ozzie Isaacs | 25f2af3f03 | |
Ozzie Isaacs | 909797dc49 | |
Ozzie Isaacs | 07d4e60655 | |
Ozzie Isaacs | d90cfce97f | |
Ozzie Isaacs | 543fe12862 | |
Ozzie Isaacs | 4f66d6b3b1 | |
Illia Maier | c36138b144 | |
Thore Schillmann | 7f6e88ce5e | |
Ozzie Isaacs | aa442d8c51 | |
Aisha Tammy | a3cd217cea | |
Thore Schillmann | 0f3f918153 | |
Ozzieisaacs | 790080f2a0 | |
Ozzieisaacs | 4ea80e9810 | |
Ozzieisaacs | 034d57134d | |
Ozzieisaacs | f6101fd462 | |
Ozzieisaacs | 1fa7de397a | |
leexia | 3b7cd38d5e | |
ImanSharaf | 78fb7a9756 | |
Evan Peterson | 7ae9f89bbf | |
Ozzie Isaacs | 8a6a8dcbe8 | |
Ozzie Isaacs | fbac3e38ac | |
Ozzie Isaacs | c1f1952b04 | |
Ozzie Isaacs | 5e4cf839bc | |
Ozzie Isaacs | 056ecf0d90 | |
Chris Thurber | 0c2f67bc7b | |
Ozzie Isaacs | 1bcb714fac | |
Ozzie Isaacs | 8cb3fe32a5 | |
Ozzieisaacs | ae5053e072 | |
Ozzie Isaacs | cde51e743a | |
Ozzie Isaacs | 3233b357f8 | |
Ozzie Isaacs | 49655e9f2d | |
Ozzie Isaacs | 7b45324149 | |
Ozzie Isaacs | 858d099509 | |
Ozzie Isaacs | 12f3a13d1d | |
Ozzieisaacs | 813d303ea7 | |
Ozzieisaacs | c1ca18f7dc | |
Ozzie Isaacs | e8e4d87d39 | |
Ozzie Isaacs | 5d5a94c9e5 | |
Ozzie Isaacs | 258b4a6767 | |
Ozzie Isaacs | ef4b5e2881 | |
Ozzie Isaacs | a968ddaef2 | |
Ozzie Isaacs | aaa749933d | |
Ozzie Isaacs | 2e007a160e | |
Ozzie Isaacs | e7464f2694 | |
Ozzie Isaacs | 47414ada69 | |
Ozzie Isaacs | 9410b47144 | |
Ozzie Isaacs | db03fb3edd | |
Ozzie Isaacs | 2b03cae017 | |
Ozzie Isaacs | 21ebdc0130 | |
Ozzie Isaacs | 6e8445fed5 | |
Ozzie Isaacs | d83c731030 | |
Ozzie Isaacs | ae9a970782 | |
Ozzie Isaacs | 1e723dff3a | |
Ozzie Isaacs | 8421a17a82 | |
Ozzie Isaacs | bc96ff9a39 | |
Ozzie Isaacs | bf049d8240 | |
Ozzie Isaacs | 6e783cd7ee | |
Ozzie Isaacs | 069dc2766f | |
Ozzie Isaacs | 2f5b9e41ac | |
Ozzie Isaacs | 9a8093db31 | |
Ozzie Isaacs | 5c342d4e7c | |
Ozzie Isaacs | 3c98cd1b9a | |
Ozzie Isaacs | 2303fc0814 | |
Ozzie Isaacs | a8680a45ca | |
Ozzie Isaacs | d75d95f401 | |
Ozzie Isaacs | fbb6de7195 | |
xlivevil | 3cbbf6fa86 | |
Ozzieisaacs | c92d65aad3 | |
Ozzieisaacs | c61e5d6ac0 | |
Ozzieisaacs | 130af069aa | |
Ozzieisaacs | 09b381101b | |
Ozzieisaacs | 35bb899879 | |
Ozzie Isaacs | 6184e2b7bc | |
Ozzie Isaacs | 2f3e5eadeb | |
Ozzie Isaacs | fe5d684d2c | |
Ozzie Isaacs | df53a5d8c9 | |
Ozzie Isaacs | 83b99fcb1a | |
Ozzie Isaacs | 028e6855a7 | |
Wulf Rajek | adf6728f14 | |
Ozzie Isaacs | 652d0fd86f | |
Ozzie Isaacs | 1136383b9a | |
Ozzie Isaacs | d770e5392e | |
Ozzie Isaacs | 3d2e7e847e | |
Ozzie Isaacs | a63af5882e | |
Wulf Rajek | 2d0af0ab49 | |
Ozzie Isaacs | d912c1c476 | |
Ozzie Isaacs | 42b0226f1a | |
Ozzie Isaacs | 8adae6ed0c | |
Ozzie Isaacs | fee76741a0 | |
Ozzie Isaacs | f36d3a76be | |
Ozzie Isaacs | afaf496fbe | |
Ozzie Isaacs | c06754975e | |
Ozzie Isaacs | 834edadc28 | |
Ozzie Isaacs | 7861f8a89a | |
Ozzie Isaacs | 73d359af05 | |
Ozzie Isaacs | 036cd7be48 | |
Ozzie Isaacs | baffe1f537 | |
Ozzie Isaacs | 2f949ce1dd | |
Ozzie Isaacs | 32a3c45ee0 | |
Ozzie Isaacs | 14a6e7c42c | |
Ozzie Isaacs | 2a5e9a97bb | |
Nicolas Ferrari | 504e58abdb | |
Ozzie Isaacs | a6a8f7eb43 | |
Ozzie Isaacs | 5070cc4c23 | |
Ozzie Isaacs | 0d49b56883 | |
Ozzie Isaacs | f5b79930ad | |
Ozzie Isaacs | c0d0660986 | |
Ozzie Isaacs | ec53570118 | |
Ozzie Isaacs | 8cb5989c97 | |
Ozzie Isaacs | 39459603d4 | |
Ozzie Isaacs | f34fc002da | |
Ozzie Isaacs | 06e8845641 | |
Ozzie Isaacs | 034ab73ccc | |
Ozzie Isaacs | 57cd8160a0 | |
Ozzie Isaacs | 399ddc5d6f | |
Ozzie Isaacs | d9a83e0638 | |
Ozzie Isaacs | 8f3bb2e338 | |
Ozzie Isaacs | 4545f4a20d | |
Ozzie Isaacs | 296f76b5fb | |
Ozzie Isaacs | 3b5e5f9b90 | |
Ozzie Isaacs | 8e2536c53b | |
Ozzie Isaacs | 4379669cf8 | |
Ozzie Isaacs | 2b31b6a306 | |
Ozzie Isaacs | 3a0dacc6a6 | |
Ozzie Isaacs | 547ea93dc9 | |
Ozzie Isaacs | d80297e1a8 | |
Ozzie Isaacs | 49692b4a45 | |
xlivevil | b54a170a00 | |
Ozzie Isaacs | 34478079d8 | |
Ozzie Isaacs | 753319c8b6 | |
Ozzie Isaacs | c53817859a | |
Ozzie Isaacs | 153a443fca | |
Bharat KNV | 9efd644360 | |
Ozzie Isaacs | 598618e428 | |
Ozzie Isaacs | 965352c8d9 | |
xlivevil | 97cf20764b | |
xlivevil | 695ce83681 | |
xlivevil | 86b779f39b | |
Ozzie Isaacs | 8007e450b3 | |
Ozzie Isaacs | 0aac961cde | |
Ozzie Isaacs | ef7c6731bc | |
Ozzie Isaacs | e9b674f46e | |
Ozzie Isaacs | 8f665ebd58 | |
Ozzie Isaacs | 7bb3cac7fb | |
Ozzie Isaacs | 9c5970bbfc | |
Ozzie Isaacs | ba23ada1fe | |
Ozzie Isaacs | 86b621e768 | |
Ozzie Isaacs | 5f70406b30 | |
Ozzie Isaacs | 6b026513cb | |
Ozzie Isaacs | 0436f0f9b2 | |
Ozzie Isaacs | 295888c654 | |
Ozzie Isaacs | 7317084a4e | |
Ozzie Isaacs | 0981337cdf | |
Ozzie Isaacs | 461dd05e2f | |
Ozzie Isaacs | 764389ea2a | |
Ozzie Isaacs | 4a0dde0371 | |
Ozzie Isaacs | e22b3da601 | |
Ozzie Isaacs | 7c623941de | |
Ozzie Isaacs | 3bb41aca6d | |
Ozzie Isaacs | 0c3c0c0664 | |
Ozzie Isaacs | 411c13977f | |
Ozzie Isaacs | 41f89af959 | |
Ozzie Isaacs | 895f68033f | |
Ozzie Isaacs | 2d49589e4b | |
Ozzie Isaacs | 89877835b3 | |
Ozzie Isaacs | 0ce41aef56 | |
Ozzie Isaacs | 5b3015619d | |
Ozzie Isaacs | 6ca08a7cc1 | |
Ozzie Isaacs | 7254ce6c81 | |
Ozzie Isaacs | cfa6b405da | |
Ozzie Isaacs | 26a8ac1425 | |
Ozzie Isaacs | 1ce45f3253 | |
Ozzie Isaacs | a03c95329c | |
Ozzie Isaacs | e0bf829def | |
Ozzie Isaacs | 0bc15636f2 | |
Ozzie Isaacs | 61bfeae936 | |
Ozzie Isaacs | 95e0255aa1 | |
Ozzie Isaacs | 23e47ba4e6 | |
Ozzie Isaacs | 4dcc44803c | |
Ozzie Isaacs | 111ab121b1 | |
Ozzie Isaacs | 1e04b51148 | |
Ozzie Isaacs | 3ae1b97d72 | |
Ozzie Isaacs | 3123a914a4 | |
Ozzie Isaacs | f6b46bb170 | |
Ozzie Isaacs | bb7f4cf74e | |
Ozzie Isaacs | 39ac37861f | |
mmonkey | 62ff6f7e8a | |
mmonkey | 032fced9c7 | |
Ozzie Isaacs | ae9c5da777 | |
Ozzie Isaacs | 42f8209a4a | |
Ozzie Isaacs | e757be6953 | |
Ozzie Isaacs | 4f3c396450 | |
mmonkey | 50bb74d748 | |
mmonkey | 3416323767 | |
mmonkey | 18ce310b30 | |
Ozzie Isaacs | 6339d25af0 | |
quarz12 | 477b202c38 | |
quarz12 | 326d6e7b9d | |
Ozzie Isaacs | baf32f9045 | |
Ozzie Isaacs | 3c4cd22d9e | |
Ozzie Isaacs | d9d6fb33ba | |
Ozzie Isaacs | 17b4643b7c | |
Daniel | 239f389c5c | |
Daniel | 8362c82d54 | |
Daniel | 62e7aca0fb | |
Ozzie Isaacs | 6a37c7ca9d | |
Ozzie Isaacs | e0e0422010 | |
Ozzie Isaacs | 128db26301 | |
Ozzie Isaacs | 01ab75a158 | |
Ozzie Isaacs | d9c10b830a | |
Ozzie Isaacs | 35f6f4c727 | |
Ozzie Isaacs | 3b216bfa07 | |
Ozzieisaacs | e8e2f789e5 | |
Ozzie Isaacs | d8f5bdea6d | |
Ozzie Isaacs | 127bf98aac | |
Ozzie Isaacs | ede273a8f9 | |
Ozzie Isaacs | bc7a305285 | |
Ozzie Isaacs | d759df0df6 | |
Ozzie Isaacs | bbef41290f | |
Ozzie Isaacs | 81b85445d8 | |
Ozzie Isaacs | 35209ede67 | |
Ozzie Isaacs | 0c0313f375 | |
Ozzie Isaacs | 6bf0753978 | |
Ozzie Isaacs | a02f621f08 | |
Ozzie Isaacs | de1bc3f9af | |
Ozzie Isaacs | 01090169a7 | |
Ozzie Isaacs | b564a97cdf | |
Ozzie Isaacs | a118fffc99 | |
Ozzie Isaacs | 5b59aab81a | |
Ozzie Isaacs | 7d3d0c661e | |
Ozzie Isaacs | f6f20ebc77 | |
Thomas | 58cb54c76f | |
Thomas | 3fc326fa48 | |
collerek | 20b5a9a2c0 | |
Evan Peterson | 4eaa9413f9 | |
Ozzie Isaacs | a50aff67a2 | |
Ozzie Isaacs | 3c1f5fd37f | |
byword77 | 8ae066c387 | |
Ozzie Isaacs | 0feb62c142 | |
Ozzie Isaacs | 96b1e8960b | |
Ozzie Isaacs | df67079573 | |
Ozzie Isaacs | e3dbf7a88d | |
byword77 | 8dd0585919 | |
Ozzie Isaacs | c830a5936e | |
Ozzie Isaacs | 405f3c181f | |
Ozzie Isaacs | 6d839d5cc7 | |
Ozzieisaacs | bbadfa2251 | |
Ozzieisaacs | 7b8b2f93a0 | |
Ozzieisaacs | c1030dfd13 | |
Ozzieisaacs | a90177afa0 | |
Ozzieisaacs | c095ee3c14 | |
Ozzieisaacs | ae1f515446 | |
Ozzieisaacs | 0548fbb685 | |
Ozzieisaacs | f22e4d996c | |
Ozzieisaacs | 3e0d8763c3 | |
Ozzieisaacs | 47f5e2ffb4 | |
Ozzieisaacs | f39dc100b4 | |
Ozzieisaacs | 573c9f9fb4 | |
Ozzieisaacs | 785726deee | |
Ozzieisaacs | 7eb875f388 | |
cbartondock | 4edd1914b4 | |
cbartondock | 222929e741 | |
cbartondock | 70b67077cc | |
Ozzie Isaacs | ec73558b03 | |
Ozzieisaacs | bdedec90dd | |
Ozzieisaacs | d45085215f | |
Ozzie Isaacs | 1a6579312f | |
Ozzie Isaacs | b85627da5c | |
Ozzie Isaacs | 2e815147fb | |
Ozzie Isaacs | 592288cb22 | |
Ozzie Isaacs | 2e3a3ee460 | |
Ozzie Isaacs | b7927a0df1 | |
Ozzie Isaacs | 021298374e | |
Ozzie Isaacs | 92f65882b2 | |
Bharat KNV | 0693cb1ddb | |
collerek | bea14d1784 | |
Ozzie Isaacs | f0399d04b7 | |
Ozzie Isaacs | 9b57fa25de | |
Ozzie Isaacs | afbe77de3d | |
xlivevil | d26d357151 | |
Laurin Neff | 4db9691cfc | |
Ozzie Isaacs | 45c433caab | |
collerek | 51bf35c2e4 | |
collerek | d64589914f | |
collerek | 362fdc5716 | |
collerek | d55626d445 | |
Ozzie Isaacs | 25422b3411 | |
Ozzie Isaacs | 42bf40d7bb | |
collerek | 920acaca99 | |
xlivevil | a184f4e71a | |
cbartondock | 4569188008 | |
Ozzie Isaacs | 9d9acb058d | |
Ozzie Isaacs | 7d67168a4a | |
cbartondock | 09751d8b87 | |
Ozzie Isaacs | 6e15280fac | |
Ozzie Isaacs | f78d2245aa | |
Ozzie Isaacs | fd5ab0ef53 | |
Ozzie Isaacs | d217676350 | |
Ozzie Isaacs | cd5711e651 | |
Ozzie Isaacs | 3bf173d958 | |
cbartondock | 98d630d453 | |
Ozzie Isaacs | eb2e816bfd | |
Ozzie Isaacs | bd01e840ca | |
Ozzie Isaacs | 91a21ababe | |
Ozzie Isaacs | f4096b136e | |
GarcaMan | e2eab808c0 | |
GarcaMan | 3ac08a8c0d | |
cbartondock | 7598dfe952 | |
Ozzie Isaacs | 5ed3b1cf53 | |
Ozzie Isaacs | 7640ac1b3b | |
Jonathan Fenske | 66874f8163 | |
Jonathan Fenske | 3f91313303 | |
xlivevil | 8438b2a07b | |
xlivevil | 67e3721530 | |
cbartondock | 2252d661c0 | |
Ozzie Isaacs | 87e526642c | |
Ozzie Isaacs | 7f9da94a18 | |
Ozzie Isaacs | ec7c2db971 | |
Denis Rodríguez | 3f56f0dca7 | |
GarcaMan | 7fc04b353b | |
GarcaMan | a8689ae26b | |
cbartondock | fcd2b68359 | |
Ozzie Isaacs | a1d372630d | |
cbartondock | fc859afb92 | |
Ozzie Isaacs | 2e0d0a2429 | |
Ozzie Isaacs | d084a06e63 | |
Ozzie Isaacs | e880238cb9 | |
cbartondock | f58c5bee1c | |
Ozzie Isaacs | cbb9edac19 | |
Ozzie Isaacs | 1b8bd27b3c | |
Ozzieisaacs | 1e9d88fa98 | |
Ozzieisaacs | 6cb713d62c | |
Ozzieisaacs | 642af2f973 | |
Ozzieisaacs | 6deb527769 | |
Ozzieisaacs | 9273843062 | |
cbartondock | 2989586c6d | |
Ozzie Isaacs | 5ede079401 | |
Ozzie Isaacs | 7ad419dc8c | |
Ozzie Isaacs | bcdc976414 | |
Ozzie Isaacs | 6aad9378b8 | |
Ozzie Isaacs | 6f5390ead5 | |
Ozzie Isaacs | d624b67e93 | |
Ozzie Isaacs | 01cc97c1b2 | |
Ozzieisaacs | 8e5bb02a28 | |
Ozzieisaacs | 4fd4cf4355 | |
Ozzieisaacs | baba205bce | |
cbartondock | 7c016265d2 | |
Ozzie Isaacs | 974549b1af | |
Ozzieisaacs | add502d236 | |
cbartondock | 294c594cbb | |
Ozzieisaacs | 60aa016734 | |
Ozzieisaacs | 27e8fbd248 | |
cbartondock | d856c4d78e | |
Ozzieisaacs | 25b09a532f | |
Ozzieisaacs | c1f4ca36b6 | |
Ozzieisaacs | 58379159fb | |
Ozzieisaacs | 17470b3b56 | |
Ozzieisaacs | 42cc13d1e2 | |
Ozzieisaacs | ecc5cb167e | |
Ozzieisaacs | d72210c6ae | |
Ozzieisaacs | 61deda1076 | |
Ozzieisaacs | 1e0ff0f9c2 | |
cbartondock | 00bf1f5ec9 | |
Ozzieisaacs | 2b6f5b1565 | |
cbartondock | e5abc5f281 | |
Ozzieisaacs | 95371d0d7f | |
Ozzieisaacs | c7df8a1a34 | |
Ozzieisaacs | b414d91964 | |
cbartondock | 82164fc8e7 | |
Ozzieisaacs | a5415e00d5 | |
cbartondock | 434029270b | |
Ozzie Isaacs | b3b85bf692 | |
Ozzie Isaacs | a36c6da3ae | |
Ozzie Isaacs | 9e72c3b40d | |
cbartondock | a0cebfbfb1 | |
Ozzie Isaacs | 64e833f5d6 | |
Ozzie Isaacs | 4da64ceb23 | |
cbartondock | 99ec99b539 | |
Ozzie Isaacs | 3077b854d7 | |
Ozzie Isaacs | a8317d900b | |
KN4CK3R | 668bf3a15e | |
Ozzie Isaacs | 481e52b503 | |
Ozzie Isaacs | bbb65ec804 | |
Ozzie Isaacs | 4b7b646692 | |
Ozzie Isaacs | 65cfb1ccbc | |
Ozzie Isaacs | 9e9d7b3642 | |
Ozzie Isaacs | 1294672809 | |
i7-8700 | 857584a929 | |
cbartondock | 75c68d92ec | |
ElQuimm | b4772d9b66 | |
Ozzie Isaacs | 3cfffa1487 | |
Ozzie Isaacs | 382cd9458f | |
Ozzie Isaacs | be7ac7e163 | |
Ozzie Isaacs | 3a1a32f053 | |
cbartondock | 27f71d910b | |
Ozzie Isaacs | f6a2b8a9ef | |
Bernat | 9f260128cf | |
Ozzie Isaacs | cdd38350fe | |
Ozzie Isaacs | 516e76de4f | |
Ozzie Isaacs | 4c7b5999f7 | |
Ozzie Isaacs | bb20979c71 | |
Ozzie Isaacs | 917909cfdb | |
Ozzie Isaacs | aefed40a2f | |
cbartondock | 88278672d8 | |
Ozzie Isaacs | bd0071354c | |
cbartondock | c6bf62a6eb | |
cbartondock | 95544ef885 | |
Ozzie Isaacs | fe4db16a7e | |
cbartondock | 9b127a9f97 | |
Ozzie Isaacs | 9e4aeac16d | |
cbartondock | 9653308300 | |
Ozzie Isaacs | 7671a1d5c8 | |
Ozzie Isaacs | bd6b5ac873 | |
Ozzie Isaacs | 00dc60da79 | |
Ozzie Isaacs | 708861bcd5 | |
Ozzie Isaacs | 24a2c0a5cf | |
Ozzie Isaacs | 1e1d3a7c81 | |
Ozzie Isaacs | 2ebddcfee3 | |
Ozzie Isaacs | 4517f5b0cb | |
Ozzie Isaacs | 666015e867 | |
Ozzie Isaacs | 9d5e9b28ae | |
Ozzie Isaacs | 6f1e78b9a3 | |
cbartondock | d5aa345da6 | |
Ozzie Isaacs | e060c62742 | |
cbartondock | 5695268d1b | |
Ozzie Isaacs | cea10d3945 | |
Ozzie Isaacs | fe21d15194 | |
Ozzie Isaacs | 369d5059c3 | |
Ozzie Isaacs | 56a9c62421 | |
Ozzie Isaacs | b4262b1317 | |
Ozzie Isaacs | 50d703e2d8 | |
cbartondock | 5aefc893de | |
Ozzie Isaacs | 6e5d9d7657 | |
Ozzie Isaacs | 1be7dfcdca | |
Ozzie Isaacs | e58eb8dac1 | |
Ozzie Isaacs | d02eb842c7 | |
Ozzie Isaacs | f3efef1f60 | |
cbartondock | 659481c83b | |
Ozzie Isaacs | bad4c01474 | |
Ozzie Isaacs | cd53d57516 | |
Ozzie Isaacs | ac54899415 | |
Ozzie Isaacs | abe46e1862 | |
Ozzie Isaacs | 3be47d6e57 | |
Ozzie Isaacs | 25f608d109 | |
Ozzie Isaacs | aca5324914 | |
Ozzie Isaacs | d8f9e2feb2 | |
Ozzie Isaacs | ed26d34961 | |
Ozzie Isaacs | 50919d4721 | |
Ozzie Isaacs | 5edde53fed | |
Ozzie Isaacs | 1c15e10ac0 | |
Ozzie Isaacs | 52be2ad4a2 | |
Ozzie Isaacs | 43fdef5e53 | |
Ozzie Isaacs | b699796236 | |
Ozzie Isaacs | b82d03c12c | |
Ozzie Isaacs | b8eb557761 | |
Ozzie Isaacs | ba40c6693e | |
Ozzie Isaacs | e064a3ec2b | |
GarcaMan | f6561456f7 | |
xlivevil | deb91996a8 | |
mmonkey | cd3791f5f4 | |
Ozzie Isaacs | cd1fe6dde0 | |
Ozzie Isaacs | 3f6a466ca7 | |
cbartondock | 50f4fe6546 | |
mmonkey | 9e7f69e38a | |
mmonkey | 46205a1f83 | |
Ozzie Isaacs | 861277460d | |
Fernando Mesquita | 83babf88fc | |
Ozzieisaacs | 58735caff3 | |
mmonkey | 26071d4e7a | |
cbartondock | 844eb6c379 | |
xlivevil | 1b8410e786 | |
xlivevil | fed9eff7b8 | |
xlivevil | bf5de95fdc | |
Ozzie Isaacs | 2e25e797dd | |
Ozzie Isaacs | c1ac68e2ae | |
Ozzie Isaacs | 623a92ebec | |
mmonkey | 0bd544704d | |
Ozzie Isaacs | 2edcd16119 | |
Ozzie Isaacs | af202bd6d1 | |
Ozzie Isaacs | d5a332a84e | |
Fernando Mesquita | d6884164a5 | |
mmonkey | be28a91315 | |
cbartondock | 707556a36d | |
Ozzie Isaacs | 356a4f588e | |
Ozzie Isaacs | 241b31458d | |
Ozzie Isaacs | efb04ddd8f | |
mmonkey | 524ed07a6c | |
mmonkey | 8bee2b9552 | |
Anatolii Fetisov | 781ca7a3f3 | |
mmonkey | d648785471 | |
mmonkey | 9a08bcd2bc | |
mmonkey | 04a5db5c1d | |
Jeroen Hellingman | 6f6d8df431 | |
cbartondock | 9ebde57fc3 | |
Ozzie Isaacs | 8dc1e9bfa4 | |
Ozzie Isaacs | e3db2796c9 | |
Ozzie Isaacs | 42ef049b63 | |
cbartondock | 8930426f2a | |
Ozzie Isaacs | ceffa3a108 | |
Ozzie Isaacs | c0a06eec46 | |
Ozzie Isaacs | 86ef1d47e8 | |
cbartondock | b835aa660a | |
Ozzie Isaacs | 4d92d7da3a | |
Ozzie Isaacs | d1e6a85803 | |
Ozzie Isaacs | d8bad7394a | |
Ozzie Isaacs | f5062d1354 | |
Ozzie Isaacs | 1d79d9ded2 | |
Ozzieisaacs | 10e212fcde | |
Ozzie Isaacs | 1fa267ce1b | |
cbartondock | abcf9a1808 | |
Ozzie Isaacs | d4cfad6363 | |
Ozzie Isaacs | 3a24561ca2 | |
Ozzie Isaacs | 5c19a8aacc | |
Ozzie Isaacs | 3946ef8f0d | |
cbartondock | 6a2868c68d | |
Ozzie Isaacs | 7ae3255ea9 | |
Ozzie Isaacs | 91e6d94c83 | |
Ozzie Isaacs | e615073893 | |
Ozzie Isaacs | 32e27712f0 | |
cbartondock | ff39fac50f | |
Daniela Mazza | 80d3ba42b4 | |
Ozzie Isaacs | d25cfb7499 | |
cbartondock | 24ad15be11 | |
Ozzie Isaacs | a35c635987 | |
Ozzie Isaacs | aa9fdd2ada | |
Ozzie Isaacs | afa585eb65 | |
cbartondock | 03acb77162 | |
Ozzie Isaacs | 28ca39ca13 | |
cbartondock | b05c17b8bb | |
cbartondock | 3839af65c4 | |
Ozzie Isaacs | 275e073c42 | |
Ozzie Isaacs | c4876010e9 | |
xlivevil | bce69b2dfc | |
xlivevil | fb97e39d9f | |
xlivevil | 913690df78 | |
Ozzie Isaacs | e68a3f18fa | |
Ozzie Isaacs | 1534827ad7 | |
Ozzie Isaacs | b36422bc05 | |
Ozzie Isaacs | 1e2335dea0 | |
Ozzie Isaacs | e052590270 | |
Ozzie Isaacs | 41ca85268f | |
Ozzie Isaacs | 0e9709f304 | |
Ozzie Isaacs | ceec1051d5 | |
cbartondock | 81ccc11c9c | |
Ozzie Isaacs | 702e96ddd6 | |
Ozzie Isaacs | 53603f79bd | |
Ozzie Isaacs | 250cafe814 | |
Ozzie Isaacs | 16633ef1d3 | |
Ozzie Isaacs | f0a5225524 | |
Ozzie Isaacs | a32b36bf81 | |
Ozzie Isaacs | 302679719d | |
Ozzie Isaacs | a63baa1758 | |
cbartondock | 752192a057 | |
Ozzie Isaacs | e245a147d9 | |
Ozzie Isaacs | 0ec2bcd897 | |
xlivevil | 19c75d6790 | |
cbartondock | 0654156dd6 | |
Ziding Zhang | e4b0434733 | |
cbartondock | b32721c293 | |
Ozzie Isaacs | c5e39a7523 | |
cbartondock | 0db837c0ce | |
Ozzie Isaacs | 53dae32897 | |
Ozzie Isaacs | 018f3ca250 | |
Ozzie Isaacs | a1a8a0cf29 | |
Ozzie Isaacs | f9c3e751f6 | |
Ozzie Isaacs | c7b057ec51 | |
Ozzie Isaacs | 85ea762054 | |
Ozzie Isaacs | 56cd62ed90 | |
Ozzie Isaacs | 9a8c342e61 | |
Ozzie Isaacs | 3b81ea37f4 | |
Ozzie Isaacs | 3c8bfc31e4 | |
Ileana Maricel Barrionuevo | 59881367fe | |
Ileana Maricel Barrionuevo | c8ebaee0f7 | |
Ileana Maricel Barrionuevo | d5d0ad50fa | |
cbartondock | 6ef792d65d | |
Ozzie Isaacs | 20fa9f5523 | |
Ozzie Isaacs | 616cc2018a | |
Ozzie Isaacs | e69b1adccd | |
Ozzie Isaacs | 280efad939 | |
Ozzie Isaacs | 15ec6bec95 | |
Ozzie Isaacs | aae81c3d24 | |
Ozzie Isaacs | 259ac94b93 | |
Ozzie Isaacs | a27314464a | |
Ozzie Isaacs | 87f07003f4 | |
Ozzie Isaacs | 1bf065fd04 | |
Ozzie Isaacs | f8de7e75cc | |
cbartondock | 7d586b3745 | |
Ozzie Isaacs | a56e071a19 | |
Ozzie Isaacs | 480aecb16c | |
Ozzie Isaacs | ec7803fa76 | |
cbartondock | 91938fd18a | |
Ozzie Isaacs | aa2d3d2b36 | |
Ozzie Isaacs | d5e9cdc5b7 | |
Ozzie Isaacs | 305e75c0ae | |
Ozzie Isaacs | 0d247fef6a | |
Ozzie Isaacs | 94da61c57e | |
Ozzie Isaacs | 430ccd9ab1 | |
Ozzie Isaacs | 47d94d9bd6 | |
Ozzie Isaacs | a6d1f6039d | |
Ozzie Isaacs | 31234a4b98 | |
Ozzie Isaacs | 476275ea53 | |
Ozzie Isaacs | 792d4a65bc | |
Ozzie Isaacs | 0e2dca5f4d | |
Ozzie Isaacs | 557296b7be | |
JFernando122 | 2236191263 | |
JFernando122 | 1138c86868 | |
Thomas | 08500c66a8 | |
Ozzie Isaacs | 67836006c5 | |
JFernando122 | fa03a9ee25 | |
cbartondock | bf4564e365 | |
Radoslaw Kierznowski | 6a96664381 | |
Ozzie Isaacs | e6e3032f02 | |
Ozzie Isaacs | 6d424f0a30 | |
Ozzie Isaacs | a6f0375db3 | |
Ozzie Isaacs | 1833e8fdb4 | |
Radoslaw Kierznowski | 70151c2e11 | |
JVT038 | ab69962e8a | |
ElQuimm | 075fe994af | |
Ozzie Isaacs | 1aca1b9fdd | |
Ozzie Isaacs | 9fea5a55f4 | |
Ozzie Isaacs | 1a0bf45c34 | |
Ozzie Isaacs | 67874e07b6 | |
Ozzie Isaacs | ced0bcfe33 | |
Ozzie Isaacs | 8c0aa79f78 | |
Ozzie Isaacs | 93e8c5be32 | |
Ozzie Isaacs | d9f86aecd2 | |
Ozzie Isaacs | e9552fedef | |
Ozzie Isaacs | 84e1c6e809 | |
Ozzie Isaacs | aadd6fd7e0 | |
Ozzie Isaacs | 59987ed359 | |
Ozzie Isaacs | a79600c81f | |
Ozzie Isaacs | e8a4c3c6b9 | |
Ozzie Isaacs | 62fce26651 | |
Ozzie Isaacs | 25add6511f | |
Ozzie Isaacs | b2a28cd39a | |
Ozzie Isaacs | 6de3aebf3a | |
Ozzie Isaacs | 1b7e422772 | |
Ozzie Isaacs | b0f48b7f00 | |
Ozzie Isaacs | 94b07d05c1 | |
Ozzie Isaacs | 4392fec7f5 | |
flying-sausages | c1d0ec076b | |
Ozzie Isaacs | a47d6cd937 | |
Ozzie Isaacs | dcdb5e2a9e | |
Ozzie Isaacs | 240325dc18 | |
Ozzie Isaacs | e599d2ec05 | |
Ozzie Isaacs | c62a417cbf | |
Ozzie Isaacs | 98fc1ee0a7 | |
Ozzie Isaacs | 78d0fd811b | |
Ozzie Isaacs | 54d06e580d | |
Ozzie Isaacs | a62fca1e55 | |
Ozzie Isaacs | 7b1d23eba4 | |
Ozzie Isaacs | 2502c279fc | |
Ozzie Isaacs | 878fc8c2fd | |
Ozzie Isaacs | 64902eebcb | |
flying-sausages | e520f6afc4 | |
Ozzie Isaacs | 0cc1ae9324 | |
cbartondock | 1b2124aa35 | |
Ozzie Isaacs | bd4fde9e63 | |
Ozzie Isaacs | c85cfa90a4 | |
Ozzie Isaacs | a0449b50c8 | |
Ozzie Isaacs | 010c0bfc7d | |
Ozzie Isaacs | 394b063b8c | |
Ozzie Isaacs | 251780bed6 | |
Ozzie Isaacs | decc2a9c79 | |
Ozzie Isaacs | b62d84424e | |
Ozzie Isaacs | e15324d2cd | |
Ozzie Isaacs | 20b84a9459 | |
Ozzie Isaacs | 109198a374 | |
Ozzie Isaacs | dafc68f049 | |
Ozzie Isaacs | 3e5c944365 | |
Ozzie Isaacs | d95838309e | |
Ozzie Isaacs | 4745fc0db1 | |
Ozzie Isaacs | b3648187ff | |
cbartondock | b35483edf8 | |
Ozzie Isaacs | f29088e854 | |
Ozzie Isaacs | b009dfe4ee | |
Ozzie Isaacs | eba94f430c | |
Ozzie Isaacs | 06347b7e3c | |
Ozzie Isaacs | 6bc426b21d | |
cbartondock | e0fac8d2c0 | |
Ozzie Isaacs | 4a849aab7f | |
Ozzie Isaacs | ad2d0c84e4 | |
Ozzie Isaacs | 6bf360fbfb | |
Ozzie Isaacs | 380292a8aa | |
jonyxx-alt | f9949da745 | |
Ozzie Isaacs | b0cc52e0aa | |
Ozzie Isaacs | 9ef705650b | |
Ozzie Isaacs | c6dadbe75e | |
cbartondock | 0a65c05083 | |
Robert Schütz | 21026340ad | |
Ozzieisaacs | eb2273247f | |
Ozzieisaacs | 02fc698f1c | |
Ozzieisaacs | 8dc11e89bd | |
cbartondock | 12c3264000 | |
Ozzieisaacs | b34672ed19 | |
Ozzieisaacs | 541c8c4b93 | |
Ozzieisaacs | b97373bf37 | |
Ozzieisaacs | c0b561cb5a | |
Ozzieisaacs | bd3ccfd0a9 | |
Ozzieisaacs | 5470acd3af | |
Ozzie Isaacs | 64696fe973 | |
cbartondock | f2cd93dc54 | |
Ozzie Isaacs | ed2fa4cdd8 | |
cbartondock | c2c4636961 | |
dalin | 03cca1d1c0 | |
dalin | 354b075885 | |
Ozzie Isaacs | de7f039b99 | |
cbartondock | c2e530af17 | |
Ozzie Isaacs | 19797d5d23 | |
Ozzie Isaacs | 144c2b5fc7 | |
cbartondock | 577b525508 | |
GarcaMan | 7da40d1c2e | |
Ozzie Isaacs | 4e3a5ca33b | |
Ozzie Isaacs | 97e4707f72 | |
cbartondock | 541dd6b579 | |
Ozzie Isaacs | 450ee43677 | |
cbartondock | e57229e670 | |
Ozzie Isaacs | c0b2e886d2 | |
Ozzie Isaacs | bb4749c65b | |
cbartondock | 1ef2a96465 | |
ElQuimm | 5aa37c68a2 | |
Ozzie Isaacs | 6e5a1a1f4d | |
Ozzie Isaacs | b624ce16c3 | |
Ozzie Isaacs | 3be5b5f919 | |
Ozzie Isaacs | 04ac5b69ac | |
Ozzie Isaacs | 9cc14ac5c7 | |
BuildTools | 755eb1405b | |
subdiox | 0138294b36 | |
Ozzie Isaacs | c0a4addf30 | |
Ozzie Isaacs | 39bbee0eeb | |
Ozzie Isaacs | 39dda3f534 | |
Ozzie Isaacs | 1cb8dbe795 | |
cbartondock | 85f1ca6101 | |
Ozzie Isaacs | e13820bbf0 | |
malletfils | 3973362457 | |
cbartondock | 2f3f13afb9 | |
Ozzie Isaacs | b38877e193 | |
Ozzie Isaacs | 0e1dbb5377 | |
Ozzie Isaacs | f07cc8b103 | |
Ozzie Isaacs | 67775bc797 | |
ElQuimm | 05933a5f0c | |
cbartondock | e15ebd5aac | |
Ozzie Isaacs | d32b2ca524 | |
Ozzie Isaacs | d0a895628e | |
Ozzie Isaacs | 90f2b3fb21 | |
Ozzie Isaacs | 0f95800dde | |
cbartondock | ff99cd2456 | |
Ozzieisaacs | 04971f8672 | |
Ozzieisaacs | b6177b27f4 | |
Ozzieisaacs | ae97e87506 | |
cbartondock | df91ca500f | |
Ozzie Isaacs | 2d73f541c0 | |
Ozzie Isaacs | 7561eabe52 | |
Ozzie Isaacs | 067fb1b0b7 | |
Ozzie Isaacs | 78071841cc | |
Ozzie Isaacs | fac232229e | |
Ozzie Isaacs | a43021e87c | |
cbartondock | eb599373ee | |
cbartondock | 7b529b5f9e | |
Ozzie Isaacs | 1a0de3b3cf | |
Ozzie Isaacs | 947154dc9c | |
Ozzie Isaacs | 8acd1f1fe4 | |
Ozzie Isaacs | 665210e506 | |
Ozzie Isaacs | 91b9370a21 | |
Ozzie Isaacs | ca0aa600b4 | |
Ozzie Isaacs | 1c42c4c969 | |
cbartondock | 389f0ae452 | |
Ozzie Isaacs | 6e406311c3 | |
cbartondock | 00a24756f4 | |
cbartondock | 9025f312cc | |
cbartondock | d5e453edd3 | |
cbartondock | 26cc64bdd6 | |
robochud | 87884c1af2 | |
Ozzie Isaacs | c0623a949c | |
cbartondock | 42a23ea23c | |
cbartondock | 710c31d1ca | |
cbartondock | f637ba4dad | |
Ozzie Isaacs | 80d2c60cef | |
Ozzie Isaacs | b17d71e4c3 | |
Ozzie Isaacs | 3c5bd3a605 | |
Ozzie Isaacs | 7cc8d1e693 | |
Ozzie Isaacs | 99520d54a5 | |
Ozzie Isaacs | e10a8c078b | |
Ozzie Isaacs | 7a196fed7c | |
Ozzie Isaacs | c418c7e725 | |
Ozzie Isaacs | e35748deff | |
Ozzie Isaacs | b57efbe31c | |
Ozzie Isaacs | 8f91437701 | |
Ozzie Isaacs | a7eb547ca4 | |
Ozzie Isaacs | 0ceb12f74f | |
Ozzie Isaacs | 4960624de7 | |
Ozzie Isaacs | 2c92f24d89 | |
Ozzie Isaacs | 0b7c679dba | |
Ozzie Isaacs | aab3cdc58a | |
Ozzie Isaacs | 3c63f5b204 | |
Ozzie Isaacs | 0b97bbf827 | |
Ozzie Isaacs | 970dbb0c59 | |
Ozzie Isaacs | d21d3c2ceb | |
Ozzie Isaacs | 0f6493b8ce | |
Ozzie Isaacs | f26beec1d3 | |
Ozzie Isaacs | c4225e29ed | |
Ozzie Isaacs | 9243326cd3 | |
Ozzie Isaacs | 86beb8023a | |
Ozzie Isaacs | f4e134742b | |
Ozzie Isaacs | 2b17bf4114 | |
Ozzie Isaacs | c4f0fc8f7b | |
Ozzie Isaacs | 3d6c836e7d | |
Gavin Mogan | c279055af4 | |
Gavin Mogan | 657cba042a | |
Ozzie Isaacs | 7a58e48cae | |
Ozzie Isaacs | 670eab62bf | |
Ozzie Isaacs | fc85586809 | |
Ozzie Isaacs | 837fc4988d | |
Ozzie Isaacs | 4664b47851 | |
Ozzie Isaacs | 9864d932e0 | |
Ozzie Isaacs | 436f60caa9 | |
Ozzie Isaacs | e9530eda9d | |
Ozzie Isaacs | f4ddac16f9 | |
Ozzie Isaacs | 33bdc07f55 | |
Ozzie Isaacs | 130a4ed2d3 | |
Ozzie Isaacs | 59ebc1af8a | |
Ozzie Isaacs | 10731696df | |
Ozzie Isaacs | 82e15d2e98 | |
Ozzie Isaacs | 9c842f1895 | |
Ozzie Isaacs | 8cc849488b | |
Ozzie Isaacs | fcaa232967 | |
Ozzie Isaacs | d3f8153b90 | |
Ozzie Isaacs | 99ecf22790 | |
Ozzie Isaacs | b1b7ee65b4 | |
Ozzie Isaacs | 6889456662 | |
Ozzie Isaacs | 081553dc9f | |
Ozzie Isaacs | 1e7a2c400b | |
Ozzie Isaacs | 30e897af48 | |
Ozzie Isaacs | dd30ac4fbd | |
Ozzie Isaacs | 5d8d796807 | |
Ozzie Isaacs | f3d88fc746 | |
Ozzie Isaacs | d87ccae6c9 | |
Ozzie Isaacs | 2760a7816d | |
Ozzie Isaacs | 8f5c649d0f | |
Ozzie Isaacs | fcf9e7a1ef | |
Ozzie Isaacs | 2be7b6480a | |
cbartondock | f5ded86c02 | |
rra | 8abfaf0ffd | |
Ozzie Isaacs | b070ba142f | |
Ozzie Isaacs | bd7c6828bf | |
cbartondock | d0671ec58c | |
cbartondock | 1e40ffd1cc | |
Ozzie Isaacs | da2c3e9ed7 | |
Ozzie Isaacs | f62d6abb69 | |
Ozzie Isaacs | 12ad7a6322 | |
Ozzie Isaacs | b75247ea3a | |
Ozzie Isaacs | 9a963bbe79 | |
Ozzie Isaacs | 994bc8b0e4 | |
Ozzie Isaacs | 2451605033 | |
Ozzie Isaacs | 10942527f3 | |
Ozzie Isaacs | 4909ed5ccd | |
Ozzie Isaacs | 5cf5df68dc | |
Ozzie Isaacs | dd32cc99ea | |
Ozzie Isaacs | 79092dc8eb | |
Ozzie Isaacs | 6229e4610a | |
Northguy | bfe36d3f4a | |
Northguy | d42bf44fad | |
rra | 33e352819c | |
Ozzie Isaacs | 53ee0aaee1 | |
Ozzie Isaacs | 42707a19bd | |
Ozzie Isaacs | 0888706790 | |
Ozzie Isaacs | 16453a05f8 | |
Ozzie Isaacs | 2fbb7466d3 | |
Ozzie Isaacs | f29f94f45f | |
Ozzie Isaacs | cd973868fc | |
Ozzie Isaacs | 3c35f02cac | |
Ozzie Isaacs | 22c93e2389 | |
Ozzie Isaacs | 8c751eb532 | |
Ozzie Isaacs | 4df443e007 | |
Ozzie Isaacs | f52fa41439 | |
Ozzie Isaacs | f77d72fd86 | |
Ozzie Isaacs | 9b80c84794 | |
Ozzie Isaacs | 725fc658f8 | |
alfred82santa | 24bbf226a1 | |
Ozzie Isaacs | e4e27662f5 | |
cbartondock | 83474da7b5 | |
cbartondock | 9146e5f287 | |
cbartondock | ff4502c63a | |
cbartondock | 9711bd8fe1 | |
cbartondock | 05139e53be | |
cbartondock | 870b2642a9 | |
ElQuimm | d31b26ae7d | |
Zaroz | 5511925ba2 | |
Angel Docampo | f96b20717d | |
Zaroz | 940c9c45d7 | |
Ozzie Isaacs | 87d6008dfc | |
Ozzie Isaacs | b9c0c8d2dc | |
Ozzie Isaacs | 81c30d5fd5 | |
Ozzie Isaacs | 0aa33d88a5 | |
Ozzie Isaacs | e64a504bb1 | |
jvoisin | bc876a159e | |
Ozzieisaacs | 4aa1a838ed | |
Ozzieisaacs | 095a51edd0 | |
Ozzie Isaacs | a3a11bdf3f | |
Ozzie Isaacs | 70b503f3d4 | |
Ozzie Isaacs | bbf609b880 | |
Ozzie Isaacs | 0992bafe30 | |
Ozzie Isaacs | c810c5275a | |
Ozzie Isaacs | 3c1b06872d | |
Ozzie Isaacs | cefdd2f66c | |
Ozzie Isaacs | 5dac13b1da | |
alfred82santa | 6014b04b2a | |
Ozzie Isaacs | 8aebf48193 | |
Ozzie Isaacs | fbb905957b | |
Ozzie Isaacs | e0ce135838 | |
Ozzie Isaacs | 60497c60c1 | |
Ozzie Isaacs | 251a77c8b4 | |
alfred82santa | 2b7c1345ee | |
alfred82santa | 69b7d94774 | |
Ozzie Isaacs | 7b7494b8a4 | |
alfred82santa | 8fe762709b | |
alfred82santa | a3f17deb17 | |
Ozzie Isaacs | 9390dcdd43 | |
Ozzie Isaacs | e6fb460071 | |
Ozzie Isaacs | 6137fdeb33 | |
Ozzie Isaacs | 4a4d02ea6a | |
Ozzie Isaacs | be26e5f152 | |
Ozzie Isaacs | 127bfba135 | |
Ozzie Isaacs | 7efae3c125 | |
Ozzie Isaacs | 1e5af21000 | |
Ozzie Isaacs | 33a0a4c173 | |
Ozzie Isaacs | eeb7974e05 | |
Ozzie Isaacs | f45ea1a31c | |
Ozzie Isaacs | a866dbaa80 | |
Ozzie Isaacs | 62447d6b89 | |
Ozzie Isaacs | 93f0724b83 | |
Ozzie Isaacs | 724762843d | |
Ozzie Isaacs | ff16afbf0b | |
Ozzie Isaacs | 9d7ef25062 | |
Ozzie Isaacs | 88078d65e9 | |
Ozzie Isaacs | b07a97c17e | |
Ozzie Isaacs | 41e7d65e2a | |
Ozzie Isaacs | 7fa5865cf6 | |
Ozzie Isaacs | e0b8fe3b1a | |
Ozzie Isaacs | 4a11dd1e16 | |
Ozzie Isaacs | 34a474101f | |
Ozzie Isaacs | e6799e7a04 | |
Ozzie Isaacs | 0f83f9992c | |
Ozzie Isaacs | d2ad78eb40 | |
Ozzie Isaacs | 0b32738f4e | |
Ozzie Isaacs | a9cedb3fca | |
Ozzie Isaacs | 51f9cd4bb4 | |
Ozzie Isaacs | a1668e2411 | |
Ozzie Isaacs | 9418045a2c | |
Ozzie Isaacs | a7da6d210a | |
chbmb | 59a41dc844 | |
Ozzie Isaacs | e09f2c9beb | |
Ozzie Isaacs | 4bc3c8d9ac | |
Ozzie Isaacs | 9acea8adf4 | |
Ozzie Isaacs | e5f754ed0e | |
Ozzie Isaacs | 263a8f9048 | |
Ozzie Isaacs | 6f9e52792a | |
Ozzie Isaacs | 4a9b01e93b | |
Ozzie Isaacs | b7de23e895 | |
Ozzie Isaacs | f358f78da8 | |
jvoisin | b8ab66369e | |
jvoisin | 54a78d5565 | |
mmonkey | 2c8d055ca4 | |
mmonkey | 8cc06683df | |
mmonkey | 774799f316 | |
mmonkey | b4324cd685 | |
Ozzieisaacs | ca212c8737 | |
Ozzieisaacs | 6fe4ed3e24 | |
Ozzieisaacs | a659f2e49d | |
OzzieIsaacs | 760fbbb357 | |
OzzieIsaacs | 56388145b5 | |
Ozzieisaacs | cd60db417c | |
Ozzieisaacs | 5cce01215f | |
mmonkey | d53daaa387 | |
mmonkey | bc8bdfe385 | |
Ozzieisaacs | 4578af7a6d | |
Ozzieisaacs | 9e1cdd8f57 | |
OzzieIsaacs | 682c3b834f | |
OzzieIsaacs | e269bab186 | |
OzzieIsaacs | 33a89c5d89 | |
Ozzie Isaacs | 139047b22b | |
OzzieIsaacs | 9b50114852 | |
Ozzieisaacs | b100d198e8 | |
Ozzieisaacs | 7849f2fb4b | |
Ozzieisaacs | b35ecddde3 | |
Ozzieisaacs | bde7921016 | |
mmonkey | 35c60eaee5 | |
Ethan Lin | ee28e3346b | |
Ozzieisaacs | d33b0587cb | |
Ozzieisaacs | 7e0ed537b7 | |
Ozzieisaacs | 1e351eb01d | |
Ozzieisaacs | 1a83bddf8c | |
Ozzieisaacs | 2a63c35743 | |
Ozzieisaacs | 27dcbcd7e1 | |
mmonkey | af24d4edbe | |
mmonkey | 0cf4c7b7b7 | |
mmonkey | eef21759cd | |
mmonkey | 242a2767a1 | |
ElQuimm | 623372387d | |
mmonkey | 626051e489 | |
OzzieIsaacs | abf0f4d699 | |
Ozzieisaacs | 2bea447de5 | |
mmonkey | 541fc7e14e | |
Ozzieisaacs | f6538b6110 | |
Ozzieisaacs | 2d3ae71a3d | |
mmonkey | e48bdf9d5a | |
mmonkey | 21fce9a5b5 | |
mmonkey | 774b9ae12d | |
Ozzieisaacs | 9a20faf640 | |
andylizi | 123493ee59 | |
mmonkey | 2d498dd138 | |
Ozzieisaacs | 376214e2d2 | |
Ozzieisaacs | 62da469fd1 | |
Ozzieisaacs | d64009e23e | |
Ozzieisaacs | fd8b642d64 | |
OzzieIsaacs | d5ed5cd665 | |
OzzieIsaacs | fa95d064ff | |
Ozzieisaacs | 1905e0ee6f | |
Ozzieisaacs | 96b18faea1 | |
Ozzieisaacs | 0d7f2e157a | |
Marcel | 2d66936d8b | |
Ozzieisaacs | b637a63e71 | |
Ozzieisaacs | 1ae778d81e | |
Ozzieisaacs | f4412ee96b | |
Ozzieisaacs | 9130aceb5a | |
Ozzieisaacs | b8336c03c3 | |
Ozzieisaacs | 76c3ade394 | |
Ozzieisaacs | d9b22fd513 | |
Ozzieisaacs | 88ea998f8b | |
Ozzieisaacs | 8b605aeaa8 | |
Ozzie Isaacs | db91577485 | |
Ozzie Isaacs | 5d9404863d | |
Ozzieisaacs | d3986ca14a | |
Ozzieisaacs | 2508c1abb2 | |
OzzieIsaacs | 983e3b2274 | |
Ozzieisaacs | 72a02e087c | |
Ozzieisaacs | 352b4a0b73 | |
Ozzieisaacs | d957b2d20f | |
Ozzieisaacs | dcab8af8ab | |
OzzieIsaacs | e1987c34d9 | |
OzzieIsaacs | a80a8aab1c | |
OzzieIsaacs | d6fbcdb09d | |
OzzieIsaacs | d39b28b011 | |
OzzieIsaacs | 8f36128fe3 | |
OzzieIsaacs | 986d4c99bd | |
Ozzieisaacs | f677dcb1f4 | |
Ozzieisaacs | 1a9b220ec2 | |
Ozzieisaacs | d15d252af7 | |
Ozzieisaacs | 5e3618716d | |
Ozzieisaacs | f13522559d | |
Ozzieisaacs | 777c2726d3 | |
Ozzieisaacs | c25afdc203 | |
Ozzieisaacs | c25f6d7c38 | |
Ozzieisaacs | 388e46ee81 | |
Ozzieisaacs | 242866948f | |
Ozzieisaacs | 15bb0ce990 | |
Ozzieisaacs | a82911ea5d | |
Ozzieisaacs | 9a8f20317b | |
Ozzieisaacs | b605a0f622 | |
Ozzieisaacs | 046a074c3a | |
Ozzieisaacs | 7c96fac95c | |
Ozzieisaacs | a3ef53102d | |
Ozzieisaacs | 68513b775b | |
Ozzieisaacs | a79dcc93f6 | |
Ozzieisaacs | 7aabfc573b | |
Ozzieisaacs | 22dde5d18e | |
Ozzieisaacs | 56505457eb | |
Ozzieisaacs | 5930c6d5fb | |
Ozzie Isaacs | 6b162a4e49 | |
Ozzieisaacs | c3b9888b31 | |
Ozzieisaacs | cb8dfdde4c | |
OzzieIsaacs | ff5d333fc8 | |
Ozzieisaacs | 8eb4b6288a | |
Ozzieisaacs | f18836be90 | |
Ozzieisaacs | 3372070a58 | |
Ozzieisaacs | f99e2ebd13 | |
Ozzieisaacs | 4feb26eefb | |
Ozzieisaacs | 2da7cd2064 | |
Ozzieisaacs | 9d7daf7afd | |
Ozzieisaacs | cb1ebc1cd0 | |
Ozzieisaacs | 6ad56a0859 | |
Ozzieisaacs | 8515781564 | |
ElQuimm | 7e2bfbd255 | |
Jennifer Thakar | e2325f7ba4 | |
OzzieIsaacs | 9afdab8c52 | |
Ozzieisaacs | f620d4a9ca | |
Ozzieisaacs | fb1e763bbe | |
Ozzieisaacs | ecea7e7493 | |
Ozzieisaacs | 560ade00b4 | |
Ozzieisaacs | 81ea24ad54 | |
Ozzieisaacs | 747b25046a | |
Ozzieisaacs | b9536812f4 | |
Ozzieisaacs | eed2f0a430 | |
Ozzieisaacs | 31fe8cd263 | |
Ozzieisaacs | a3fadbaa1a | |
Ozzieisaacs | e2be655d74 | |
Ozzieisaacs | 2cd653c773 | |
Ozzieisaacs | 6538aff02f | |
OzzieIsaacs | c1e3dec9be | |
Ozzieisaacs | 42c13ae135 | |
Ozzieisaacs | 9bd51c650b | |
Ozzieisaacs | ba1c1c87c4 | |
Ozzieisaacs | ceefba2743 | |
Ozzieisaacs | 70da46b05e | |
Ozzieisaacs | 32b7b39223 | |
verglor | 2343c79126 | |
verglor | 09a5a69f86 | |
Ozzieisaacs | 4081895a78 | |
Ozzieisaacs | a522566a0c | |
Ozzieisaacs | 013c4e9c35 | |
Ozzieisaacs | 067f289050 | |
Ozzieisaacs | 06511b92aa | |
Ozzieisaacs | 32f4c9eabf | |
Ozzieisaacs | 14bc345883 | |
Ozzieisaacs | d76b4fd7ea | |
ElQuimm | 83cdd7e9fb | |
verglor | 50441bae62 | |
Martin | 5b0766a9b0 | |
Martin | d979fe8e5f | |
Markus Gruber | f2c52fd278 | |
Ozzieisaacs | 87d60308f2 | |
L0garithmic | 64ebc56c87 | |
Ozzieisaacs | 58d485cbb5 | |
Ozzieisaacs | e99dd3310c | |
Ozzieisaacs | 400f6e02a5 | |
jvoisin | 700b0609df | |
cbartondock | bc52f90ed4 | |
Ozzieisaacs | 0771546dad | |
jvoisin | 95a1a71a66 | |
OzzieIsaacs | b9b8e3f632 | |
Ozzieisaacs | 1e03a2ae40 | |
Ozzieisaacs | 25da6aeeca | |
Ozzieisaacs | 00b422807b | |
Ozzieisaacs | 130701a7bb | |
Ozzieisaacs | ff88e68904 | |
Ozzieisaacs | 55fcf23d2b | |
Ozzieisaacs | 1b0b4c4cc5 | |
Ozzieisaacs | cd57731593 | |
Ozzieisaacs | 37a80b935d | |
Ozzieisaacs | 4d61aec153 | |
Ozzieisaacs | 1adf25d50b | |
Julian Naydichev | ae33aee3f6 | |
jvoisin | 2c99e71626 | |
jvoisin | e7f7775efa | |
jvoisin | 8b60a19577 | |
Ozzieisaacs | 5701e08db9 | |
Ozzie Isaacs | 72a2fc49f8 | |
jvoisin | d2617322c6 | |
jvoisin | fa82745f64 | |
jvoisin | 19b2a334e4 | |
vagra | 627c2adf08 | |
ElQuimm | e3e137ca50 | |
cbartondock | 0978be580f | |
Ozzieisaacs | 5792838333 | |
Ozzieisaacs | f9995583a5 | |
KN4CK3R | 4cc68bd139 | |
cbartondock | d2510ad1bc | |
celogeek | 4d81d3613c | |
celogeek | 7d28963a32 | |
celogeek | b2594468b4 | |
celogeek | 754b9832e9 | |
celogeek | 097ac879ea | |
Ozzieisaacs | bc0416cbb4 | |
Ozzieisaacs | 2814617e4b | |
cbartondock | da9dfd166d | |
cbartondock | 1be07a42df | |
cbartondock | 1d83a6a898 | |
Ghighi | 2ff286b672 | |
Ozzie Isaacs | e8620a0986 | |
Ozzie Isaacs | 7fb18bbdc7 | |
Ozzieisaacs | 5b67b687d3 | |
Ghighi Eftimie | 9adcfa99f4 | |
Ghighi Eftimie | e723aaa5f6 | |
Ozzie Isaacs | d128b037a5 | |
Ozzieisaacs | 99fda00442 | |
Ghighi Eftimie | f574f8faf0 | |
Ghighi Eftimie | cedfa90d76 | |
Ghighi Eftimie | f1e6f6e505 | |
Ghighi Eftimie | b4f95cced7 | |
Ozzieisaacs | e16c0caebb | |
Ozzieisaacs | 52489a484a | |
Ozzieisaacs | e1d5c2c578 | |
Ozzieisaacs | 49f49632ad | |
Ghighi Eftimie | 6dadc6fb1e | |
Ozzieisaacs | a58a2f5fe4 | |
Ghighi Eftimie | c33c6bbff0 | |
Ozzieisaacs | 093f90d4c1 | |
Ozzieisaacs | 20ffa325d3 | |
Ghighi Eftimie | 5027304801 | |
Ozzieisaacs | 85d5afd6d9 | |
OzzieIsaacs | df295e92ee | |
Ozzieisaacs | 2e67bd2407 | |
OzzieIsaacs | 9aa01ee8cf | |
OzzieIsaacs | 6b993ad329 | |
OzzieIsaacs | d70ded0993 | |
OzzieIsaacs | 3dacdcc8bb | |
OzzieIsaacs | bb03026589 | |
Ghighi Eftimie | 2f69e3141e | |
Ozzieisaacs | 1f4564da76 | |
OzzieIsaacs | 3b8e5ddfb3 | |
Nacho Soler | edc293f96a | |
blitzmann | 3fa4149bb0 | |
OzzieIsaacs | 4b68a6ff23 | |
Ozzieisaacs | 28116c49dc | |
Ozzieisaacs | 6e6f144b7a | |
Ozzieisaacs | e3f4f24c3e | |
Ozzieisaacs | eb37e3a52b | |
Ozzieisaacs | 95d540630e | |
Ozzieisaacs | 23d66a0d68 | |
Ozzieisaacs | 0d64692c84 | |
Ozzieisaacs | 1cb640e51e | |
Ozzieisaacs | 2d98285545 | |
OzzieIsaacs | 376001100a | |
Ozzieisaacs | e2954249f8 | |
Ozzieisaacs | cc0b0196f4 | |
Ozzieisaacs | 6dfa171b4e | |
Ozzieisaacs | b140073988 | |
Ozzieisaacs | c22bc857b0 | |
Ozzieisaacs | 497fbdcdfc | |
Ozzieisaacs | 861f1b2ca3 | |
Ozzieisaacs | 6108ef4c89 | |
OzzieIsaacs | a9c0bcb4a2 | |
Ozzie Isaacs | 8f9de05768 | |
Ozzieisaacs | e61e94f0fa | |
Ozzieisaacs | 85aac02593 | |
Ozzieisaacs | 9a678c41fe | |
Ozzieisaacs | 7c8f6ce62f | |
Ozzieisaacs | 9a896ea81e | |
Ozzieisaacs | 422c1893c0 | |
ElQuimm | f1cb5276d7 | |
Ozzieisaacs | bed1b24340 | |
Ozzieisaacs | da909ff084 | |
Ozzieisaacs | 8f743b70a4 | |
Ozzieisaacs | a761017116 | |
Ozzieisaacs | f06cc25a99 | |
Ozzieisaacs | eff8480d5c | |
Ozzieisaacs | eec303de49 | |
ElQuimm | 07a936c5e8 | |
Ryan Long | a6002a2c6c | |
Ozzieisaacs | 7ba014ba49 | |
Ozzieisaacs | 165c649f31 | |
Ozzieisaacs | 4cf71dd336 | |
Ozzieisaacs | c0a401216b | |
Ozzieisaacs | 2d712a3841 | |
Ozzieisaacs | a2b5b4dd17 | |
blitzmann | 0480edce2a | |
Alexander Yakovlev | 4eded82102 | |
Alexander Yakovlev | ec4ff83465 | |
Alexander Yakovlev | 8745b8b051 | |
Alexander Yakovlev | 4e28c3cadb | |
blitzmann | 76c724c783 | |
blitzmann | 032cb59388 | |
OzzieIsaacs | e29247774c | |
blitzmann | 18d16f9a8b | |
OzzieIsaacs | d137735be1 | |
dickreckard | 65929c02bc | |
root | 22466d6b98 | |
dickreckard | 23fe79c618 | |
OzzieIsaacs | b202ca5619 | |
OzzieIsaacs | 7929711fea | |
OzzieIsaacs | 49a028a599 | |
OzzieIsaacs | 670cbcd336 | |
Ozzieisaacs | 449d31e8a1 | |
Ozzieisaacs | fe82583813 | |
Ozzieisaacs | 1450a21d00 | |
Ozzieisaacs | ae15544aed | |
Ozzieisaacs | 4b7a37cf7d | |
dickreckard | cb7727900c | |
Ozzieisaacs | e012726cd4 | |
Ozzieisaacs | f49688fdb9 | |
Ozzieisaacs | e32b017431 | |
Ozzieisaacs | 393869e538 | |
Ozzieisaacs | d3bde0408f | |
Ozzieisaacs | 34d3225984 | |
Ozzieisaacs | eaed53e25b | |
OzzieIsaacs | feacbe8ebd | |
norangebit | bdf6052388 | |
Ozzieisaacs | ec0a0190e7 | |
norangebit | 99d653eece | |
OzzieIsaacs | f825c5ae83 | |
OzzieIsaacs | 173484c30e | |
OzzieIsaacs | 884270093b | |
Dawid Gliwka | 62e7ab8c2b | |
blitzmann | ded480207b | |
blitzmann | ef49e2b5b3 | |
blitzmann | b0a055a870 | |
blitzmann | 9b9e29a3b6 | |
OzzieIsaacs | 1a9a436cbe | |
OzzieIsaacs | 62dd29d2f3 | |
OzzieIsaacs | d406d91856 | |
OzzieIsaacs | 98494c610a | |
OzzieIsaacs | 649a553fa4 | |
OzzieIsaacs | e17a06d7bd | |
OzzieIsaacs | 4b14cc6a74 | |
OzzieIsaacs | 65560ab65e | |
OzzieIsaacs | b65095fd0f | |
OzzieIsaacs | 28733179d2 | |
OzzieIsaacs | 960d23ca50 | |
Michael Knepher | 057f70ea9c | |
blitzmann | 3e378bd665 | |
blitzmann | c120138f26 | |
blitzmann | 31b20362ec | |
blitzmann | e7eb5b6ea6 | |
OzzieIsaacs | fe91ed815e | |
OzzieIsaacs | 9e5cad0dc8 | |
OzzieIsaacs | 36cbc42363 | |
blitzmann | 4cb82ea9bd | |
blitzmann | 572ac4a17b | |
Ozzie Isaacs | 6c552f6b43 | |
Ozzie Isaacs | 9723129436 | |
Ozzie Isaacs | dc5191f8ba | |
OzzieIsaacs | d79726899f | |
Ozzie Isaacs | 8ab5710098 | |
OzzieIsaacs | 61d628d596 | |
Sean Leonard | 078fc25845 | |
Marvel Renju | 67eb4b317a | |
blitzmann | 6322919bc7 | |
blitzmann | b81b8a1dea | |
blitzmann | f3a3797850 | |
blitzmann | 5ec1283bb1 | |
blitzmann | 8634b0c6f0 | |
Ozzie Isaacs | d89830af61 | |
Ozzie Isaacs | ef1736b571 | |
Dave Mogle | d951ee4b83 | |
blitzmann | 04081f62c4 | |
blitzmann | 0f28dc5e55 | |
blitzmann | 508f49df18 | |
blitzmann | bec280c6b1 | |
blitzmann | 6a8ae9c0c4 | |
blitzmann | ac22483f98 | |
blitzmann | 59d56d5c83 | |
OzzieIsaacs | cdaad2fb4a | |
OzzieIsaacs | 843279bacb | |
blitzmann | a000de0270 | |
blitzmann | bf41b04cfa | |
blitzmann | 9ce2e8ea53 | |
OzzieIsaacs | f066926fc9 | |
OzzieIsaacs | 4c38b0ab10 | |
OzzieIsaacs | cf35c9dcef | |
OzzieIsaacs | 45ff9394f2 | |
blitzmann | 414043ded1 | |
blitzmann | 2533c9c14e | |
blitzmann | f10f0dada6 | |
OzzieIsaacs | 282859bc1b | |
OzzieIsaacs | a3ae97a5a3 | |
OzzieIsaacs | ed8275a20b | |
OzzieIsaacs | f2add3f788 | |
OzzieIsaacs | ad144922fb | |
OzzieIsaacs | b9c0f2a3d3 | |
OzzieIsaacs | 0cc07362b8 | |
Brandon Ingli | 4ee5dcaff3 | |
Ozzie Isaacs | 4d44746a88 | |
OzzieIsaacs | 1535bdbcd8 | |
OzzieIsaacs | ecb160b28d | |
OzzieIsaacs | 7e9941def0 | |
OzzieIsaacs | f9c6fb30bf | |
Ozzie Isaacs | 94ad93ebd7 | |
blitzmann | 0e1ec5034e | |
Marvel Renju | 76b0505bd9 | |
OzzieIsaacs | b309c1fc91 | |
OzzieIsaacs | b9a802a19c | |
OzzieIsaacs | a882ad3e65 | |
Ryan Holmes | 969105b205 | |
Ryan Holmes | 28bfb06c67 | |
Ryan Holmes | 704dcb3e58 | |
Efreak | a7e723d8d4 | |
Ren Wei | 4c6f5096be | |
Ozzie Isaacs | 1a1d105fae | |
Ozzie Isaacs | 42a0639bb5 | |
Ozzie Isaacs | e27b08203d | |
Clément Poissonnier | 1ca4583896 | |
Ozzie Isaacs | 25fc6f1937 | |
Ozzie Isaacs | 0ccc3f7252 | |
Ozzie Isaacs | 66acd1821d | |
Ozzie Isaacs | 93a0217d5f | |
Jef LeCompte | 711a697570 | |
Ozzieisaacs | f8139f3198 | |
Ozzieisaacs | df01022f49 | |
Ozzieisaacs | 450411a732 | |
Ozzieisaacs | f80c67828b | |
Ozzieisaacs | d1889a5e06 | |
Ozzieisaacs | 12263ff02f | |
Ozzieisaacs | 76f914cbc2 | |
Ozzieisaacs | c1f5252b3f | |
Ozzieisaacs | 20c6f79a44 | |
Ozzieisaacs | ee6f1405d4 | |
Ozzieisaacs | ee3541d74e | |
Jef LeCompte | e048388213 | |
Ozzieisaacs | 3ff3431b17 | |
Ozzieisaacs | 8608ff11f7 | |
Michael Knepher | 7e0d9fbace | |
Ozzieisaacs | c6c9cfea22 | |
Ozzieisaacs | a758976c69 | |
OzzieIsaacs | a14192b7e0 | |
OzzieIsaacs | 601464083b | |
Ozzieisaacs | ccca5d4d1c | |
Ozzieisaacs | ba10657829 | |
Ozzieisaacs | 6315655f93 | |
Ozzieisaacs | 533cb23b73 | |
Ozzieisaacs | 0b95424a0d | |
Ozzieisaacs | 852f252d13 | |
Ozzieisaacs | 080042882f | |
Ozzieisaacs | dde5b08c47 | |
Ozzieisaacs | 88d2c60ee8 | |
Ghighi Eftimie | eeff5a5d43 | |
Michael Knepher | 67dd4a72b0 | |
Ozzieisaacs | a0b8cc21cc | |
Ozzieisaacs | 329a7a03a5 | |
Ozzieisaacs | d0a3503d74 | |
Ozzieisaacs | 59b78f9984 | |
Ozzieisaacs | bf75f16169 | |
Ozzieisaacs | 2f833dc457 | |
Ozzieisaacs | 22344a3971 | |
Ozzieisaacs | 8dde6ba60f | |
Ozzieisaacs | d44f283a05 | |
Ozzieisaacs | c18d5786dd | |
Ozzieisaacs | f26ccfe16c | |
Ozzieisaacs | 1c681ee378 | |
flying-sausages | 4c1ae44bbe | |
Ozzieisaacs | 94b5ec91cc | |
Ozzieisaacs | 6bfbf3ee41 | |
Lukáš Heroudek | fc24fa337e | |
Ozzieisaacs | 7b4306b1d6 | |
Ozzieisaacs | 4516cc0d65 | |
Ozzieisaacs | 308784c601 | |
Ozzieisaacs | 9145c9a52c | |
Ozzieisaacs | 4038cb5b85 | |
Ozzieisaacs | 628658972c | |
Ozzieisaacs | db0fe9a755 | |
Ozzieisaacs | fdf10e3d2e | |
Ozzieisaacs | ded3e06a9b | |
Ozzieisaacs | 0dd0605a1f | |
Ozzieisaacs | 827b0c6e50 | |
Ozzieisaacs | d1b533848d | |
Ozzieisaacs | 13ae28edab | |
Ozzieisaacs | 82253219e8 | |
Ozzieisaacs | 94592b74a6 | |
Ozzieisaacs | eef2112e1e | |
Ozzieisaacs | b83d56eff2 | |
Ozzieisaacs | 4c539b6db4 | |
OzzieIsaacs | 2ad329e64c | |
Ozzieisaacs | 4de89ec6ce | |
Ozzieisaacs | fc885c8fa2 | |
Ozzieisaacs | ef2c98ba39 | |
ElQuimm | 41d922867e | |
Ozzieisaacs | 89223760e6 | |
Ozzieisaacs | 9a8a1f75ca | |
flying-sausages | 52e8e7e4b0 | |
OzzieIsaacs | 93f65484de | |
Ozzieisaacs | 7d08cde8b8 | |
Ozzieisaacs | d3c2bf7dd4 | |
Michael Knepher | 54cf3652b0 | |
Ozzieisaacs | 5607d2086d | |
Ozzieisaacs | 27ed918896 | |
Ozzieisaacs | 9b2df9cfd9 | |
Michael Knepher | 5dd08e438c | |
Ozzieisaacs | 2f8c8c3a28 | |
Wolviex | dde7cf18f7 | |
Rafael Roa | 48aadbf716 | |
Ozzieisaacs | 5514af60e5 | |
ElQuimm | e8449eb02f | |
Ozzieisaacs | 9ea21a7ecb | |
Ozzieisaacs | fec86c2862 | |
Ozzieisaacs | 11e5d4c5b7 | |
Ozzieisaacs | b852fb0e26 | |
Ozzieisaacs | 0fc18bf3b1 | |
Rafael Roa | d4be584782 | |
Knepherbird | 2120d72901 | |
Ozzieisaacs | 6f9c08f906 | |
Ozzieisaacs | 70c666c380 | |
Ozzieisaacs | 46197d82b5 | |
Ozzie Isaacs | e4eab17595 | |
Ozzieisaacs | cf10244f20 | |
Ozzieisaacs | 570684d308 | |
Ozzieisaacs | 244db8d894 | |
Ozzieisaacs | 96d6018ecc | |
OzzieIsaacs | 73ad6dd0c4 | |
OzzieIsaacs | d0e15da352 | |
Ozzieisaacs | b1b293a3ec | |
Ozzieisaacs | 5f0660a4e5 | |
Ozzieisaacs | ec3a3a73ef | |
Ozzieisaacs | 087c4c5929 | |
Ozzieisaacs | e0ee2e0801 | |
Michael Knepher | c79a9e9858 | |
OzzieIsaacs | b3fdce36af | |
OzzieIsaacs | b7535b9526 | |
OzzieIsaacs | 0cf1cc5587 | |
Ozzieisaacs | 098dab889a | |
Ozzieisaacs | cc856c7cd1 | |
Ozzieisaacs | a20a155d39 | |
Ozzieisaacs | 1a458fe39f | |
celogeek | 051ffdda35 | |
OzzieIsaacs | a48418364c | |
Ozzieisaacs | b75497231e | |
Knepherbird | 5cceeb6ef2 | |
Ozzieisaacs | 29b94c5615 | |
Ozzieisaacs | 81fc1eccd3 | |
Braincoke | 22fae51c9d | |
Ozzieisaacs | 4332f7a640 | |
Ozzieisaacs | d0e603e62d | |
Ozzieisaacs | 38c28f4358 | |
Ozzieisaacs | 742799b4ac | |
Ozzieisaacs | 5405dc5141 | |
Ozzieisaacs | 013793f989 | |
Ozzieisaacs | 2468cf63cc | |
Ozzieisaacs | 8e9b5d7e50 | |
Ozzieisaacs | a33515d907 | |
Ozzieisaacs | a276a33081 | |
h1f0x | e325c3dbf6 | |
h1f0x | b3fef625a8 | |
Ozzieisaacs | 13994a5f96 | |
Ozzieisaacs | e8ac62cdd8 | |
Ozzieisaacs | 1bc7134ec2 | |
Ozzieisaacs | 273572f44c | |
Ozzieisaacs | ac37483d47 | |
Ozzieisaacs | 0cc2b49296 | |
Ozzieisaacs | 44d5adc16f | |
Ozzieisaacs | f749fc641b | |
Ozzieisaacs | 87c76c09e2 | |
Ozzieisaacs | e787f9dd9f | |
Ozzieisaacs | ec7d5b17ab | |
Ozzieisaacs | 47641eee59 | |
Ozzieisaacs | 42abe28cc1 | |
Ozzieisaacs | b48afa38ac | |
Ozzieisaacs | f7269d8df2 | |
Ozzieisaacs | ce0fab8d2f | |
Ozzieisaacs | 33472567de | |
Ozzieisaacs | f5e12328dc | |
Ozzieisaacs | 486c0f2937 | |
jvoisin | e69c4cd1dc | |
Marcel | e0d3ccd8d1 | |
Ozzieisaacs | 16a3deec2c | |
OzzieIsaacs | 92db00692a | |
OzzieIsaacs | fe88010a72 | |
OzzieIsaacs | 99ae4be2c2 | |
OzzieIsaacs | a9085752c1 | |
OzzieIsaacs | e1fbc9255c | |
OzzieIsaacs | f33e25ac40 | |
OzzieIsaacs | 51365ab006 | |
OzzieIsaacs | d61b7e48d7 | |
OzzieIsaacs | f590b24f85 | |
Ozzieisaacs | 308fd55483 | |
Ozzieisaacs | fefb44e612 | |
jvoisin | dd3b562f1a | |
jvoisin | 30c9aa3df9 | |
jvoisin | 688184e255 | |
Ozzieisaacs | 75fb7c2e95 | |
jvoisin | 264b4b669e | |
Ozzieisaacs | 51f12c51ad | |
Ozzieisaacs | 03d134697c | |
Ozzie Isaacs | e706e1a68d | |
Ozzieisaacs | ff3f42db95 | |
Ozzieisaacs | e8aedeac36 | |
Ozzie Isaacs | 2bf6b263ed | |
jvoisin | bf166b757a | |
jvoisin | b4165335a7 | |
jvoisin | 2a1bf2fa71 | |
Ozzieisaacs | 718d50a037 | |
Ozzieisaacs | 41960ada4a | |
Ozzieisaacs | 0a02caad04 | |
Ozzieisaacs | ff75bdba9e | |
Ozzieisaacs | 189243a9b0 | |
Ozzieisaacs | 34e339c506 | |
Ozzieisaacs | a437c603c6 | |
Ozzieisaacs | 4e940f7fa0 | |
Knepherbird | 69fde7dead | |
Ozzieisaacs | 4368c182a0 | |
Ozzieisaacs | 48f4b12c0e | |
Ozzieisaacs | 6a6c1b6b21 | |
Ozzieisaacs | 51808d2ad4 | |
Ozzieisaacs | 0735fb1e92 | |
Ozzieisaacs | 850a85915b | |
ElQuimm | 148f1109c6 | |
Ozzieisaacs | fcbeeca305 | |
Ozzieisaacs | fb16429867 | |
Ozzieisaacs | e1439b529b | |
ElQuimm | db38d7ee78 | |
Ozzieisaacs | 0adcd1b3d9 | |
OzzieIsaacs | 36a984ce3c | |
Ozzieisaacs | fcefd8031a | |
Ozzieisaacs | 64bebaa1d1 | |
Ozzieisaacs | 0138ff9e16 | |
Ozzieisaacs | 9bc085a23e | |
Ozzieisaacs | 1ce432b136 | |
Michael Shavit | e0fbfa44a4 | |
Ozzieisaacs | 0a92d79ec0 | |
Ozzieisaacs | b95f6563cc | |
Ozzieisaacs | 547bbecef1 | |
Ozzieisaacs | 700cb3b553 | |
Ozzieisaacs | 8646f8f23a | |
OzzieIsaacs | 99cc69c67d | |
OzzieIsaacs | 2c5d76908a | |
Ozzieisaacs | 832b34fc54 | |
Ozzieisaacs | 000b85ff81 | |
Ozzieisaacs | bb317d54f2 | |
Ozzieisaacs | d6f41d8dc0 | |
Marcel | 6dff5ed679 | |
OzzieIsaacs | fb8b6310d5 | |
Ozzieisaacs | 02aaf17ac5 | |
Ozzieisaacs | b160a8de0b | |
Ozzieisaacs | e3246fd751 | |
Ozzieisaacs | 91b1775f50 | |
Ozzieisaacs | fb18ab1ca5 | |
Ozzieisaacs | 01ff55c84e | |
jvoisin | 523aab2e9e | |
Ozzieisaacs | 9a7d9da654 | |
Ozzieisaacs | e9446556a1 | |
jvoisin | 806a5f209f | |
ZIzA | c864b368b0 | |
Ozzieisaacs | 27eb09fb19 | |
Ozzieisaacs | bea7223a0a | |
Ozzieisaacs | 0297823bda | |
Ozzieisaacs | d1bce5c2d9 | |
Ozzieisaacs | 46c0ae3ccc | |
Ozzieisaacs | ce6b689147 | |
Ozzieisaacs | 2d92818613 | |
jvoisin | 487878819e | |
Ozzieisaacs | 6682b1ced1 | |
Ozzieisaacs | bc89b0658a | |
Ozzieisaacs | d9dde36c74 | |
Ozzieisaacs | 44284ea5fb | |
Ozzieisaacs | 9f0c0b34af | |
Ozzieisaacs | 87ec44aed5 | |
Ozzieisaacs | 898e6b4f80 | |
Ozzieisaacs | dea2600913 | |
Ozzieisaacs | dc46ad16ae | |
Ozzieisaacs | 456550a943 | |
Ozzieisaacs | 7b789b3701 | |
Ozzieisaacs | 0480d493cf | |
Ozzieisaacs | b4d7733e0a | |
Ozzieisaacs | 8b8fe7a0ae | |
Ozzie Isaacs | 85ffc90f66 | |
Ozzieisaacs | fa38798066 | |
Ozzieisaacs | d657330584 | |
Ozzieisaacs | 36cb79de62 | |
Ozzieisaacs | 063ee5e855 | |
Ozzieisaacs | 7393b69757 | |
Ozzieisaacs | 8bd1903d98 | |
jvoisin | d8bf540db2 | |
Ozzieisaacs | e29f17ac46 | |
iz | 1770f3fd0d | |
iz | 4239f2ed71 | |
iz | 3939ec28ba | |
Michael Shavit | 742ec2b38d | |
Michael Shavit | 9296d35517 | |
Michael Shavit | 06c15a792e | |
Ozzieisaacs | 95ca1e6a9d | |
Ozzieisaacs | 1df82110d1 | |
Ozzieisaacs | 24c743d23d | |
ElQuimm | f1b1ebe29e | |
Jeff | 6384cdc74d | |
celogeek | d093fbb069 | |
celogeek | f9b4505843 | |
celogeek | ca2bcc647d | |
pthiben | b90048d72e | |
pthiben | a5fe08e84c | |
pthiben | 2874cf531c | |
pthiben | ea7058e896 | |
pthiben | 38bb10d36b | |
pthiben | 58943bb156 | |
pthiben | 2d66da3cb9 | |
pthiben | b7efbf9040 | |
pthiben | 5222e157cb | |
pthiben | 1e3a948977 | |
pthiben | 20cc5107da | |
pthiben | a6a4d5f09b | |
pthiben | 028c53287d | |
pthiben | 24a9bea137 | |
pthiben | ef1a2d1730 | |
pthiben | 6abff36e07 | |
Ozzieisaacs | 902685a197 | |
Ozzieisaacs | c04b146486 | |
Ozzieisaacs | 7bb5afa585 | |
Ozzieisaacs | 06fde4fcd0 | |
Ozzieisaacs | 195845ab0c | |
Ozzieisaacs | 81a329f1e7 | |
OzzieIsaacs | cd6272a1c9 | |
Ozzieisaacs | 53be752787 | |
Ozzieisaacs | 42ac06c114 | |
Ozzieisaacs | 9e159ed5ab | |
Ozzieisaacs | 2c42972230 | |
aribes | f926e58891 | |
Ozzieisaacs | a784c6bd52 | |
ElQuimm | 05d78f5cb5 | |
pthiben | 4ef7615d88 | |
pthiben | 77c2783a3e | |
Ozzieisaacs | ce4f1258b5 | |
Ozzieisaacs | 3fbaba6693 | |
Ozzieisaacs | a8b36aed92 | |
Ozzieisaacs | 4749eccfa5 | |
pthiben | 5b1dfc123f | |
Michael Shavit | 41a3623fcc | |
Marcel | 296ac203d4 | |
xcffl | 0c9436ca82 | |
xcffl | 70c9dd1b95 | |
xcffl | 80753b1115 | |
Ozzieisaacs | a194216568 | |
Ozzieisaacs | 8bee424cc0 | |
Ozzie Isaacs | 25ab3cabfe | |
Ozzieisaacs | 587174b771 | |
Ozzieisaacs | 3e1c34efe6 | |
Ozzieisaacs | 5864637f1c | |
Ozzieisaacs | ec6b346ca1 | |
Ozzieisaacs | e99f5bcced | |
Ozzieisaacs | b89309ab82 | |
OzzieIsaacs | 2d230ec96a | |
Ozzieisaacs | 4550333f1e | |
Ozzieisaacs | 3ba610eb64 | |
Ozzieisaacs | 2436c6a118 | |
Ozzieisaacs | 4de80b26c1 | |
Ozzieisaacs | bab14a1fbf | |
Ozzieisaacs | 0c27ff11b9 | |
BeckyDTP | 734e2ffbb2 | |
hexeth | da42c51af2 | |
Michael Shavit | 7cb6801241 | |
Ozzieisaacs | f6c04b9b84 | |
Ozzieisaacs | 4eacb21259 | |
Ozzie Isaacs | 6d1a3ccdcc | |
Josh O'Brien | c870f6e87d | |
Ozzieisaacs | 6d907094d7 | |
Ozzieisaacs | 6643f0d1e0 | |
Ozzieisaacs | 20b07e0752 | |
Ozzieisaacs | 092423adc7 | |
Ozzieisaacs | a50ca1a85f | |
Jony | 02199c8c1d | |
Jony | c166c92685 | |
Ozzieisaacs | f243515261 | |
Ozzieisaacs | 98181fe21c | |
Unknown | a26ce8d8b5 | |
Michael Shavit | de0e27c512 | |
Ozzieisaacs | 09e7d76c6f | |
Ozzieisaacs | d597e05fa9 | |
Ozzieisaacs | 98dc991339 | |
Ozzieisaacs | 1d40434d2b | |
Ozzieisaacs | 46b87dc7eb | |
ElQuimm | fe7c56d269 | |
OzzieIsaacs | 5ba4801a79 | |
Michael Shavit | ad564e25ca | |
Ozzieisaacs | 404be948d4 | |
Ozzieisaacs | 8cbc345f36 | |
Ozzieisaacs | 89927fd7e9 | |
Ozzieisaacs | 6b4a024234 | |
Ozzieisaacs | 18794831e0 | |
Ozzieisaacs | 3fb851304f | |
Ozzieisaacs | d267338837 | |
Michael Shavit | 8e1641dac9 | |
Michael Shavit | 57d37ffba8 | |
ZIzA | 82afa81220 | |
ZIzA | d730eb8d31 | |
ZIzA | 5a219b580f | |
Ozzieisaacs | fb83bfb363 | |
Ozzie Isaacs | df7d3d18b6 | |
Wanoo | a0535aa3db | |
Ozzieisaacs | 202b6121ab | |
Ozzieisaacs | 4e8b814ec2 | |
Ozzieisaacs | 6becca17bf | |
Ozzieisaacs | c8b64d4162 | |
Jeff | 0854303710 | |
Michael Shavit | cba3e62e71 | |
Ozzieisaacs | dcba720e97 | |
Rewerson | 6c614c06f6 | |
Ozzieisaacs | 9a812f11e7 | |
Ozzieisaacs | 917132fe26 | |
Ozzieisaacs | 187ca5dc8f | |
Ozzieisaacs | 7d795771d3 | |
ElQuimm | e51bc4ea78 | |
Ozzieisaacs | 2dc3235d4f | |
Ozzieisaacs | 040bb4a5a8 | |
Ozzieisaacs | fc4436f091 | |
Ozzie Isaacs | 5dbdef25d3 | |
Ozzie Isaacs | b9f3ac2eea | |
Ozzieisaacs | 9fc0c3b3de | |
Ozzieisaacs | 4f81184da0 | |
Ozzieisaacs | 8223561844 | |
Ozzieisaacs | 32a6beae65 | |
Ozzieisaacs | 146068c936 | |
Ozzieisaacs | 3b8c5ef21a | |
Ozzieisaacs | e60ef8fc97 | |
Ozzieisaacs | 24d755b123 | |
Ozzieisaacs | 7c89f0b5b9 | |
Ozzieisaacs | 134a10f56c | |
Ozzieisaacs | 3a70c86f49 | |
Ozzieisaacs | ac431bbc4a | |
OzzieIsaacs | 371097eb4d | |
OzzieIsaacs | 6346059698 | |
Ozzieisaacs | 372c284ad4 | |
Ozzie Isaacs | 264ccdbf6d | |
Ozzie Isaacs | b95150ff11 | |
Ozzie Isaacs | 81fc04f24d | |
Ozzie Isaacs | 3d4ebddb04 | |
Ozzie Isaacs | de299e0d6a | |
Johnny A. dos Santos | 29cb8bfec4 | |
Ozzieisaacs | b7f3e00fbf | |
Ozzieisaacs | 27a18d60a7 | |
Ozzieisaacs | de8f6d3e8d | |
Ozzieisaacs | 6893635251 | |
ElQuimm | 94a38a3b47 | |
Michael Shavit | 7d99e21d0d | |
Michael Shavit | df3eb40e3c | |
Ozzieisaacs | ba6b5f8fd1 | |
Josh O'Brien | 8f518993a4 | |
Niktia Pchelin | dac48a2610 | |
Niktia Pchelin | 7c0d10da79 | |
Niktia Pchelin | d77b52af96 | |
Ozzieisaacs | 29f6463ed9 | |
Ozzieisaacs | ed0bdbf31d | |
Ozzieisaacs | b152d3e06d | |
Ozzieisaacs | 16cd57fe55 | |
Ozzieisaacs | 3f578122a3 | |
Ozzieisaacs | 51a27322be | |
OzzieIsaacs | 050feed5dc | |
Ozzieisaacs | e3ddc16657 | |
André Frimberger | 33cdf20cd5 | |
Ozzieisaacs | 317e59df4b | |
Ozzieisaacs | a9a6f5b97e | |
Ozzieisaacs | 8b1444ebc2 | |
Kyos | 509071949a | |
ElQuimm | 697d857549 | |
Ozzieisaacs | 2ea45b1fdc | |
Ozzieisaacs | 726595e117 | |
Kyos | 1666e32aaf | |
Kyos | 6a69bbe4b5 | |
André Frimberger | 7a608b4fb0 | |
Ozzieisaacs | 814ad87a42 | |
Ozzieisaacs | 3e4b5e23fa | |
Ozzieisaacs | ab24ed8088 | |
Ozzieisaacs | 50ba2e329a | |
Ozzie Isaacs | c1e2a98f46 | |
Ozzie Isaacs | e04aa80fd6 | |
Ozzie Isaacs | 482e977af4 | |
Ozzie Isaacs | 2535bbbcf1 | |
Ozzieisaacs | 6698773d81 | |
xcffl | aefaf47f4c | |
Josh O'Brien | 9b49125776 | |
Ozzieisaacs | b33a2ac90d | |
Ozzieisaacs | f67953c447 | |
Ozzieisaacs | 981632f599 | |
Ozzieisaacs | a6c453d826 | |
Ozzieisaacs | 4087e685f4 | |
Ozzieisaacs | 5255085de1 | |
ElQuimm | 9247ded710 | |
Jerzy Piątek | 0bb0cbaef0 | |
Lukáš Heroudek | 0f7d272e13 | |
Ozzieisaacs | 00dafe3121 | |
Ozzieisaacs | e44494aad0 | |
Ozzieisaacs | 4ab3dc2599 | |
Ozzieisaacs | acfad7a982 | |
Ozzieisaacs | b29b5b7ac1 | |
ElQuimm | 7803ffb995 | |
Ghighi Eftimie | fc79cdfaa2 | |
Michael Shavit | f9dbc6bc78 | |
Michael Shavit | dc7aaae235 | |
Michael Shavit | 9804a98af8 | |
Ghighi Eftimie | 647e954e8a | |
Ozzieisaacs | 004d9118bc | |
Ozzieisaacs | 594c8aad91 | |
Ozzieisaacs | 542a0008c9 | |
Ozzieisaacs | 24f7918aa4 | |
Ozzieisaacs | 2eec329bdf | |
Ozzieisaacs | 0411d4a8c9 | |
Ozzieisaacs | a986faea56 | |
Ozzieisaacs | ad71d0a03f | |
Ozzieisaacs | 0955c6d6fb | |
Michael Shavit | d30b44ee0f | |
Michael Shavit | a6f4db0f25 | |
Michael Shavit | 4547c328bc | |
Michael Shavit | 5027aeb3a0 | |
Michael Shavit | c0239a659c | |
Michael Shavit | e404da4192 | |
Simon Latapie | 69fa7d0091 | |
ZIzA | e1d6aec682 | |
ZIzA | 155795a18e | |
ZIzA | 8c4052e884 | |
Ozzie Isaacs | 3c63e2b7e4 | |
Simon Latapie | 9b119fa724 | |
ElQuimm | a17c1c063e | |
Ozzieisaacs | 6728f5da2d | |
Lukáš Heroudek | 16adeae5c3 | |
Lukáš Heroudek | 485eba94cc | |
Lukáš Heroudek | 5a074348ac | |
Ozzieisaacs | cd9bb56db5 | |
Сергей | 8150f934fd | |
Сергей | 4c8f3f7bae | |
Сергей | ab5f176f58 | |
Ozzieisaacs | b294ac00ed | |
Ozzie Isaacs | 165cbad67b | |
Ozzieisaacs | b30da58eb9 | |
Ozzieisaacs | b0fb6b858d | |
Jony | 53ce22ef5e | |
Ozzieisaacs | 8e7a52f44e | |
OzzieIsaacs | 05a35be019 | |
Jony | 4406220f70 | |
Jony | 51a6cff411 | |
Ozzieisaacs | 8f4253adbd | |
Ozzieisaacs | 3e404101bd | |
Ozzieisaacs | 65105d9dbe | |
Ozzieisaacs | 3a4d351a57 | |
Ozzieisaacs | ce66c752c4 | |
Lukáš Heroudek | 4e42a179fa | |
Ozzieisaacs | 973f555544 | |
Ozzieisaacs | 1d7e52c198 | |
Ozzieisaacs | 1b42dd1043 | |
Lukáš Heroudek | 77e0022252 | |
Simon Latapie | 56964a890b | |
Simon Latapie | cef41661dd | |
Ozzieisaacs | 68ca0b86da | |
Ozzieisaacs | 79a9ef4859 | |
Ozzieisaacs | 2798dd5916 | |
Ozzieisaacs | 8143c16c14 | |
Ozzieisaacs | 42435ab34a | |
Lukáš Heroudek | 434fb2e7cb | |
Lukáš Heroudek | bce70bf17c | |
Lukáš Heroudek | cde44178c4 | |
Ozzieisaacs | 661ed17d23 | |
Ozzieisaacs | c659f28334 | |
Ozzieisaacs | 218e35e3aa | |
Ozzieisaacs | cabad83418 | |
Ozzieisaacs | 24ae7350f5 | |
Ozzieisaacs | c60277f4d3 | |
Ozzieisaacs | 6a07cfba65 | |
Ozzieisaacs | 87c415830f | |
Ozzieisaacs | c78c63e1d5 | |
Ozzieisaacs | 56ee8c56ba | |
Ozzieisaacs | 48495f0d66 | |
Ozzieisaacs | 8ad84a7ceb | |
Ozzieisaacs | 32e818af6a | |
Ozzieisaacs | d9adb4fc94 | |
Ozzieisaacs | 513ac6cfb4 | |
Ozzieisaacs | 1da4efec86 | |
Ozzieisaacs | 1c630eb604 | |
Ozzieisaacs | 1c18a788f4 | |
Ozzieisaacs | 5887f0fe6b | |
ground7 | b782489a8c | |
Ozzieisaacs | 01381488f4 | |
ground7 | 6555d5869f | |
ground7 | 54c4f40188 | |
Ozzieisaacs | 62e8bee2a8 | |
Michael Shavit | 9ec3ddd492 | |
Michael Shavit | d81dbb13e4 | |
Michael Shavit | c238367b64 | |
Michael Shavit | cdcb8a50d1 | |
Michael Shavit | 520c695401 | |
Michael Shavit | b831b9d6b2 | |
Ozzieisaacs | bbe91f439a | |
Ozzieisaacs | b586a32843 | |
Ozzieisaacs | 288944db2c | |
Ozzieisaacs | f2c07d8f81 | |
Michael Shavit | d6a9746824 | |
Michael Shavit | f84274f1c5 | |
Michael Shavit | 2118d920f5 | |
Michael Shavit | 207004beff | |
Michael Shavit | 27d084ce39 | |
Ozzieisaacs | f705889c23 | |
Ozzieisaacs | 7098d08888 | |
Ozzieisaacs | eabc6e23be | |
Ozzieisaacs | b6d7207ec3 | |
Ozzieisaacs | c33623efee | |
Ozzieisaacs | 2215bf3d7f | |
Ozzieisaacs | 86fe970651 | |
Andrew Roberts | 3dc372c573 | |
Andrew Roberts | efcee0a7b7 | |
Andrew Roberts | 39b6b100f9 | |
Andrew Roberts | 9351ff032f | |
Andrew Roberts | f0760c07d8 | |
Andrew Roberts | 77b0954c70 | |
Andrew Roberts | af7dbbf1e4 | |
Andrew Roberts | b661c2fa92 | |
Ozzieisaacs | e308a74dc2 | |
Michael Shavit | 040d7d9ae3 | |
Michael Shavit | f9b1e84704 | |
Ozzieisaacs | eede894e72 | |
Michael Shavit | 55c0bb6d34 | |
Michael Shavit | 2b55b9b250 | |
Ozzieisaacs | 22add37b64 | |
Ozzieisaacs | 8a9695d48e | |
Ozzieisaacs | e0faad1e59 | |
Michael Shavit | fffa2d5a1b | |
Michael Shavit | 0926ae530c | |
Michael Shavit | 0b709f7dfb | |
Michael Shavit | b5da2c4199 | |
Michael Shavit | 9ede01f130 | |
Christian Keil | c61463447f | |
Michael Shavit | 55b54de6a0 | |
Michael Shavit | 5357867103 | |
zhiyue | 222797e631 | |
dalin | 92841b46d7 | |
dalin | 6fe60d5c5e | |
dalin | 4c2323fcc9 | |
Ozzieisaacs | fda0ab1e86 | |
Ozzieisaacs | 54079b36ae | |
Ozzieisaacs | f8a99c60d8 | |
Ozzieisaacs | 2f27a7b0ce | |
Ozzieisaacs | 8af178c19c | |
Ozzieisaacs | 78f9ee86b1 | |
Ozzieisaacs | ab5873984e | |
Ozzieisaacs | 62ea8b8913 | |
Ozzieisaacs | a4416c202d | |
Jony | 1f5edffccf | |
Ozzieisaacs | 651260022c | |
Ozzieisaacs | 2e4344f7ea | |
Jony | 3cb7e77b60 | |
Ghighi Eftimie | f782dc1857 | |
Jony | 7179f133bb | |
Ozzieisaacs | 88f31ddad1 | |
Ozzieisaacs | a7ab7fcf06 | |
Ozzieisaacs | 6f61e80c97 | |
Ozzieisaacs | d1afdb4aac | |
Ozzieisaacs | 1112dc82c9 | |
dependabot[bot] | e47b0c6433 | |
Jan Guzej | 94ae9937f0 | |
Jan Guzej | fadd085b57 | |
Ozzieisaacs | 5167ee520e | |
Ozzieisaacs | f758a1cc64 | |
Ozzieisaacs | 2145be6db2 | |
Ozzieisaacs | c740fe9124 | |
Ozzieisaacs | a371e40c66 | |
Ozzieisaacs | ccc6184342 | |
Ozzieisaacs | c8c2d6659c | |
Ozzieisaacs | 1413f26c85 | |
Jan Guzej | c7d7a7597c | |
Jan Guzej | fbb7663a2f | |
Kyos | c93dd32179 | |
Kyos | 7165826011 | |
Kyos | ada727a570 | |
gwenhael | 01b0f9534c | |
DenysNahurnyi | 0735283d45 | |
zelazna | 3764c33a3a | |
Ozzieisaacs | 9fc02f67c2 | |
Ozzieisaacs | 0c40e40dc3 | |
Ozzieisaacs | e31df16309 | |
Ozzieisaacs | d7ea5bb9d7 | |
Ozzieisaacs | 6cda5fee0d | |
Ozzieisaacs | 1fb45d769f | |
Ozzieisaacs | ca5e285c5a | |
Ozzieisaacs | fb0eebfc52 | |
Ozzieisaacs | dd90fb003e | |
Ozzieisaacs | 61cd044255 | |
Ozzieisaacs | 879d02081a | |
Ozzieisaacs | 051bc53aa2 | |
Ozzie Isaacs | a7fdbad8b4 | |
Angel Docampo | 5515772903 | |
Angel Docampo | ff900fd9c0 | |
Yamakuni | eec4be7a29 | |
Alex Viscreanu | 6a821b8a75 | |
Yamakuni | 1385ecb383 | |
Yamakuni | 74418f3139 | |
Yamakuni | 72def4b97b | |
Yamakuni | 564c3b4778 | |
Yamakuni | c9eff4a70c | |
Yamakuni | 879f63d1c1 | |
Yamakuni | 3fb458dd19 | |
Yamakuni | d9a73b4fa3 | |
Ozzieisaacs | 23b3bfd967 | |
Ozzieisaacs | f543d7f486 | |
Radosław Kierznowski | 6a058d2c52 | |
Radosław Kierznowski | c73698e8fd | |
Vincent Kriek | 38a255e069 | |
Ozzieisaacs | ff41775dbb | |
Ozzieisaacs | 7e530618b7 | |
Ozzieisaacs | d04a78afe6 | |
Ozzieisaacs | f566237be0 | |
Ozzieisaacs | 87fa4a57b5 | |
Dmitriy Istomin | a65ad9483c | |
W1ndst0rm | 4cbdccd39e | |
Ozzieisaacs | 9356148e2d | |
Ozzieisaacs | 4be55285d8 | |
Ozzieisaacs | 2caee35884 | |
Ozzieisaacs | 3eae2e9c2c | |
Ozzieisaacs | e9fb5d9f25 | |
Ozzieisaacs | 6261981656 | |
Ozzieisaacs | 82ca3f31f9 | |
Ozzieisaacs | 97f3aa8325 | |
Daniel Pavel | 7c503b4a31 | |
Daniel Pavel | 9f8cab99e3 | |
Ozzieisaacs | 5f25b81004 | |
Ozzieisaacs | 73bbffccaa | |
Radosław Kierznowski | 43ad7d6e29 | |
Radosław Kierznowski | 2c27d631b4 | |
Ozzieisaacs | eb31b4b00b | |
Ozzieisaacs | f59d9d5aa8 | |
Radosław Kierznowski | 746b7b1262 | |
Radosław Kierznowski | 2d67d55b73 | |
Ozzieisaacs | 5f228fbb40 | |
Ozzieisaacs | 12576393cf | |
Mainak | 7f43a2e104 | |
Ozzieisaacs | 00f17bb697 | |
Ozzieisaacs | cf00b4eebf | |
Ozzieisaacs | be5c67fddd | |
Ozzieisaacs | fc4dc36c65 | |
Ozzieisaacs | 97a0dccdec | |
Ozzieisaacs | 9f64a96502 | |
Ozzieisaacs | b9c3a3fcea | |
Ozzieisaacs | 6d43e0422a | |
Ozzieisaacs | 0d7e58ce79 | |
Ozzieisaacs | 3e008ef29b | |
Ozzieisaacs | 5c6be5d6d0 | |
Ozzieisaacs | 2bd4eff56f | |
Ozzieisaacs | 38f3c2d5b9 | |
Ozzieisaacs | c6542fdec6 | |
Ozzieisaacs | 26a7d9ef30 | |
Daniel Pavel | 847dbfc021 | |
Ozzieisaacs | 929f32335f | |
Ozzieisaacs | 61a8eccf18 | |
Ozzieisaacs | d01a0c6617 | |
Ozzieisaacs | d168e3bfdb | |
Ozzie Isaacs | aba88ae53a | |
Ozzie Isaacs | 67d0ddb180 | |
Ozzie Isaacs | b1bb1cfdfa | |
Daniel Pavel | 99c6247baf | |
Daniel Pavel | a334ef28e7 | |
Daniel Pavel | 63634961d4 | |
Ozzieisaacs | d82289e303 | |
Daniel Pavel | a836df9a5a | |
Ozzieisaacs | 8bfcdffeb6 | |
Ozzieisaacs | e411c0fded | |
Ozzieisaacs | 4708347c16 | |
Ozzieisaacs | 37736e11d5 | |
Ozzieisaacs | 792367e35e | |
Ozzieisaacs | be64961de5 | |
Ozzieisaacs | 405a3909b0 | |
Ozzieisaacs | b1cb7123a3 | |
Ozzieisaacs | e734bb120a | |
Daniel Pavel | 006e596c72 | |
Ozzieisaacs | 499a66dfb0 | |
Ozzieisaacs | f79d549910 | |
Krakinou | 11c8b47c39 | |
Krakinou | 00a29f3d88 | |
Krakinou | e5b9da5201 | |
Ozzieisaacs | ad44e58c7a | |
Ozzieisaacs | 572b5427c7 | |
Ozzieisaacs | 32af660f86 | |
Ozzieisaacs | 66283c542f | |
Ozzieisaacs | cc8a431532 | |
Ozzieisaacs | 5c7aeb2f2c | |
Ozzie Isaacs | c1d5f77fe8 | |
Daniel Pavel | e254565901 | |
Krakinou | 3d0beba261 | |
Krakinou | 147947662c | |
Heimen Stoffels | d9f69ca264 | |
Ozzieisaacs | cd546eb6d4 | |
Ozzieisaacs | f40fc5aa75 | |
Ozzieisaacs | 9b74d51f21 | |
Ozzieisaacs | d45b1b8ea5 | |
Ozzieisaacs | e67d707867 | |
Krakinou | 4437d7376d | |
Krakinou | 304db0d20e | |
Ozzieisaacs | 3f5c6c1fa5 | |
Ozzieisaacs | 26949970d8 | |
Ozzieisaacs | f5e3ed26b9 | |
Ozzieisaacs | 8e4539cf8e | |
Ozzieisaacs | c81d4edb7d | |
Ozzieisaacs | 546ed65e1d | |
Ozzieisaacs | 14b6202eec | |
Ozzieisaacs | 50973ffb72 | |
Krakinou | 9a5ab97d78 | |
Krakinou | 6190e64956 | |
Krakinou | 79286c9384 | |
yjouanique | c4e3f3670f | |
Daniel Pavel | b89ab9ff10 | |
Krakinou | 97d12b94f6 | |
Krakinou | e4d801bbaf | |
Ozzieisaacs | f736a15c12 | |
Ozzieisaacs | a02f949d23 | |
Ozzieisaacs | b4e0d039ef | |
Ozzieisaacs | bb0d5c5538 | |
Ozzieisaacs | f5b335e8e9 | |
Ozzieisaacs | 67bd5d41aa | |
Ozzieisaacs | 1879dcb24a | |
Ozzieisaacs | 87ca05f129 | |
Ozzieisaacs | 6662a58cb0 | |
Luke Murphy | b3a286c0b5 | |
Ozzieisaacs | 4fecce0a0d | |
Ozzieisaacs | 26e45f1f57 | |
Ozzieisaacs | df5d15d1a2 | |
Ozzieisaacs | ecedf92783 | |
Ozzieisaacs | d106ada9ed | |
Ozzieisaacs | ed91048a63 | |
Ozzieisaacs | f70c839014 | |
Ozzieisaacs | e6ff2f1d90 | |
subdiox | 867aa2f0bd | |
subdiox | 7982ed877c | |
Ozzieisaacs | 0c80f5c63a | |
Ozzieisaacs | 1030e195a5 | |
subdiox | c0d136ccd8 | |
Ozzieisaacs | 479b4b7d82 | |
Ozzieisaacs | a42ebdc096 | |
Ozzieisaacs | 49ba221e85 | |
Ozzieisaacs | 3b03aa30a6 | |
Ozzieisaacs | cb0403a924 | |
Marvin Marx | a2c7741e21 | |
Ozzieisaacs | 55bb8d4590 | |
Ozzieisaacs | b80bfa5260 | |
Ozzieisaacs | f941908f73 | |
Ozzieisaacs | 9b3b2acb49 | |
Ozzieisaacs | 406d1c76c9 | |
subdiox | c2bfb29726 | |
subdiox | 204de4aef6 | |
subdiox | 8b6d165d64 | |
Ozzieisaacs | bfd0e87a17 | |
Ozzieisaacs | 6a7b8c7870 | |
Ozzieisaacs | 2de4bfdcf2 | |
Ozzieisaacs | f1a65a2aeb | |
Ozzieisaacs | 05da2ae3c7 | |
Ozzieisaacs | 4ae9d4a749 | |
Ozzieisaacs | 2253708da7 | |
Ozzieisaacs | 51e591bd25 | |
Ozzieisaacs | 2a5f2ff7b3 | |
Iris W | 029d299067 | |
Iris W | b7e30644ab | |
Iris Wildthyme | cbdc9876b2 | |
Iris Wildthyme | 05d0f12608 | |
Ozzieisaacs | 1c9ff6421d | |
Ozzieisaacs | dc93222579 | |
Ethan Lin | 8143bc7873 | |
Ozzieisaacs | 772f978b45 | |
Ozzieisaacs | 0f1db18eae | |
Ozzieisaacs | 6940bb9b88 | |
Ozzieisaacs | 07649d04a3 | |
Ozzieisaacs | 39a3f70084 | |
tomjmul | 8e8486497f | |
Ozzieisaacs | e5593d9a7f | |
Ozzieisaacs | 3f2a9c8bae | |
Ozzieisaacs | 7c69589c5b | |
Ozzieisaacs | 149e9b4bd4 | |
Ozzieisaacs | a360b1759a | |
Ozzieisaacs | 765b817384 | |
Ozzieisaacs | baf83b2f5a | |
Ozzieisaacs | 9c1b3f136f | |
Ozzieisaacs | a66873a9e2 | |
Ozzieisaacs | 4a33278596 | |
Ozzieisaacs | 6d2270d931 | |
Ozzieisaacs | 1db1c2e7df | |
Ozzieisaacs | da3fcb9a72 | |
Ozzieisaacs | 9144a7ceb9 | |
Ozzieisaacs | feb6a71f95 | |
Ozzieisaacs | 1561a4abdf | |
Ozzieisaacs | f483ca3214 | |
Ozzieisaacs | 0224d45961 | |
Ozzieisaacs | 0be17ed157 | |
Ozzieisaacs | f0de822ce7 | |
Ozzieisaacs | fb23db57b4 | |
Ozzieisaacs | fda977b155 | |
Ozzieisaacs | 68a36597ab | |
Ozzieisaacs | de58d0a4d8 | |
jianyun.zhao | 54a006a420 | |
Ozzieisaacs | 246d7673a9 | |
Ozzieisaacs | eef4787b79 | |
Ozzieisaacs | 1de3929988 | |
Ozzieisaacs | cc3088c52f | |
Ozzieisaacs | 0facb8fffa | |
Ozzieisaacs | 1a7052b287 | |
AngelByDay | 38307ececb | |
Ozzieisaacs | e92497b34e | |
Ozzieisaacs | 3d5d95904a | |
Ozzieisaacs | a0be02e687 | |
Ozzieisaacs | 37007dafee | |
Ozzieisaacs | 4230226716 | |
Ozzieisaacs | 1dc6f44828 | |
Ozzieisaacs | c1ef1bcd19 | |
Ozzieisaacs | 1fc4bc5204 | |
Ozzieisaacs | f5235b1d4c | |
Ozzieisaacs | d6ee8f75e9 | |
Ozzieisaacs | a00d93a2d9 | |
Ozzieisaacs | 561d40f8ff | |
Ozzieisaacs | 36229076f7 | |
Ozzieisaacs | 7f34073955 | |
Ozzieisaacs | b75b91606c | |
Ozzieisaacs | 80582573f5 | |
Ozzieisaacs | 3683e4e7eb | |
Ozzieisaacs | 8ababf9f77 | |
Ozzieisaacs | 5971194678 | |
Krakinou | d48cdcc789 | |
Krakinou | d763168dec | |
Krakinou | 7ccc40cf5b | |
Krakinou | aafb267787 | |
Krakinou | 82e4f11334 | |
Krakinou | 2e37c14d94 | |
Krakinou | 91f0908059 | |
Krakinou | 8d284b151d | |
Krakinou | 30954cc27f | |
Jim Ma | 4b76b8400d | |
Jim Ma | 1abbcfa3c6 | |
otapi | 9b4ca22254 | |
otapi | c6d3613e57 | |
otapi | e0229c917c |
.gitattributes:

@@ -1,4 +1,5 @@
-updater.py ident export-subst
+constants.py ident export-subst
 /test export-ignore
+/library export-ignore
 cps/static/css/libs/* linguist-vendored
 cps/static/js/libs/* linguist-vendored
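The `export-subst` attribute in the hunk above is what lets `git archive` stamp the current commit hash into the listed file. A minimal sketch of the mechanism, using a hypothetical throwaway repo and placeholder (not Calibre-Web's actual code), assuming `git` and `tar` are available:

```shell
# export-subst: git archive expands $Format:...$ placeholders (git log
# pretty-format codes) in files marked with this attribute.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
# Mark constants.py for placeholder substitution on archive export.
echo 'constants.py ident export-subst' > .gitattributes
# %%H escapes printf so the file literally contains $Format:%H$.
printf 'HASH = "$Format:%%H$"\n' > constants.py
git add -A
git commit -qm 'init'
# Extract the archived file: the placeholder is replaced by the commit hash.
git archive HEAD | tar -xO constants.py
```

In the working tree the file still contains the literal `$Format:%H$`; only the archived copy gets the hash, which is why projects use this to embed version info in release tarballs.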
.github/FUNDING.yml (new file):

@@ -0,0 +1 @@
custom: ["https://PayPal.Me/calibreweb",]
.github/ISSUE_TEMPLATE/bug_report.md (new file):

@@ -0,0 +1,55 @@
---
name: Bug/Problem report
about: Create a report to help us improve Calibre-Web
title: ''
labels: ''
assignees: ''

---

## Short Notice from the maintainer

After 6 years of more or less intensive programming on Calibre-Web, I need a break.
The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.

Please also have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)

**Describe the bug/problem**

A clear and concise description of what the bug is. If you are asking for support, please check our [Wiki](https://github.com/janeczku/calibre-web/wiki) to see if your question is already answered there.

**To Reproduce**

Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Logfile**

Add the content of the calibre-web.log file or the relevant error; try to reproduce your problem with the "debug" log level to get more output.

**Expected behavior**

A clear and concise description of what you expected to happen.

**Screenshots**

If applicable, add screenshots to help explain your problem.

**Environment (please complete the following information):**

- OS: [e.g. Windows 10/Raspberry Pi OS]
- Python version: [e.g. python2.7]
- Calibre-Web version: [e.g. 0.6.8 or 087c4c59 (git rev-parse --short HEAD)]:
- Docker container: [None/LinuxServer]:
- Special Hardware: [e.g. Raspberry Pi Zero]
- Browser: [e.g. Chrome 83.0.4103.97, Safari 13.3.7, Firefox 68.0.1 ESR]

**Additional context**
Add any other context about the problem here. [e.g. access via reverse proxy, database background sync, special database location]
.github/ISSUE_TEMPLATE/config.yml (new file):

@@ -0,0 +1 @@
blank_issues_enabled: false
.github/ISSUE_TEMPLATE/feature_request.md (new file):

@@ -0,0 +1,29 @@
---
name: Feature request
about: Suggest an idea for Calibre-Web
title: ''
labels: ''
assignees: ''

---

# Short Notice from the maintainer

After 6 years of more or less intensive programming on Calibre-Web, I need a break.
The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.

Please have a look at our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
.gitignore:

@@ -6,26 +6,32 @@ __pycache__/

 # Distribution / packaging
 .Python
+.python-version
 env/
+venv/
 eggs/
+dist/
+executable/
+build/
+vendor/
 .eggs/
 *.egg-info/
 .installed.cfg
 *.egg
+.pylint.d

 # calibre-web
 *.db
 *.log
-config.ini
-cps/static/[0-9]*
+cps/cache

 .idea/
 *.bak
 *.log.*
-tags
+.key

 settings.yaml
 gdrive_credentials
-
-vendor
 client_secrets.json
+gmail.json
+/.key
CONTRIBUTING.md (new file):

@@ -0,0 +1,46 @@
## How to contribute to Calibre-Web

First of all, we would like to thank you for reading this text. We are happy you are willing to contribute to Calibre-Web.

### **General**

**Communication language** is English. Google-translated texts are not as bad as you might think; they are usually understandable, so don't worry if you generate your post that way.

**Calibre-Web** is not **Calibre**. If you have a question regarding Calibre, please post it at their [repository](https://github.com/kovidgoyal/calibre).

**Docker containers** of Calibre-Web are maintained by different people than those who drive Calibre-Web. If we conclude during our analysis that the problem is related to Docker, we might ask you to open a new issue at the repository of the Docker container.

If you are having **basic installation problems** with Python or its dependencies, please consider using your favorite search engine to find a solution. In case you can't find a solution, we are happy to help you.

We can offer only very limited support regarding the configuration of **reverse-proxy installations**, **OPDS readers**, or other programs in combination with Calibre-Web.

### **Translation**

Some of the user languages in Calibre-Web have missing translations. We are happy to add the missing texts if you translate them. Create a pull request, create an issue with the .po file attached, or write an email to "ozzie.fernandez.isaacs@googlemail.com" with the translation file attached. To display all book languages in your native language, an additional file is used (iso_language_names.py). The content of this file is auto-generated with the corresponding translations of Calibre; please do not edit this file on your own.

### **Documentation**

The Calibre-Web documentation is hosted in the GitHub [Wiki](https://github.com/janeczku/calibre-web/wiki). The Wiki is open to everybody; if you find a problem, feel free to correct it. If information is missing, you are welcome to add it. The content will be reviewed from time to time. Please try to be consistent in form with the other Wiki pages (e.g. the project name is Calibre-Web, with two capital letters and a dash in between).

### **Reporting a bug**

Do not open a GitHub issue if the bug is a **security vulnerability** in Calibre-Web. Instead, please write an email to "ozzie.fernandez.isaacs@googlemail.com".

Ensure the **bug was not already reported** by searching on GitHub under [Issues](https://github.com/janeczku/calibre-web/issues). Please also check if a solution for your problem can be found in the [wiki](https://github.com/janeczku/calibre-web/wiki).

If you're unable to find an **open issue** addressing the problem, open a [new one](https://github.com/janeczku/calibre-web/issues/new/choose). Be sure to include a **title** and a **clear description** with as much relevant information as possible; the **issue form** helps you provide the right information. Deleting the form and just pasting the stack trace doesn't speed up fixing the problem. If your issue could be resolved, consider closing it.

### **Feature Request**

If a feature is missing in Calibre-Web and you can't find a corresponding request in the [Issues](https://github.com/janeczku/calibre-web/issues) section, you can create a [feature request](https://github.com/janeczku/calibre-web/issues/new?assignees=&labels=&template=feature_request.md&title=).

We will not extend Calibre-Web with any more login abilities, further file storages, or file-syncing ability. Furthermore, Calibre-Web is made for home or company in-house usage, so requests regarding any sort of social interaction capability, payment routines, or search engine or website analytics integration will not be implemented.

### **Contributing code to Calibre-Web**

Open a new GitHub pull request with the patch. Ensure the PR description clearly describes the problem and solution. Include the relevant issue number if applicable.

In case your code enhances features of Calibre-Web: create your pull request against the development branch if your enhancement consists of more than some lines of code in a local section of Calibre-Web's code. This makes it easier to test it and check all implications before it's made public.

Please check that your code runs with Python 3; Python 2 is no longer supported. If possible, and if the feature is related to operating-system functions, try to check it on Windows and Linux.
Calibre-Web is automatically tested on Linux in combination with Python 3.8. The code for testing is in a [separate repo](https://github.com/OzzieIsaacs/calibre-web-test) on GitHub. It uses unit tests and performs real system tests with Selenium; it would be great if you could consider also writing some tests.
A static code analysis is done by Codacy, but it's partly broken and doesn't run automatically. You could check your code with ESLint before contributing; a configuration file can be found in the project's root folder.
MANIFEST.in (new file):

@@ -0,0 +1 @@
graft src/calibreweb
@ -0,0 +1,125 @@
|
||||||
|
# Short Notice from the maintainer
|
||||||
|
|
||||||
|
After 6 years of more or less intensive programming on Calibre-Web, I need a break.
|
||||||
|
The last few months, maintaining Calibre-Web has felt more like work than a hobby. I felt pressured and teased by people to solve "their" problems and merge PRs for "their" Calibre-Web.
|
||||||
|
I have turned off all notifications from Github/Discord and will now concentrate undisturbed on the development of “my” Calibre-Web over the next few weeks/months.
|
||||||
|
I will look into the issues and maybe also the PRs from time to time, but don't expect a quick response from me.
|
||||||
|
|
||||||
|
# Calibre-Web
|
||||||
|
|
||||||
|
Calibre-Web is a web app that offers a clean and intuitive interface for browsing, reading, and downloading eBooks using a valid [Calibre](https://calibre-ebook.com) database.

[![License](https://img.shields.io/github/license/janeczku/calibre-web?style=flat-square)](https://github.com/janeczku/calibre-web/blob/master/LICENSE)
![Commit Activity](https://img.shields.io/github/commit-activity/w/janeczku/calibre-web?logo=github&style=flat-square&label=commits)
[![All Releases](https://img.shields.io/github/downloads/janeczku/calibre-web/total?logo=github&style=flat-square)](https://github.com/janeczku/calibre-web/releases)
[![PyPI](https://img.shields.io/pypi/v/calibreweb?logo=pypi&logoColor=fff&style=flat-square)](https://pypi.org/project/calibreweb/)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/calibreweb?logo=pypi&logoColor=fff&style=flat-square)](https://pypi.org/project/calibreweb/)
[![Discord](https://img.shields.io/discord/838810113564344381?label=Discord&logo=discord&style=flat-square)](https://discord.gg/h2VsJ2NEfB)

<details>
<summary><strong>Table of Contents</strong> (click to expand)</summary>

1. [About](#calibre-web)
2. [Features](#features)
3. [Installation](#installation)
   - [Installation via pip (recommended)](#installation-via-pip-recommended)
   - [Quick start](#quick-start)
   - [Requirements](#requirements)
4. [Docker Images](#docker-images)
5. [Contributor Recognition](#contributor-recognition)
6. [Contact](#contact)
7. [Contributing to Calibre-Web](#contributing-to-calibre-web)

</details>

*This software is a fork of [library](https://github.com/mutschler/calibreserver) and licensed under the GPL v3 License.*

![Main screen](https://github.com/janeczku/calibre-web/wiki/images/main_screen.png)

## Features

- Modern and responsive Bootstrap 3 HTML5 interface
- Full graphical setup
- Comprehensive user management with fine-grained per-user permissions
- Admin interface
- Multilingual user interface supporting 20+ languages ([supported languages](https://github.com/janeczku/calibre-web/wiki/Translation-Status))
- OPDS feed for eBook reader apps
- Advanced search and filtering options
- Custom book collection (shelves) creation
- eBook metadata editing and deletion support
- Metadata download from various sources (extensible via plugins)
- eBook conversion through Calibre binaries
- eBook download restriction to logged-in users
- Public user registration support
- Send eBooks to E-Readers with a single click
- Sync Kobo devices with your Calibre library
- In-browser eBook reading support for multiple formats
- Upload new books in various formats, including audio formats
- Calibre Custom Columns support
- Content hiding based on categories and Custom Column content per user
- Self-update capability
- "Magic Link" login for easy access on eReaders
- LDAP, Google/GitHub OAuth, and proxy authentication support

## Installation

#### Installation via pip (recommended)

1. Create a virtual environment for Calibre-Web to avoid conflicts with existing Python dependencies
2. Install Calibre-Web via pip: `pip install calibreweb` (or `pip3`, depending on your OS/distro)
3. Install optional features via pip as needed; see [this page](https://github.com/janeczku/calibre-web/wiki/Dependencies-in-Calibre-Web-Linux-and-Windows) for details
4. Start Calibre-Web by typing `cps`

*Note: Raspberry Pi OS users may encounter issues during installation. If so, please update pip (`./venv/bin/python3 -m pip install --upgrade pip`) and/or install cargo (`sudo apt install cargo`) before retrying the installation.*

Refer to the Wiki for additional installation examples: [manual installation](https://github.com/janeczku/calibre-web/wiki/Manual-installation), [Linux Mint](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-in-Linux-Mint-19-or-20), [Cloud Provider](https://github.com/janeczku/calibre-web/wiki/How-To:-Install-Calibre-Web-on-a-Cloud-Provider).

## Quick Start

1. Open your browser and navigate to `http://localhost:8083` or `http://localhost:8083/opds` for the OPDS catalog
2. Log in with the default admin credentials
3. If you don't have a Calibre database, you can use [this database](https://github.com/janeczku/calibre-web/raw/master/library/metadata.db) (move it out of the Calibre-Web folder to prevent overwriting during updates)
4. Set `Location of Calibre database` to the path of the folder containing your Calibre library (metadata.db) and click "Save"
5. Optionally, use Google Drive to host your Calibre library by following the [Google Drive integration guide](https://github.com/janeczku/calibre-web/wiki/G-Drive-Setup#using-google-drive-integration)
6. Configure your Calibre-Web instance via the admin page, referring to the [Basic Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#basic-configuration) and [UI Configuration](https://github.com/janeczku/calibre-web/wiki/Configuration#ui-configuration) guides

#### Default Admin Login:

- **Username:** admin
- **Password:** admin123

## Requirements

- Python 3.5+
- [Imagemagick](https://imagemagick.org/script/download.php) for cover extraction from EPUBs (Windows users may need to install [Ghostscript](https://ghostscript.com/releases/gsdnld.html) for PDF cover extraction)
- Optional: [Calibre desktop program](https://calibre-ebook.com/download) for on-the-fly conversion and metadata editing (set "calibre's converter tool" path on the setup page)
- Optional: [Kepubify tool](https://github.com/pgaskin/kepubify/releases/latest) for Kobo device support (place the binary in `/opt/kepubify` on Linux or `C:\Program Files\kepubify` on Windows)

## Docker Images

Pre-built Docker images are available in the following Docker Hub repositories (maintained by the LinuxServer team):

#### **LinuxServer - x64, aarch64**

- [Docker Hub](https://hub.docker.com/r/linuxserver/calibre-web)
- [GitHub](https://github.com/linuxserver/docker-calibre-web)
- [GitHub - Optional Calibre layer](https://github.com/linuxserver/docker-mods/tree/universal-calibre)

Include the environment variable `DOCKER_MODS=linuxserver/mods:universal-calibre` in your Docker run/compose file to add the Calibre `ebook-convert` binary (x64 only). Omit this variable for a lightweight image.

Both the Calibre-Web and Calibre-Mod images are automatically rebuilt on new releases and updates.

- Set "path to convertertool" to `/usr/bin/ebook-convert`
- Set "path to unrar" to `/usr/bin/unrar`

## Contributor Recognition

We would like to thank all the [contributors](https://github.com/janeczku/calibre-web/graphs/contributors) and maintainers of Calibre-Web for their valuable input and dedication to the project. Your contributions are greatly appreciated.

## Contact

Join us on [Discord](https://discord.gg/h2VsJ2NEfB)

For more information, How To's, and FAQs, please visit the [Wiki](https://github.com/janeczku/calibre-web/wiki)

## Contributing to Calibre-Web

Check out our [Contributing Guidelines](https://github.com/janeczku/calibre-web/blob/master/CONTRIBUTING.md)

@@ -0,0 +1,52 @@
# Security Policy

## Reporting a Vulnerability

Please report security issues to ozzie.fernandez.isaacs@googlemail.com

## Supported Versions

To receive fixes for security vulnerabilities, you must always upgrade to the latest version of Calibre-Web. See https://github.com/janeczku/calibre-web/releases/latest for the latest release.

## History

| Fixed in | Description | CVE number |
|---------------|--------------------------------------------------------------------------------------------------------------------|---------|
| 3rd July 2018 | Guest access acts as a backdoor | |
| V 0.6.7 | Hardcoded secret key for sessions | CVE-2020-12627 |
| V 0.6.13 | Calibre-Web metadata cross-site scripting | CVE-2021-25964 |
| V 0.6.13 | Names of shelves are only visible to users who can access the corresponding shelf. Thanks to @ibarrionuevo | |
| V 0.6.13 | JavaScript could get executed in the description field. Thanks to @ranjit-git and Hagai Wechsler (WhiteSource) | |
| V 0.6.13 | JavaScript could get executed in a custom column of type "comment" | |
| V 0.6.13 | JavaScript could get executed after converting a book to another format with a title containing JavaScript code | |
| V 0.6.13 | JavaScript could get executed after converting a book to another format with a username containing JavaScript code | |
| V 0.6.13 | JavaScript could get executed in the description of series, categories, or publishers | |
| V 0.6.13 | JavaScript could get executed in the shelf title | |
| V 0.6.13 | Login with the old session cookie after logout is no longer possible. Thanks to @ibarrionuevo | |
| V 0.6.14 | CSRF was possible. Thanks to @mik317 and Hagai Wechsler (WhiteSource) | CVE-2021-25965 |
| V 0.6.14 | Migrated some routes to POST requests (CSRF protection). Thanks to @scara31 | CVE-2021-4164 |
| V 0.6.15 | Fix for "javascript:" script links in identifiers. Thanks to @scara31 | CVE-2021-4170 |
| V 0.6.15 | Cross-site scripting vulnerability on uploaded cover file names. Thanks to @ibarrionuevo | |
| V 0.6.15 | Creating public shelves is now denied if the user is missing the edit public shelf right. Thanks to @ibarrionuevo | |
| V 0.6.15 | Changed error message when trying to delete a shelf without authorization. Thanks to @ibarrionuevo | |
| V 0.6.16 | JavaScript could get executed on the authors page. Thanks to @alicaz | CVE-2022-0352 |
| V 0.6.16 | Localhost can no longer be used to upload covers. Thanks to @scara31 | CVE-2022-0339 |
| V 0.6.16 | Another case where public shelves could be created without permission is prevented. Thanks to @nhiephon | CVE-2022-0273 |
| V 0.6.16 | Retrieving the name of a private shelf is prevented. Thanks to @nhiephon | CVE-2022-0405 |
| V 0.6.17 | The SSRF protection can no longer be bypassed via an HTTP redirect. Thanks to @416e6e61 | CVE-2022-0767 |
| V 0.6.17 | The SSRF protection can no longer be bypassed via 0.0.0.0 and its IPv6 equivalent. Thanks to @r0hanSH | CVE-2022-0766 |
| V 0.6.18 | Possible SQL injection in the user table is prevented. Thanks to Iman Sharafaldin (Forward Security) | CVE-2022-30765 |
| V 0.6.18 | The SSRF protection can no longer be bypassed by IPv6/IPv4 embedding. Thanks to @416e6e61 | CVE-2022-0939 |
| V 0.6.18 | The SSRF protection can no longer be bypassed to connect to other servers in the local network. Thanks to @michaellrowley | CVE-2022-0990 |
| V 0.6.20 | Credentials for emails are now stored encrypted | |
| V 0.6.20 | Login is rate limited | |
| V 0.6.20 | Password strength can be enforced | |
| V 0.6.21 | SMTP server credentials are no longer returned to the client | |
| V 0.6.21 | Stored cross-site scripting (XSS) in href bypassing the filter using a data wrapper is no longer possible | |
| V 0.6.21 | Cross-site scripting (XSS) is no longer possible via the pathchooser | |
| V 0.6.21 | Error handling for non-existent ratings, languages, and user-downloaded books was fixed | |
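Several fixes above harden the SSRF protection (HTTP redirects, `0.0.0.0` and its IPv6 equivalent `::`, IPv6/IPv4 embedding). As a rough illustration of the kind of check involved, here is a minimal standard-library sketch; the function name is hypothetical and this is not Calibre-Web's actual implementation:

```python
import ipaddress
from urllib.parse import urlparse

def is_safe_target(url):
    """Reject URLs whose host is a literal private, loopback,
    or unspecified address (a common SSRF guard)."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Not a literal IP; a real guard would also resolve the
        # name and re-check the DNS results here.
        return True
    return not (addr.is_private or addr.is_loopback
                or addr.is_unspecified or addr.is_reserved)

# '0.0.0.0' and its IPv6 equivalent '::' are unspecified addresses
print(is_safe_target('http://0.0.0.0/cover.jpg'))   # False
print(is_safe_target('http://[::]/cover.jpg'))      # False
print(is_safe_target('http://93.184.216.34/'))      # True
```

A complete guard would also have to pin the resolved address when the request is actually made, since following redirects or re-resolving DNS can otherwise reintroduce the bypasses listed above.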
## Statement regarding Log4j (CVE-2021-44228 and related)

Calibre-Web is not affected by bugs related to Log4j. Calibre-Web is a Python program; it does not use Java, and therefore does not use the Java logging library Log4j.

@@ -1,3 +1,4 @@
[python: **.py]

# has to be executed with jinja2 >=2.9 to have autoescape enabled automatically
[jinja2: **/templates/**.*ml]
extensions=jinja2.ext.autoescape,jinja2.ext.with_

cps.py

@@ -1,21 +1,54 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2022 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import os
import sys

# Add local path to sys.path, so we can import cps
path = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, path)

from cps.main import main


def hide_console_windows():
    import ctypes
    import os

    hwnd = ctypes.windll.kernel32.GetConsoleWindow()
    if hwnd != 0:
        try:
            import win32process
        except ImportError:
            print("To hide console window install 'pywin32' using 'pip install pywin32'")
            return
        ctypes.windll.user32.ShowWindow(hwnd, 0)
        ctypes.windll.kernel32.CloseHandle(hwnd)
        _, pid = win32process.GetWindowThreadProcessId(hwnd)
        os.system('taskkill /PID ' + str(pid) + ' /f')


if __name__ == '__main__':
    if os.name == "nt":
        hide_console_windows()
    main()

@@ -0,0 +1,52 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 OzzieIsaacs, cervinko, jkrehm, bodybybuddha, ok11,
#                         andy29485, idalin, Kyosfonica, wuqi, Kennyl, lemmsh,
#                         falgh1, grunjol, csitko, ytils, xybydy, trasba, vrabe,
#                         ruben-herold, marblepebble, JackED42, SiphonSquirrel,
#                         apetresc, nanu-c, mutschler, GammaC0de, vuolter
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.


from flask_login import LoginManager, confirm_login
from flask import session, current_app
from flask_login.utils import decode_cookie
from flask_login.signals import user_loaded_from_cookie


class MyLoginManager(LoginManager):
    def _session_protection_failed(self):
        sess = session._get_current_object()
        ident = self._session_identifier_generator()
        if (sess and not (len(sess) == 1
                          and sess.get('csrf_token', None))) and ident != sess.get('_id', None):
            return super()._session_protection_failed()
        return False

    def _load_user_from_remember_cookie(self, cookie):
        user_id = decode_cookie(cookie)
        if user_id is not None:
            session["_user_id"] = user_id
            session["_fresh"] = False
            user = None
            if self._user_callback:
                user = self._user_callback(user_id)
            if user is not None:
                app = current_app._get_current_object()
                user_loaded_from_cookie.send(app, user=user)
                # if session was restored from remember me cookie make login valid
                confirm_login()
                return user
        return None
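The remember-cookie loader above only trusts a user id that `decode_cookie` has verified against the app's secret. As a rough sketch of that idea, here is a minimal standalone HMAC-signed cookie using only the standard library; the helper names and the `SECRET` constant are illustrative and this is not flask_login's actual code:

```python
import hashlib
import hmac

SECRET = b"app-secret-key"  # illustrative only; load from config in practice

def encode_cookie(user_id: str) -> str:
    # "payload|digest" layout: the value carries its own signature
    digest = hmac.new(SECRET, user_id.encode(), hashlib.sha512).hexdigest()
    return f"{user_id}|{digest}"

def decode_cookie(cookie: str):
    # Return the user id only if the signature verifies, else None
    try:
        payload, digest = cookie.rsplit("|", 1)
    except ValueError:
        return None
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha512).hexdigest()
    return payload if hmac.compare_digest(expected, digest) else None

cookie = encode_cookie("42")
print(decode_cookie(cookie))          # 42
print(decode_cookie(cookie + "00"))   # None (tampered signature)
```

`hmac.compare_digest` is used instead of `==` to avoid leaking the correct digest through timing differences.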

cps/__init__.py

@@ -1,2 +1,212 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 OzzieIsaacs, cervinko, jkrehm, bodybybuddha, ok11,
#                         andy29485, idalin, Kyosfonica, wuqi, Kennyl, lemmsh,
#                         falgh1, grunjol, csitko, ytils, xybydy, trasba, vrabe,
#                         ruben-herold, marblepebble, JackED42, SiphonSquirrel,
#                         apetresc, nanu-c, mutschler
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

__package__ = "cps"

import sys
import os
import mimetypes

from flask import Flask
from .MyLoginManager import MyLoginManager
from flask_principal import Principal

from . import logger
from .cli import CliParameter
from .constants import CONFIG_DIR
from .reverseproxy import ReverseProxied
from .server import WebServer
from .dep_check import dependency_check
from .updater import Updater
from .babel import babel, get_locale
from . import config_sql
from . import cache_buster
from . import ub, db

try:
    from flask_limiter import Limiter
    limiter_present = True
except ImportError:
    limiter_present = False
try:
    from flask_wtf.csrf import CSRFProtect
    wtf_present = True
except ImportError:
    wtf_present = False


mimetypes.init()
mimetypes.add_type('application/xhtml+xml', '.xhtml')
mimetypes.add_type('application/epub+zip', '.epub')
mimetypes.add_type('application/fb2+zip', '.fb2')
mimetypes.add_type('application/x-mobipocket-ebook', '.mobi')
mimetypes.add_type('application/x-mobipocket-ebook', '.prc')
mimetypes.add_type('application/vnd.amazon.ebook', '.azw')
mimetypes.add_type('application/x-mobi8-ebook', '.azw3')
mimetypes.add_type('application/x-cbr', '.cbr')
mimetypes.add_type('application/x-cbz', '.cbz')
mimetypes.add_type('application/x-cbt', '.cbt')
mimetypes.add_type('application/x-cb7', '.cb7')
mimetypes.add_type('image/vnd.djv', '.djv')
mimetypes.add_type('application/mpeg', '.mpeg')
mimetypes.add_type('application/mpeg', '.mp3')
mimetypes.add_type('application/mp4', '.m4a')
mimetypes.add_type('application/mp4', '.m4b')
mimetypes.add_type('application/ogg', '.ogg')
mimetypes.add_type('application/ogg', '.oga')
mimetypes.add_type('text/css', '.css')
mimetypes.add_type('text/javascript; charset=UTF-8', '.js')

log = logger.create()

app = Flask(__name__)
app.config.update(
    SESSION_COOKIE_HTTPONLY=True,
    SESSION_COOKIE_SAMESITE='Lax',
    REMEMBER_COOKIE_SAMESITE='Lax',  # will be available in flask-login 0.5.1 earliest
    WTF_CSRF_SSL_STRICT=False
)

lm = MyLoginManager()

cli_param = CliParameter()

config = config_sql.ConfigSQL()

if wtf_present:
    csrf = CSRFProtect()
else:
    csrf = None

calibre_db = db.CalibreDB()

web_server = WebServer()

updater_thread = Updater()

if limiter_present:
    limiter = Limiter(key_func=True, headers_enabled=True, auto_check=False, swallow_errors=False)
else:
    limiter = None


def create_app():
    if csrf:
        csrf.init_app(app)

    cli_param.init()

    ub.init_db(cli_param.settings_path)
    # pylint: disable=no-member
    encrypt_key, error = config_sql.get_encryption_key(os.path.dirname(cli_param.settings_path))

    config_sql.load_configuration(ub.session, encrypt_key)
    config.init_config(ub.session, encrypt_key, cli_param)

    if error:
        log.error(error)

    ub.password_change(cli_param.user_credentials)

    if sys.version_info < (3, 0):
        log.info(
            '*** Python2 is EOL since end of 2019, this version of Calibre-Web is no longer supporting Python2, '
            'please update your installation to Python3 ***')
        print(
            '*** Python2 is EOL since end of 2019, this version of Calibre-Web is no longer supporting Python2, '
            'please update your installation to Python3 ***')
        web_server.stop(True)
        sys.exit(5)

    lm.login_view = 'web.login'
    lm.anonymous_user = ub.Anonymous
    lm.session_protection = 'strong' if config.config_session == 1 else "basic"

    db.CalibreDB.update_config(config)
    db.CalibreDB.setup_db(config.config_calibre_dir, cli_param.settings_path)
    calibre_db.init_db()

    updater_thread.init_updater(config, web_server)
    # Perform dry run of updater and exit afterward
    if cli_param.dry_run:
        updater_thread.dry_run()
        sys.exit(0)
    updater_thread.start()
    requirements = dependency_check()
    for res in requirements:
        if res['found'] == "not installed":
            message = ('Cannot import {name} module, it is needed to run calibre-web, '
                       'please install it using "pip install {name}"').format(name=res["name"])
            log.info(message)
            print("*** " + message + " ***")
            web_server.stop(True)
            sys.exit(8)
    for res in requirements + dependency_check(True):
        log.info('*** "{}" version does not meet the requirements. '
                 'Should: {}, Found: {}, please consider installing required version ***'
                 .format(res['name'],
                         res['target'],
                         res['found']))
    app.wsgi_app = ReverseProxied(app.wsgi_app)

    if os.environ.get('FLASK_DEBUG'):
        cache_buster.init_cache_busting(app)
    log.info('Starting Calibre Web...')
    Principal(app)
    lm.init_app(app)
    app.secret_key = os.getenv('SECRET_KEY', config_sql.get_flask_session_key(ub.session))

    web_server.init_app(app, config)
    if hasattr(babel, "localeselector"):
        babel.init_app(app)
        babel.localeselector(get_locale)
    else:
        babel.init_app(app, locale_selector=get_locale)

    from . import services

    if services.ldap:
        services.ldap.init_app(app, config)
    if services.goodreads_support:
        services.goodreads_support.connect(config.config_goodreads_api_key,
                                           config.config_use_goodreads)
    config.store_calibre_uuid(calibre_db, db.Library_Id)
    # Configure rate limiter
    # https://limits.readthedocs.io/en/stable/storage.html
    app.config.update(RATELIMIT_ENABLED=config.config_ratelimiter)
    if config.config_limiter_uri != "" and not cli_param.memory_backend:
        app.config.update(RATELIMIT_STORAGE_URI=config.config_limiter_uri)
        if config.config_limiter_options != "":
            app.config.update(RATELIMIT_STORAGE_OPTIONS=config.config_limiter_options)
    try:
        limiter.init_app(app)
    except Exception as e:
        log.error('Wrong Flask Limiter configuration, falling back to default: {}'.format(e))
        app.config.update(RATELIMIT_STORAGE_URI=None)
        limiter.init_app(app)

    # Register scheduled tasks
    from .schedule import register_scheduled_tasks, register_startup_tasks
    register_scheduled_tasks(config.schedule_reconnect)
    register_startup_tasks()

    return app
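The `mimetypes.add_type` calls in `cps/__init__.py` above modify the process-wide type table, so later `mimetypes.guess_type` lookups resolve those eBook extensions. A quick standard-library check (repeating two of the registrations from the module above):

```python
import mimetypes

mimetypes.init()
# same registrations as in cps/__init__.py above
mimetypes.add_type('application/epub+zip', '.epub')
mimetypes.add_type('application/x-mobi8-ebook', '.azw3')

print(mimetypes.guess_type('book.epub')[0])  # application/epub+zip
print(mimetypes.guess_type('book.azw3')[0])  # application/x-mobi8-ebook
```

This is why the registrations happen once at import time: every later download or OPDS response in the process can then serve the correct `Content-Type` header.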

@@ -0,0 +1,84 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 OzzieIsaacs, cervinko, jkrehm, bodybybuddha, ok11,
#                         andy29485, idalin, Kyosfonica, wuqi, Kennyl, lemmsh,
#                         falgh1, grunjol, csitko, ytils, xybydy, trasba, vrabe,
#                         ruben-herold, marblepebble, JackED42, SiphonSquirrel,
#                         apetresc, nanu-c, mutschler
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import sys
import platform
import sqlite3
from collections import OrderedDict

import flask
import flask_login
import jinja2
from flask_babel import gettext as _

from . import db, calibre_db, converter, uploader, constants, dep_check
from .render_template import render_title_template


about = flask.Blueprint('about', __name__)

modules = dict()
req = dep_check.load_dependencies(False)
opt = dep_check.load_dependencies(True)
for i in (req + opt):
    modules[i[1]] = i[0]
modules['Jinja2'] = jinja2.__version__
modules['pySqlite'] = sqlite3.version
modules['SQLite'] = sqlite3.sqlite_version
sorted_modules = OrderedDict((sorted(modules.items(), key=lambda x: x[0].casefold())))


def collect_stats():
    if constants.NIGHTLY_VERSION[0] == "$Format:%H$":
        calibre_web_version = constants.STABLE_VERSION['version'].replace("b", " Beta")
    else:
        calibre_web_version = (constants.STABLE_VERSION['version'].replace("b", " Beta") + ' - '
                               + constants.NIGHTLY_VERSION[0].replace('%', '%%') + ' - '
                               + constants.NIGHTLY_VERSION[1].replace('%', '%%'))

    if getattr(sys, 'frozen', False):
        calibre_web_version += " - Exe-Version"
    elif constants.HOME_CONFIG:
        calibre_web_version += " - pyPi"

    _VERSIONS = {'Calibre Web': calibre_web_version}
    _VERSIONS.update(OrderedDict(
        Python=sys.version,
        Platform='{0[0]} {0[2]} {0[3]} {0[4]} {0[5]}'.format(platform.uname()),
    ))
    _VERSIONS.update(uploader.get_magick_version())
    _VERSIONS['Unrar'] = converter.get_unrar_version()
    _VERSIONS['Ebook converter'] = converter.get_calibre_version()
    _VERSIONS['Kepubify'] = converter.get_kepubify_version()
    _VERSIONS.update(sorted_modules)
    return _VERSIONS


@about.route("/stats")
@flask_login.login_required
def stats():
    counter = calibre_db.session.query(db.Books).count()
    authors = calibre_db.session.query(db.Authors).count()
    categories = calibre_db.session.query(db.Tags).count()
    series = calibre_db.session.query(db.Series).count()
    return render_title_template('stats.html', bookcounter=counter, authorcounter=authors, versions=collect_stats(),
                                 categorycounter=categories, seriecounter=series, title=_("Statistics"), page="stat")
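The module listing above is sorted with `key=lambda x: x[0].casefold()` so that capitalized and lowercase package names interleave alphabetically instead of all capitalized names sorting first. A quick standalone illustration of that sort key (the version strings are made-up sample data):

```python
from collections import OrderedDict

modules = {'Jinja2': '3.1.2', 'babel': '2.12.1', 'SQLite': '3.40.1', 'flask': '2.3.2'}
# casefold() gives a case-insensitive ordering, so 'babel' sorts before
# 'Jinja2'; with a plain sort all uppercase names would come first
sorted_modules = OrderedDict(sorted(modules.items(), key=lambda x: x[0].casefold()))
print(list(sorted_modules))  # ['babel', 'flask', 'Jinja2', 'SQLite']
```

The result is what the `/stats` page renders: one alphabetized table of dependency versions regardless of how each project capitalizes its name.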

File diff suppressed because it is too large

@@ -0,0 +1,40 @@
from babel import negotiate_locale
from flask_babel import Babel, Locale
from babel.core import UnknownLocaleError
from flask import request
from flask_login import current_user

from . import logger

log = logger.create()

babel = Babel()


def get_locale():
    # if a user is logged in, use the locale from the user settings
    if current_user is not None and hasattr(current_user, "locale"):
        # if the account is the guest account bypass the config lang settings
        if current_user.name != 'Guest':
            return current_user.locale

    preferred = list()
    if request.accept_languages:
        for x in request.accept_languages.values():
            try:
                preferred.append(str(Locale.parse(x.replace('-', '_'))))
            except (UnknownLocaleError, ValueError) as e:
                log.debug('Could not parse locale "%s": %s', x, e)

    return negotiate_locale(preferred or ['en'], get_available_translations())


def get_user_locale_language(user_language):
    return Locale.parse(user_language).get_language_name(get_locale())


def get_available_locale():
    return [Locale('en')] + babel.list_translations()


def get_available_translations():
    return set(str(item) for item in get_available_locale())
|
|
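The `get_locale` hunk above falls back to babel's `negotiate_locale` over the browser's Accept-Language list. A minimal dependency-free sketch of that negotiation idea (the function name, the language-only fallback, and the `"en"` default are assumptions for illustration, not Calibre-Web code):

```python
def negotiate(preferred, available, default="en"):
    """Return the first preferred locale that is available.

    Falls back to a language-only match (de_DE -> de), then to a default.
    Simplified sketch of babel's negotiate_locale, not its real logic.
    """
    available = {loc.lower() for loc in available}
    for loc in preferred:
        if loc.lower() in available:
            return loc
        # language-only fallback: "de_DE" matches an available "de"
        lang = loc.split("_")[0].lower()
        if lang in available:
            return lang
    return default

print(negotiate(["de_DE", "en"], ["de", "en", "es"]))  # -> de
print(negotiate(["fr_FR"], ["de", "en"]))              # -> en
```

The real babel implementation also honors locale aliases and a configurable separator; this sketch only shows the ordering rule.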
@@ -1,217 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2016-2019 lemmsh cervinko Kennyl matthazinski OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import logging
import uploader
import os
from flask_babel import gettext as _
import comic

try:
    from lxml.etree import LXML_VERSION as lxmlversion
except ImportError:
    lxmlversion = None

__author__ = 'lemmsh'

logger = logging.getLogger("book_formats")

try:
    from wand.image import Image
    from wand import version as ImageVersion
    from wand.exceptions import PolicyError
    use_generic_pdf_cover = False
except (ImportError, RuntimeError) as e:
    logger.warning('cannot import Image, generating pdf covers for pdf uploads will not work: %s', e)
    use_generic_pdf_cover = True
try:
    from PyPDF2 import PdfFileReader
    from PyPDF2 import __version__ as PyPdfVersion
    use_pdf_meta = True
except ImportError as e:
    logger.warning('cannot import PyPDF2, extracting pdf metadata will not work: %s', e)
    use_pdf_meta = False

try:
    import epub
    use_epub_meta = True
except ImportError as e:
    logger.warning('cannot import epub, extracting epub metadata will not work: %s', e)
    use_epub_meta = False

try:
    import fb2
    use_fb2_meta = True
except ImportError as e:
    logger.warning('cannot import fb2, extracting fb2 metadata will not work: %s', e)
    use_fb2_meta = False

try:
    from PIL import Image
    from PIL import __version__ as PILversion
    use_PIL = True
except ImportError:
    use_PIL = False


def process(tmp_file_path, original_file_name, original_file_extension):
    meta = None
    try:
        if ".PDF" == original_file_extension.upper():
            meta = pdf_meta(tmp_file_path, original_file_name, original_file_extension)
        if ".EPUB" == original_file_extension.upper() and use_epub_meta is True:
            meta = epub.get_epub_info(tmp_file_path, original_file_name, original_file_extension)
        if ".FB2" == original_file_extension.upper() and use_fb2_meta is True:
            meta = fb2.get_fb2_info(tmp_file_path, original_file_extension)
        if original_file_extension.upper() in ['.CBZ', '.CBT']:
            meta = comic.get_comic_info(tmp_file_path, original_file_name, original_file_extension)

    except Exception as ex:
        logger.warning('cannot parse metadata, using default: %s', ex)

    if meta and meta.title.strip() and meta.author.strip():
        return meta
    else:
        return default_meta(tmp_file_path, original_file_name, original_file_extension)


def default_meta(tmp_file_path, original_file_name, original_file_extension):
    return uploader.BookMeta(
        file_path=tmp_file_path,
        extension=original_file_extension,
        title=original_file_name,
        author=u"Unknown",
        cover=None,
        description="",
        tags="",
        series="",
        series_id="",
        languages="")


def pdf_meta(tmp_file_path, original_file_name, original_file_extension):

    if use_pdf_meta:
        pdf = PdfFileReader(open(tmp_file_path, 'rb'), strict=False)
        doc_info = pdf.getDocumentInfo()
    else:
        doc_info = None

    if doc_info is not None:
        author = doc_info.author if doc_info.author else u"Unknown"
        title = doc_info.title if doc_info.title else original_file_name
        subject = doc_info.subject
    else:
        author = u"Unknown"
        title = original_file_name
        subject = ""
    return uploader.BookMeta(
        file_path=tmp_file_path,
        extension=original_file_extension,
        title=title,
        author=author,
        cover=pdf_preview(tmp_file_path, original_file_name),
        description=subject,
        tags="",
        series="",
        series_id="",
        languages="")


def pdf_preview(tmp_file_path, tmp_dir):
    if use_generic_pdf_cover:
        return None
    else:
        if use_PIL:
            try:
                input1 = PdfFileReader(open(tmp_file_path, 'rb'), strict=False)
                page0 = input1.getPage(0)
                xObject = page0['/Resources']['/XObject'].getObject()

                for obj in xObject:
                    if xObject[obj]['/Subtype'] == '/Image':
                        size = (xObject[obj]['/Width'], xObject[obj]['/Height'])
                        data = xObject[obj]._data  # xObject[obj].getData()
                        if xObject[obj]['/ColorSpace'] == '/DeviceRGB':
                            mode = "RGB"
                        else:
                            mode = "P"
                        if '/Filter' in xObject[obj]:
                            if xObject[obj]['/Filter'] == '/FlateDecode':
                                img = Image.frombytes(mode, size, data)
                                cover_file_name = os.path.splitext(tmp_file_path)[0] + ".cover.png"
                                img.save(filename=os.path.join(tmp_dir, cover_file_name))
                                return cover_file_name
                                # img.save(obj[1:] + ".png")
                            elif xObject[obj]['/Filter'] == '/DCTDecode':
                                cover_file_name = os.path.splitext(tmp_file_path)[0] + ".cover.jpg"
                                img = open(cover_file_name, "wb")
                                img.write(data)
                                img.close()
                                return cover_file_name
                            elif xObject[obj]['/Filter'] == '/JPXDecode':
                                cover_file_name = os.path.splitext(tmp_file_path)[0] + ".cover.jp2"
                                img = open(cover_file_name, "wb")
                                img.write(data)
                                img.close()
                                return cover_file_name
                        else:
                            img = Image.frombytes(mode, size, data)
                            cover_file_name = os.path.splitext(tmp_file_path)[0] + ".cover.png"
                            img.save(filename=os.path.join(tmp_dir, cover_file_name))
                            return cover_file_name
            except Exception as ex:
                print(ex)
        try:
            cover_file_name = os.path.splitext(tmp_file_path)[0] + ".cover.jpg"
            with Image(filename=tmp_file_path + "[0]", resolution=150) as img:
                img.compression_quality = 88
                img.save(filename=os.path.join(tmp_dir, cover_file_name))
            return cover_file_name
        except PolicyError as ex:
            logger.warning('Pdf extraction forbidden by Imagemagick policy: %s', ex)
            return None
        except Exception as ex:
            logger.warning('Cannot extract cover image, using default: %s', ex)
            return None


def get_versions():
    if not use_generic_pdf_cover:
        IVersion = ImageVersion.MAGICK_VERSION
        WVersion = ImageVersion.VERSION
    else:
        IVersion = _(u'not installed')
        WVersion = _(u'not installed')
    if use_pdf_meta:
        PVersion = 'v' + PyPdfVersion
    else:
        PVersion = _(u'not installed')
    if lxmlversion:
        XVersion = 'v' + '.'.join(map(str, lxmlversion))
    else:
        XVersion = _(u'not installed')
    if use_PIL:
        PILVersion = 'v' + PILversion
    else:
        PILVersion = _(u'not installed')
    return {'Image Magick': IVersion,
            'PyPdf': PVersion,
            'lxml': XVersion,
            'Wand': WVersion,
            'Pillow': PILVersion}
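The deleted `process()` above keeps extractor output only when both title and author carry non-whitespace content; otherwise `default_meta` substitutes the filename and "Unknown". A sketch of that fallback rule with a trimmed-down stand-in tuple (the real `uploader.BookMeta` has more fields; `pick_meta` is a hypothetical name for illustration):

```python
from collections import namedtuple

# trimmed-down stand-in for uploader.BookMeta (assumption: real tuple has more fields)
BookMeta = namedtuple('BookMeta', ['title', 'author'])


def pick_meta(extracted, file_name):
    # same rule as process(): keep extracted metadata only if both
    # title and author contain non-whitespace content
    if extracted and extracted.title.strip() and extracted.author.strip():
        return extracted
    return BookMeta(title=file_name, author="Unknown")


print(pick_meta(BookMeta("Dune", "Herbert"), "x.pdf").author)  # -> Herbert
print(pick_meta(BookMeta("  ", "Herbert"), "x.pdf").title)     # -> x.pdf
```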
@@ -1,3 +1,5 @@
 # -*- coding: utf-8 -*-

 # This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
 # Copyright (C) 2016-2019 jkrehm andy29485 OzzieIsaacs
 #
@@ -17,8 +19,13 @@
 # Inspired by https://github.com/ChrisTM/Flask-CacheBust
 # Uses query strings so CSS font files are found without having to resort to absolute URLs

-import hashlib
 import os
+import hashlib
+
+from . import logger
+
+log = logger.create()


 def init_cache_busting(app):
@@ -34,28 +41,32 @@ def init_cache_busting(app):
     hash_table = {}  # map of file hashes

-    app.logger.debug('Computing cache-busting values...')
+    log.debug('Computing cache-busting values...')
     # compute file hashes
     for dirpath, __, filenames in os.walk(static_folder):
         for filename in filenames:
             # compute version component
             rooted_filename = os.path.join(dirpath, filename)
-            with open(rooted_filename, 'rb') as f:
-                file_hash = hashlib.md5(f.read()).hexdigest()[:7]
-
-            # save version to tables
-            file_path = rooted_filename.replace(static_folder, "")
-            file_path = file_path.replace("\\", "/")  # Convert Windows path to web path
-            hash_table[file_path] = file_hash
-    app.logger.debug('Finished computing cache-busting values')
+            try:
+                with open(rooted_filename, 'rb') as f:
+                    file_hash = hashlib.md5(f.read()).hexdigest()[:7]  # nosec
+                # save version to tables
+                file_path = rooted_filename.replace(static_folder, "")
+                file_path = file_path.replace("\\", "/")  # Convert Windows path to web path
+                hash_table[file_path] = file_hash
+            except PermissionError:
+                log.error("No permission to access {} file.".format(rooted_filename))
+
+    log.debug('Finished computing cache-busting values')

-    def bust_filename(filename):
-        return hash_table.get(filename, "")
+    def bust_filename(file_name):
+        return hash_table.get(file_name, "")

-    def unbust_filename(filename):
-        return filename.split("?", 1)[0]
+    def unbust_filename(file_name):
+        return file_name.split("?", 1)[0]

     @app.url_defaults
+    # pylint: disable=unused-variable
     def reverse_to_cache_busted_url(endpoint, values):
         """
         Make `url_for` produce busted filenames when using the 'static' endpoint.
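The hunk above tags each static file with the first seven hex digits of its MD5 digest, later appended to URLs as a cache-busting query string. A self-contained sketch of just the tagging step (the temp file and `bust_tag` name are illustration only):

```python
import hashlib
import os
import tempfile


def bust_tag(path):
    # same scheme as init_cache_busting: a short, non-cryptographic content tag
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()[:7]  # nosec


with tempfile.TemporaryDirectory() as d:
    css = os.path.join(d, "style.css")
    with open(css, "wb") as f:
        f.write(b"body { color: black; }")
    tag = bust_tag(css)
    print(tag)  # 7-character hex tag, e.g. served as style.css?q=<tag>
```

Because the tag is derived from file content, it changes exactly when the file changes, so browsers re-fetch only modified assets.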
@@ -0,0 +1,53 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from . import logger
from lxml.etree import ParserError

try:
    # at least bleach 6.0 is needed -> incompatible change from list arguments to set arguments
    from bleach import clean_text as clean_html
    BLEACH = True
except ImportError:
    try:
        BLEACH = False
        from nh3 import clean as clean_html
    except ImportError:
        try:
            BLEACH = False
            from lxml.html.clean import clean_html
        except ImportError:
            clean_html = None


log = logger.create()


def clean_string(unsafe_text, book_id=0):
    try:
        if BLEACH:
            safe_text = clean_html(unsafe_text, tags=set(), attributes=set())
        else:
            safe_text = clean_html(unsafe_text)
    except ParserError as e:
        log.error("Comments of book {} are corrupted: {}".format(book_id, e))
        safe_text = ""
    except TypeError as e:
        log.error("Comments can't be parsed, maybe 'lxml' is too new, try installing 'bleach': {}".format(e))
        safe_text = ""
    return safe_text
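`clean_string` above delegates to whichever of bleach, nh3, or lxml is importable. A dependency-free sketch of the same stripping idea using the stdlib `html.parser` (this toy collects text content and drops every tag; it is far weaker than the real sanitizers and only illustrates the shape of the operation):

```python
from html.parser import HTMLParser


class TextOnly(HTMLParser):
    """Collect text content and drop every tag - a toy stand-in for
    bleach/nh3/lxml cleaning, not a real sanitizer."""

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)


def strip_tags(unsafe_text):
    parser = TextOnly()
    parser.feed(unsafe_text)
    return "".join(parser.parts)


print(strip_tags("<p>A <b>bold</b> comment</p>"))  # -> A bold comment
```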
150  cps/cli.py
@@ -1,7 +1,5 @@
-#!/usr/bin/env python
 # -*- coding: utf-8 -*-

 # This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
 # Copyright (C) 2018 OzzieIsaacs
 #
@@ -18,52 +16,120 @@
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.

-import argparse
-import os
 import sys
+import os
+import argparse
+import socket

-parser = argparse.ArgumentParser(description='Calibre Web is a web app'
-                                 ' providing a interface for browsing, reading and downloading eBooks\n', prog='cps.py')
-parser.add_argument('-p', metavar='path', help='path and name to settings db, e.g. /opt/cw.db')
-parser.add_argument('-g', metavar='path', help='path and name to gdrive db, e.g. /opt/gd.db')
-parser.add_argument('-c', metavar='path', help='path and name to SSL certfile, e.g. /opt/test.cert, works only in combination with keyfile')
-parser.add_argument('-k', metavar='path', help='path and name to SSL keyfile, e.g. /opt/test.key, works only in combination with certfile')
-args = parser.parse_args()
-
-generalPath = os.path.normpath(os.getenv("CALIBRE_DBPATH",
-                               os.path.dirname(os.path.realpath(__file__)) + os.sep + ".." + os.sep))
-if args.p:
-    settingspath = args.p
-else:
-    settingspath = os.path.join(generalPath, "app.db")
-
-if args.g:
-    gdpath = args.g
-else:
-    gdpath = os.path.join(generalPath, "gdrive.db")
-
-certfilepath = None
-keyfilepath = None
-if args.c:
-    if os.path.isfile(args.c):
-        certfilepath = args.c
-    else:
-        print("Certfilepath is invalid. Exiting...")
-        sys.exit(1)
-
-if args.c is "":
-    certfilepath = ""
-
-if args.k:
-    if os.path.isfile(args.k):
-        keyfilepath = args.k
-    else:
-        print("Keyfilepath is invalid. Exiting...")
-        sys.exit(1)
-
-if (args.k and not args.c) or (not args.k and args.c):
-    print("Certfile and Keyfile have to be used together. Exiting...")
-    sys.exit(1)
-
-if args.k is "":
-    keyfilepath = ""
+from .constants import CONFIG_DIR as _CONFIG_DIR
+from .constants import STABLE_VERSION as _STABLE_VERSION
+from .constants import NIGHTLY_VERSION as _NIGHTLY_VERSION
+from .constants import DEFAULT_SETTINGS_FILE, DEFAULT_GDRIVE_FILE
+
+
+def version_info():
+    if _NIGHTLY_VERSION[1].startswith('$Format'):
+        return "Calibre-Web version: %s - unknown git-clone" % _STABLE_VERSION['version'].replace("b", " Beta")
+    return "Calibre-Web version: %s -%s" % (_STABLE_VERSION['version'].replace("b", " Beta"), _NIGHTLY_VERSION[1])
+
+
+class CliParameter(object):
+
+    def init(self):
+        self.arg_parser()
+
+    def arg_parser(self):
+        parser = argparse.ArgumentParser(description='Calibre Web is a web app providing '
+                                                     'a interface for browsing, reading and downloading eBooks\n',
+                                         prog='cps.py')
+        parser.add_argument('-p', metavar='path', help='path and name to settings db, e.g. /opt/cw.db')
+        parser.add_argument('-g', metavar='path', help='path and name to gdrive db, e.g. /opt/gd.db')
+        parser.add_argument('-c', metavar='path', help='path and name to SSL certfile, e.g. /opt/test.cert, '
+                                                       'works only in combination with keyfile')
+        parser.add_argument('-k', metavar='path', help='path and name to SSL keyfile, e.g. /opt/test.key, '
+                                                       'works only in combination with certfile')
+        parser.add_argument('-o', metavar='path', help='path and name Calibre-Web logfile')
+        parser.add_argument('-v', '--version', action='version', help='Shows version number and exits Calibre-Web',
+                            version=version_info())
+        parser.add_argument('-i', metavar='ip-address', help='Server IP-Address to listen')
+        parser.add_argument('-m', action='store_true', help='Use Memory-backend as limiter backend, use this parameter in case of miss configured backend')
+        parser.add_argument('-s', metavar='user:pass',
+                            help='Sets specific username to new password and exits Calibre-Web')
+        parser.add_argument('-f', action='store_true', help='Flag is depreciated and will be removed in next version')
+        parser.add_argument('-l', action='store_true', help='Allow loading covers from localhost')
+        parser.add_argument('-d', action='store_true', help='Dry run of updater to check file permissions in advance '
+                                                            'and exits Calibre-Web')
+        parser.add_argument('-r', action='store_true', help='Enable public database reconnect route under /reconnect')
+        args = parser.parse_args()
+
+        self.logpath = args.o or ""
+        self.settings_path = args.p or os.path.join(_CONFIG_DIR, DEFAULT_SETTINGS_FILE)
+        self.gd_path = args.g or os.path.join(_CONFIG_DIR, DEFAULT_GDRIVE_FILE)
+
+        if os.path.isdir(self.settings_path):
+            self.settings_path = os.path.join(self.settings_path, DEFAULT_SETTINGS_FILE)
+
+        if os.path.isdir(self.gd_path):
+            self.gd_path = os.path.join(self.gd_path, DEFAULT_GDRIVE_FILE)
+
+        # handle and check parameter for ssl encryption
+        self.certfilepath = None
+        self.keyfilepath = None
+        if args.c:
+            if os.path.isfile(args.c):
+                self.certfilepath = args.c
+            else:
+                print("Certfile path is invalid. Exiting...")
+                sys.exit(1)
+
+        if args.c == "":
+            self.certfilepath = ""
+
+        if args.k:
+            if os.path.isfile(args.k):
+                self.keyfilepath = args.k
+            else:
+                print("Keyfile path is invalid. Exiting...")
+                sys.exit(1)
+
+        if (args.k and not args.c) or (not args.k and args.c):
+            print("Certfile and Keyfile have to be used together. Exiting...")
+            sys.exit(1)
+
+        if args.k == "":
+            self.keyfilepath = ""
+
+        # overwrite limiter backend
+        self.memory_backend = args.m or None
+        # dry run updater
+        self.dry_run = args.d or None
+        # enable reconnect endpoint for docker database reconnect
+        self.reconnect_enable = args.r or os.environ.get("CALIBRE_RECONNECT", None)
+        # load covers from localhost
+        self.allow_localhost = args.l or os.environ.get("CALIBRE_LOCALHOST", None)
+        # handle and check ip address argument
+        self.ip_address = args.i or None
+        if self.ip_address:
+            try:
+                # try to parse the given ip address with socket
+                if hasattr(socket, 'inet_pton'):
+                    if ':' in self.ip_address:
+                        socket.inet_pton(socket.AF_INET6, self.ip_address)
+                    else:
+                        socket.inet_pton(socket.AF_INET, self.ip_address)
+                else:
+                    # on Windows python < 3.4, inet_pton is not available
+                    # inet_aton only handles IPv4 addresses
+                    socket.inet_aton(self.ip_address)
+            except socket.error as err:
+                print(self.ip_address, ':', err)
+                sys.exit(1)
+
+        # handle and check user password argument
+        self.user_credentials = args.s or None
+        if self.user_credentials and ":" not in self.user_credentials:
+            print("No valid 'username:password' format")
+            sys.exit(3)
+
+        if args.f:
+            print("Warning: -f flag is depreciated and will be removed in next version")
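The `-i` handling in the hunk above validates the listen address by attempting `socket.inet_pton` with the family guessed from the presence of `':'`. The same check extracted into a small predicate (the function name is illustration only):

```python
import socket


def is_valid_listen_address(ip_address):
    """True if ip_address parses as IPv4 or IPv6 (same check as cli.py's -i handling)."""
    family = socket.AF_INET6 if ':' in ip_address else socket.AF_INET
    try:
        socket.inet_pton(family, ip_address)
        return True
    except socket.error:
        return False


print(is_valid_listen_address("192.168.0.10"))  # -> True
print(is_valid_listen_address("::1"))           # -> True
print(is_valid_listen_address("999.1.1.1"))     # -> False
```

Note that `inet_pton` rejects shorthand IPv4 forms such as "127.1" that the older `inet_aton` fallback would accept.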
176  cps/comic.py
@ -1,8 +1,7 @@
|
||||||
#!/usr/bin/env python
|
|
||||||
# -*- coding: utf-8 -*-
|
# -*- coding: utf-8 -*-
|
||||||
|
|
||||||
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
|
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
|
||||||
# Copyright (C) 2018 OzzieIsaacs
|
# Copyright (C) 2018-2022 OzzieIsaacs
|
||||||
#
|
#
|
||||||
# This program is free software: you can redistribute it and/or modify
|
# This program is free software: you can redistribute it and/or modify
|
||||||
# it under the terms of the GNU General Public License as published by
|
# it under the terms of the GNU General Public License as published by
|
||||||
|
@ -17,21 +16,60 @@
|
||||||
# You should have received a copy of the GNU General Public License
|
# You should have received a copy of the GNU General Public License
|
||||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
import zipfile
|
|
||||||
import tarfile
|
|
||||||
import os
|
import os
|
||||||
import uploader
|
|
||||||
|
from . import logger, isoLanguages, cover
|
||||||
|
from .constants import BookMeta
|
||||||
|
|
||||||
|
try:
|
||||||
|
from wand.image import Image
|
||||||
|
use_IM = True
|
||||||
|
except (ImportError, RuntimeError) as e:
|
||||||
|
use_IM = False
|
||||||
|
|
||||||
|
log = logger.create()
|
||||||
|
|
||||||
|
try:
|
||||||
|
from comicapi.comicarchive import ComicArchive, MetaDataStyle
|
||||||
|
use_comic_meta = True
|
||||||
|
try:
|
||||||
|
from comicapi import __version__ as comic_version
|
||||||
|
except ImportError:
|
||||||
|
comic_version = ''
|
||||||
|
try:
|
||||||
|
from comicapi.comicarchive import load_archive_plugins
|
||||||
|
import comicapi.utils
|
||||||
|
comicapi.utils.add_rar_paths()
|
||||||
|
except ImportError:
|
||||||
|
load_archive_plugins = None
|
||||||
|
except (ImportError, LookupError) as e:
|
||||||
|
log.debug('Cannot import comicapi, extracting comic metadata will not work: %s', e)
|
||||||
|
import zipfile
|
||||||
|
import tarfile
|
||||||
|
try:
|
||||||
|
import rarfile
|
||||||
|
use_rarfile = True
|
||||||
|
except (ImportError, SyntaxError) as e:
|
||||||
|
log.debug('Cannot import rarfile, extracting cover files from rar files will not work: %s', e)
|
||||||
|
use_rarfile = False
|
||||||
|
try:
|
||||||
|
import py7zr
|
||||||
|
use_7zip = True
|
||||||
|
except (ImportError, SyntaxError) as e:
|
||||||
|
log.debug('Cannot import py7zr, extracting cover files from CB7 files will not work: %s', e)
|
||||||
|
use_7zip = False
|
||||||
|
use_comic_meta = False
|
||||||
|
|
||||||
|
|
||||||
def extractCover(tmp_file_name, original_file_extension):
|
def _extract_cover_from_archive(original_file_extension, tmp_file_name, rar_executable):
|
||||||
cover_data = None
|
cover_data = extension = None
|
||||||
if original_file_extension.upper() == '.CBZ':
|
if original_file_extension.upper() == '.CBZ':
|
||||||
cf = zipfile.ZipFile(tmp_file_name)
|
cf = zipfile.ZipFile(tmp_file_name)
|
||||||
for name in cf.namelist():
|
for name in cf.namelist():
|
||||||
ext = os.path.splitext(name)
|
ext = os.path.splitext(name)
|
||||||
if len(ext) > 1:
|
if len(ext) > 1:
|
||||||
extension = ext[1].lower()
|
extension = ext[1].lower()
|
||||||
if extension == '.jpg':
|
if extension in cover.COVER_EXTENSIONS:
|
||||||
cover_data = cf.read(name)
|
cover_data = cf.read(name)
|
||||||
break
|
break
|
||||||
elif original_file_extension.upper() == '.CBT':
|
elif original_file_extension.upper() == '.CBT':
|
||||||
|
@ -40,33 +78,111 @@ def extractCover(tmp_file_name, original_file_extension):
|
||||||
ext = os.path.splitext(name)
|
ext = os.path.splitext(name)
|
||||||
if len(ext) > 1:
|
if len(ext) > 1:
|
||||||
extension = ext[1].lower()
|
extension = ext[1].lower()
|
||||||
if extension == '.jpg':
|
if extension in cover.COVER_EXTENSIONS:
|
||||||
cover_data = cf.extractfile(name).read()
|
cover_data = cf.extractfile(name).read()
|
||||||
break
|
break
|
||||||
|
elif original_file_extension.upper() == '.CBR' and use_rarfile:
|
||||||
|
try:
|
||||||
|
rarfile.UNRAR_TOOL = rar_executable
|
||||||
|
cf = rarfile.RarFile(tmp_file_name)
|
||||||
|
for name in cf.namelist():
|
||||||
|
ext = os.path.splitext(name)
|
||||||
|
if len(ext) > 1:
|
||||||
|
extension = ext[1].lower()
|
||||||
|
if extension in cover.COVER_EXTENSIONS:
|
||||||
|
cover_data = cf.read([name])
|
||||||
|
break
|
||||||
|
except Exception as ex:
|
||||||
|
log.error('Rarfile failed with error: {}'.format(ex))
|
||||||
|
elif original_file_extension.upper() == '.CB7' and use_7zip:
|
||||||
|
cf = py7zr.SevenZipFile(tmp_file_name)
|
||||||
|
for name in cf.getnames():
|
||||||
|
ext = os.path.splitext(name)
|
||||||
|
if len(ext) > 1:
|
||||||
|
extension = ext[1].lower()
|
||||||
|
if extension in cover.COVER_EXTENSIONS:
|
||||||
|
try:
|
||||||
|
cover_data = cf.read([name])[name].read()
|
||||||
|
except (py7zr.Bad7zFile, OSError) as ex:
|
||||||
|
log.error('7Zip file failed with error: {}'.format(ex))
|
||||||
|
break
|
||||||
|
return cover_data, extension
|
||||||
|
|
||||||
prefix = os.path.dirname(tmp_file_name)
|
|
||||||
if cover_data:
|
def _extract_cover(tmp_file_name, original_file_extension, rar_executable):
|
||||||
tmp_cover_name = prefix + '/cover' + extension
|
cover_data = extension = None
|
||||||
image = open(tmp_cover_name, 'wb')
|
if use_comic_meta:
|
||||||
image.write(cover_data)
|
try:
|
||||||
image.close()
|
archive = ComicArchive(tmp_file_name, rar_exe_path=rar_executable)
|
||||||
|
except TypeError:
|
||||||
|
archive = ComicArchive(tmp_file_name)
|
||||||
|
name_list = archive.getPageNameList if hasattr(archive, "getPageNameList") else archive.get_page_name_list
|
||||||
|
for index, name in enumerate(name_list()):
|
||||||
|
ext = os.path.splitext(name)
|
||||||
|
if len(ext) > 1:
|
||||||
|
extension = ext[1].lower()
|
||||||
|
if extension in cover.COVER_EXTENSIONS:
|
||||||
|
get_page = archive.getPage if hasattr(archive, "getPageNameList") else archive.get_page
|
||||||
|
cover_data = get_page(index)
|
||||||
|
break
|
||||||
else:
|
else:
|
||||||
tmp_cover_name = None
|
cover_data, extension = _extract_cover_from_archive(original_file_extension, tmp_file_name, rar_executable)
|
||||||
return tmp_cover_name
|
return cover.cover_processing(tmp_file_name, cover_data, extension)
|
||||||
|
|
||||||
|
|
||||||
def get_comic_info(tmp_file_path, original_file_name, original_file_extension):
|
def get_comic_info(tmp_file_path, original_file_name, original_file_extension, rar_executable):
    if use_comic_meta:
        try:
            archive = ComicArchive(tmp_file_path, rar_exe_path=rar_executable)
        except TypeError:
            load_archive_plugins(force=True, rar=rar_executable)
            archive = ComicArchive(tmp_file_path)
        if hasattr(archive, "seemsToBeAComicArchive"):
            seems_archive = archive.seemsToBeAComicArchive
        else:
            seems_archive = archive.seems_to_be_a_comic_archive
        if seems_archive():
            has_metadata = archive.hasMetadata if hasattr(archive, "hasMetadata") else archive.has_metadata
            if has_metadata(MetaDataStyle.CIX):
                style = MetaDataStyle.CIX
            elif has_metadata(MetaDataStyle.CBI):
                style = MetaDataStyle.CBI
            else:
                style = None

            read_metadata = archive.readMetadata if hasattr(archive, "readMetadata") else archive.read_metadata
            loaded_metadata = read_metadata(style)

            lang = loaded_metadata.language or ""
            loaded_metadata.language = isoLanguages.get_lang3(lang)

            return BookMeta(
                file_path=tmp_file_path,
                extension=original_file_extension,
                title=loaded_metadata.title or original_file_name,
                author=" & ".join([credit["person"]
                                   for credit in loaded_metadata.credits if credit["role"] == "Writer"]) or 'Unknown',
                cover=_extract_cover(tmp_file_path, original_file_extension, rar_executable),
                description=loaded_metadata.comments or "",
                tags="",
                series=loaded_metadata.series or "",
                series_id=loaded_metadata.issue or "",
                languages=loaded_metadata.language,
                publisher="",
                pubdate="",
                identifiers=[])

    return BookMeta(
        file_path=tmp_file_path,
        extension=original_file_extension,
        title=original_file_name,
        author='Unknown',
        cover=_extract_cover(tmp_file_path, original_file_extension, rar_executable),
        description="",
        tags="",
        series="",
        series_id="",
        languages="",
        publisher="",
        pubdate="",
        identifiers=[])
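The author field above is built by joining every credit whose role is "Writer". In isolation, with a hypothetical credit list standing in for `loaded_metadata.credits`, the expression behaves as follows:

```python
# Hypothetical credit list; the real one comes from the comic archive's metadata.
credit_list = [
    {"person": "Alan Moore", "role": "Writer"},
    {"person": "Dave Gibbons", "role": "Artist"},
    {"person": "Some Co-Writer", "role": "Writer"},
]

# Join all writers with " & "; fall back to 'Unknown' when the join is empty.
author = " & ".join([credit["person"]
                     for credit in credit_list if credit["role"] == "Writer"]) or 'Unknown'
print(author)  # Alan Moore & Some Co-Writer

no_writers = " & ".join([credit["person"]
                         for credit in credit_list if credit["role"] == "Colorist"]) or 'Unknown'
print(no_writers)  # Unknown
```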
@@ -0,0 +1,575 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2019 OzzieIsaacs, pwr
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import os
import sys
import json

from sqlalchemy import Column, String, Integer, SmallInteger, Boolean, BLOB, JSON
from sqlalchemy.exc import OperationalError
from sqlalchemy.sql.expression import text
from sqlalchemy import exists
from cryptography.fernet import Fernet
import cryptography.exceptions
from base64 import urlsafe_b64decode

try:
    # Compatibility with sqlalchemy 2.0
    from sqlalchemy.orm import declarative_base
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

from . import constants, logger
from .subproc_wrapper import process_wait


log = logger.create()
_Base = declarative_base()

class _Flask_Settings(_Base):
    __tablename__ = 'flask_settings'

    id = Column(Integer, primary_key=True)
    flask_session_key = Column(BLOB, default=b"")

    def __init__(self, key):
        self.flask_session_key = key


# Baseclass for representing settings in app.db with email server settings and Calibre database settings
# (application settings)
class _Settings(_Base):
    __tablename__ = 'settings'

    id = Column(Integer, primary_key=True)
    mail_server = Column(String, default=constants.DEFAULT_MAIL_SERVER)
    mail_port = Column(Integer, default=25)
    mail_use_ssl = Column(SmallInteger, default=0)
    mail_login = Column(String, default='mail@example.com')
    mail_password_e = Column(String)
    mail_password = Column(String)
    mail_from = Column(String, default='automailer <mail@example.com>')
    mail_size = Column(Integer, default=25*1024*1024)
    mail_server_type = Column(SmallInteger, default=0)
    mail_gmail_token = Column(JSON, default={})

    config_calibre_dir = Column(String)
    config_calibre_uuid = Column(String)
    config_calibre_split = Column(Boolean, default=False)
    config_calibre_split_dir = Column(String)
    config_port = Column(Integer, default=constants.DEFAULT_PORT)
    config_external_port = Column(Integer, default=constants.DEFAULT_PORT)
    config_certfile = Column(String)
    config_keyfile = Column(String)
    config_trustedhosts = Column(String, default='')
    config_calibre_web_title = Column(String, default='Calibre-Web')
    config_books_per_page = Column(Integer, default=60)
    config_random_books = Column(Integer, default=4)
    config_authors_max = Column(Integer, default=0)
    config_read_column = Column(Integer, default=0)
    config_title_regex = Column(String, default=r'^(A|The|An|Der|Die|Das|Den|Ein|Eine|Einen|Dem|Des|Einem|Eines|Le|La|Les|L\'|Un|Une)\s+')
    config_theme = Column(Integer, default=0)

    config_log_level = Column(SmallInteger, default=logger.DEFAULT_LOG_LEVEL)
    config_logfile = Column(String, default=logger.DEFAULT_LOG_FILE)
    config_access_log = Column(SmallInteger, default=0)
    config_access_logfile = Column(String, default=logger.DEFAULT_ACCESS_LOG)

    config_uploading = Column(SmallInteger, default=0)
    config_anonbrowse = Column(SmallInteger, default=0)
    config_public_reg = Column(SmallInteger, default=0)
    config_remote_login = Column(Boolean, default=False)
    config_kobo_sync = Column(Boolean, default=False)

    config_default_role = Column(SmallInteger, default=0)
    config_default_show = Column(SmallInteger, default=constants.ADMIN_USER_SIDEBAR)
    config_default_language = Column(String(3), default="all")
    config_default_locale = Column(String(2), default="en")
    config_columns_to_ignore = Column(String)

    config_denied_tags = Column(String, default="")
    config_allowed_tags = Column(String, default="")
    config_restricted_column = Column(SmallInteger, default=0)
    config_denied_column_value = Column(String, default="")
    config_allowed_column_value = Column(String, default="")

    config_use_google_drive = Column(Boolean, default=False)
    config_google_drive_folder = Column(String)
    config_google_drive_watch_changes_response = Column(JSON, default={})

    config_use_goodreads = Column(Boolean, default=False)
    config_goodreads_api_key = Column(String)
    config_register_email = Column(Boolean, default=False)
    config_login_type = Column(Integer, default=0)

    config_kobo_proxy = Column(Boolean, default=False)

    config_ldap_provider_url = Column(String, default='example.org')
    config_ldap_port = Column(SmallInteger, default=389)
    config_ldap_authentication = Column(SmallInteger, default=constants.LDAP_AUTH_SIMPLE)
    config_ldap_serv_username = Column(String, default='cn=admin,dc=example,dc=org')
    config_ldap_serv_password_e = Column(String)
    config_ldap_serv_password = Column(String)
    config_ldap_encryption = Column(SmallInteger, default=0)
    config_ldap_cacert_path = Column(String, default="")
    config_ldap_cert_path = Column(String, default="")
    config_ldap_key_path = Column(String, default="")
    config_ldap_dn = Column(String, default='dc=example,dc=org')
    config_ldap_user_object = Column(String, default='uid=%s')
    config_ldap_member_user_object = Column(String, default='')
    config_ldap_openldap = Column(Boolean, default=True)
    config_ldap_group_object_filter = Column(String, default='(&(objectclass=posixGroup)(cn=%s))')
    config_ldap_group_members_field = Column(String, default='memberUid')
    config_ldap_group_name = Column(String, default='calibreweb')

    config_kepubifypath = Column(String, default=None)
    config_converterpath = Column(String, default=None)
    config_binariesdir = Column(String, default=None)
    config_calibre = Column(String)
    config_rarfile_location = Column(String, default=None)
    config_upload_formats = Column(String, default=','.join(constants.EXTENSIONS_UPLOAD))
    config_unicode_filename = Column(Boolean, default=False)
    config_embed_metadata = Column(Boolean, default=True)

    config_updatechannel = Column(Integer, default=constants.UPDATE_STABLE)

    config_reverse_proxy_login_header_name = Column(String)
    config_allow_reverse_proxy_header_login = Column(Boolean, default=False)

    schedule_start_time = Column(Integer, default=4)
    schedule_duration = Column(Integer, default=10)
    schedule_generate_book_covers = Column(Boolean, default=False)
    schedule_generate_series_covers = Column(Boolean, default=False)
    schedule_reconnect = Column(Boolean, default=False)
    schedule_metadata_backup = Column(Boolean, default=False)

    config_password_policy = Column(Boolean, default=True)
    config_password_min_length = Column(Integer, default=8)
    config_password_number = Column(Boolean, default=True)
    config_password_lower = Column(Boolean, default=True)
    config_password_upper = Column(Boolean, default=True)
    config_password_character = Column(Boolean, default=True)
    config_password_special = Column(Boolean, default=True)
    config_session = Column(Integer, default=1)
    config_ratelimiter = Column(Boolean, default=True)
    config_limiter_uri = Column(String, default="")
    config_limiter_options = Column(String, default="")

    def __repr__(self):
        return self.__class__.__name__


# Class holds all application specific settings in calibre-web
class ConfigSQL(object):
    # pylint: disable=no-member
    def __init__(self):
        self.__dict__["dirty"] = list()

    def init_config(self, session, secret_key, cli):
        self._session = session
        self._settings = None
        self.db_configured = None
        self.config_calibre_dir = None
        self._fernet = Fernet(secret_key)
        self.cli = cli
        self.load()

        change = False

        if self.config_binariesdir is None:  # pylint: disable=access-member-before-definition
            change = True
            self.config_binariesdir = autodetect_calibre_binaries()
            self.config_converterpath = autodetect_converter_binary(self.config_binariesdir)

        if self.config_kepubifypath is None:  # pylint: disable=access-member-before-definition
            change = True
            self.config_kepubifypath = autodetect_kepubify_binary()

        if self.config_rarfile_location is None:  # pylint: disable=access-member-before-definition
            change = True
            self.config_rarfile_location = autodetect_unrar_binary()
        if change:
            self.save()
    def _read_from_storage(self):
        if self._settings is None:
            log.debug("_ConfigSQL._read_from_storage")
            self._settings = self._session.query(_Settings).first()
        return self._settings

    def get_config_certfile(self):
        if self.cli.certfilepath:
            return self.cli.certfilepath
        if self.cli.certfilepath == "":
            return None
        return self.config_certfile

    def get_config_keyfile(self):
        if self.cli.keyfilepath:
            return self.cli.keyfilepath
        if self.cli.certfilepath == "":
            return None
        return self.config_keyfile

    def get_config_ipaddress(self):
        return self.cli.ip_address or ""

    def _has_role(self, role_flag):
        return constants.has_flag(self.config_default_role, role_flag)

    def role_admin(self):
        return self._has_role(constants.ROLE_ADMIN)

    def role_download(self):
        return self._has_role(constants.ROLE_DOWNLOAD)

    def role_viewer(self):
        return self._has_role(constants.ROLE_VIEWER)

    def role_upload(self):
        return self._has_role(constants.ROLE_UPLOAD)

    def role_edit(self):
        return self._has_role(constants.ROLE_EDIT)

    def role_passwd(self):
        return self._has_role(constants.ROLE_PASSWD)

    def role_edit_shelfs(self):
        return self._has_role(constants.ROLE_EDIT_SHELFS)

    def role_delete_books(self):
        return self._has_role(constants.ROLE_DELETE_BOOKS)

    def show_element_new_user(self, value):
        return constants.has_flag(self.config_default_show, value)

    def show_detail_random(self):
        return self.show_element_new_user(constants.DETAIL_RANDOM)

    def list_denied_tags(self):
        mct = self.config_denied_tags or ""
        return [t.strip() for t in mct.split(",")]

    def list_allowed_tags(self):
        mct = self.config_allowed_tags or ""
        return [t.strip() for t in mct.split(",")]

    def list_denied_column_values(self):
        mct = self.config_denied_column_value or ""
        return [t.strip() for t in mct.split(",")]

    def list_allowed_column_values(self):
        mct = self.config_allowed_column_value or ""
        return [t.strip() for t in mct.split(",")]

    def get_log_level(self):
        return logger.get_level_name(self.config_log_level)

    def get_mail_settings(self):
        return {k: v for k, v in self.__dict__.items() if k.startswith('mail_')}

    def get_mail_server_configured(self):
        return bool((self.mail_server != constants.DEFAULT_MAIL_SERVER and self.mail_server_type == 0)
                    or (self.mail_gmail_token != {} and self.mail_server_type == 1))

    def get_scheduled_task_settings(self):
        return {k: v for k, v in self.__dict__.items() if k.startswith('schedule_')}

    def set_from_dictionary(self, dictionary, field, convertor=None, default=None, encode=None):
        """Possibly updates a field of this object.
        The new value, if present, is grabbed from the given dictionary, and optionally passed through a convertor.

        :returns: `True` if the field has changed value
        """
        new_value = dictionary.get(field, default)
        if new_value is None:
            return False

        if field not in self.__dict__:
            log.warning("_ConfigSQL trying to set unknown field '%s' = %r", field, new_value)
            return False

        if convertor is not None:
            if encode:
                new_value = convertor(new_value.encode(encode))
            else:
                new_value = convertor(new_value)

        current_value = self.__dict__.get(field)
        if current_value == new_value:
            return False

        setattr(self, field, new_value)
        return True
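The update logic of `set_from_dictionary` can be exercised in isolation. The sketch below mimics its behaviour on a plain object (the `Cfg` class and its field are illustrative, not part of Calibre-Web), showing that the method only reports `True` when a known field actually changes value:

```python
def set_from_dictionary(obj, dictionary, field, convertor=None, default=None):
    """Standalone mimic of ConfigSQL.set_from_dictionary (without the encode path)."""
    new_value = dictionary.get(field, default)
    if new_value is None:
        return False            # key absent and no default -> nothing to do
    if field not in obj.__dict__:
        return False            # unknown field -> refuse silently
    if convertor is not None:
        new_value = convertor(new_value)
    if obj.__dict__.get(field) == new_value:
        return False            # unchanged -> not dirty
    setattr(obj, field, new_value)
    return True


class Cfg:
    def __init__(self):
        self.config_port = 8083


cfg = Cfg()
changed = set_from_dictionary(cfg, {"config_port": "8090"}, "config_port", convertor=int)
print(changed, cfg.config_port)  # True 8090 -- form string converted to int
print(set_from_dictionary(cfg, {}, "config_port"))  # False -- key absent, no change
```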
    def to_dict(self):
        storage = {}
        for k, v in self.__dict__.items():
            if k[0] != '_' and not k.endswith("_e") and not k == "cli":
                storage[k] = v
        return storage

    def load(self):
        """Load all configuration values from the underlying storage."""
        s = self._read_from_storage()  # type: _Settings
        for k, v in s.__dict__.items():
            if k[0] != '_':
                if v is None:
                    # if the storage column has no value, apply the (possible) default
                    column = s.__class__.__dict__.get(k)
                    if column.default is not None:
                        v = column.default.arg
                if k.endswith("_e") and v is not None:
                    try:
                        setattr(self, k, self._fernet.decrypt(v).decode())
                    except cryptography.fernet.InvalidToken:
                        setattr(self, k, "")
                else:
                    setattr(self, k, v)

        have_metadata_db = bool(self.config_calibre_dir)
        if have_metadata_db:
            db_file = os.path.join(self.config_calibre_dir, 'metadata.db')
            have_metadata_db = os.path.isfile(db_file)
        self.db_configured = have_metadata_db

        constants.EXTENSIONS_UPLOAD = [x.lstrip().rstrip().lower() for x in self.config_upload_formats.split(',')]
        from . import cli_param
        if os.environ.get('FLASK_DEBUG'):
            logfile = logger.setup(logger.LOG_TO_STDOUT, logger.logging.DEBUG)
        else:
            # pylint: disable=access-member-before-definition
            logfile = logger.setup(cli_param.logpath or self.config_logfile, self.config_log_level)
        if logfile != os.path.abspath(self.config_logfile):
            if logfile != os.path.abspath(cli_param.logpath):
                log.warning("Log path %s not valid, falling back to default", self.config_logfile)
            self.config_logfile = logfile
            s.config_logfile = logfile
            self._session.merge(s)
            try:
                self._session.commit()
            except OperationalError as e:
                log.error('Database error: %s', e)
                self._session.rollback()
        self.__dict__["dirty"] = list()

    def save(self):
        """Apply all configuration values to the underlying storage."""
        s = self._read_from_storage()  # type: _Settings

        for k in self.dirty:
            if k[0] == '_':
                continue
            if hasattr(s, k):
                if k.endswith("_e"):
                    setattr(s, k, self._fernet.encrypt(self.__dict__[k].encode()))
                else:
                    setattr(s, k, self.__dict__[k])

        log.debug("_ConfigSQL updating storage")
        self._session.merge(s)
        try:
            self._session.commit()
        except OperationalError as e:
            log.error('Database error: %s', e)
            self._session.rollback()
        self.load()

    def invalidate(self, error=None):
        if error:
            log.error(error)
        log.warning("invalidating configuration")
        self.db_configured = False
        self.save()

    def get_book_path(self):
        return self.config_calibre_split_dir if self.config_calibre_split_dir else self.config_calibre_dir

    def store_calibre_uuid(self, calibre_db, Library_table):
        try:
            calibre_uuid = calibre_db.session.query(Library_table).one_or_none()
            if self.config_calibre_uuid != calibre_uuid.uuid:
                self.config_calibre_uuid = calibre_uuid.uuid
                self.save()
        except AttributeError:
            pass

    def __setattr__(self, attr_name, attr_value):
        super().__setattr__(attr_name, attr_value)
        self.__dict__["dirty"].append(attr_name)


def _encrypt_fields(session, secret_key):
    try:
        session.query(exists().where(_Settings.mail_password_e)).scalar()
    except OperationalError:
        with session.bind.connect() as conn:
            conn.execute(text("ALTER TABLE settings ADD column 'mail_password_e' String"))
            conn.execute(text("ALTER TABLE settings ADD column 'config_ldap_serv_password_e' String"))
        session.commit()
        crypter = Fernet(secret_key)
        settings = session.query(_Settings.mail_password, _Settings.config_ldap_serv_password).first()
        if settings.mail_password:
            session.query(_Settings).update(
                {_Settings.mail_password_e: crypter.encrypt(settings.mail_password.encode())})
        if settings.config_ldap_serv_password:
            session.query(_Settings).update(
                {_Settings.config_ldap_serv_password_e:
                     crypter.encrypt(settings.config_ldap_serv_password.encode())})
        session.commit()


def _migrate_table(session, orm_class, secret_key=None):
    if secret_key:
        _encrypt_fields(session, secret_key)
    changed = False

    for column_name, column in orm_class.__dict__.items():
        if column_name[0] != '_':
            try:
                session.query(column).first()
            except OperationalError as err:
                log.debug("%s: %s", column_name, err.args[0])
                if column.default is None:
                    column_default = ""
                else:
                    if isinstance(column.default.arg, bool):
                        column_default = "DEFAULT {}".format(int(column.default.arg))
                    else:
                        column_default = "DEFAULT `{}`".format(column.default.arg)
                if isinstance(column.type, JSON):
                    column_type = "JSON"
                else:
                    column_type = column.type
                alter_table = text("ALTER TABLE %s ADD COLUMN `%s` %s %s" % (orm_class.__tablename__,
                                                                             column_name,
                                                                             column_type,
                                                                             column_default))
                log.debug(alter_table)
                session.execute(alter_table)
                changed = True
            except json.decoder.JSONDecodeError as e:
                log.error("Database corrupt column: {}".format(column_name))
                log.debug(e)

    if changed:
        try:
            session.commit()
        except OperationalError:
            session.rollback()


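The migration pattern used by `_migrate_table` (probe each mapped column with a query, and issue `ALTER TABLE ... ADD COLUMN` only when the probe fails) can be sketched with the standard-library `sqlite3` module alone; the table and column names below are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE settings (id INTEGER PRIMARY KEY)")


def ensure_column(con, table, name, decl):
    """Add a column only if a probing SELECT fails, as _migrate_table does."""
    try:
        con.execute("SELECT `%s` FROM %s LIMIT 1" % (name, table))
    except sqlite3.OperationalError:
        con.execute("ALTER TABLE %s ADD COLUMN `%s` %s" % (table, name, decl))


ensure_column(con, "settings", "config_port", "INTEGER DEFAULT 8083")
ensure_column(con, "settings", "config_port", "INTEGER DEFAULT 8083")  # second call is a no-op
cols = [row[1] for row in con.execute("PRAGMA table_info(settings)")]
print(cols)  # ['id', 'config_port']
```

The probe-then-alter approach is what makes the migration idempotent: running it against an already up-to-date schema changes nothing.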
def autodetect_calibre_binaries():
    if sys.platform == "win32":
        calibre_path = ["C:\\program files\\calibre\\",
                        "C:\\program files(x86)\\calibre\\",
                        "C:\\program files(x86)\\calibre2\\",
                        "C:\\program files\\calibre2\\"]
    else:
        calibre_path = ["/opt/calibre/"]
    for element in calibre_path:
        supported_binary_paths = [os.path.join(element, binary)
                                  for binary in constants.SUPPORTED_CALIBRE_BINARIES.values()]
        if all(os.path.isfile(binary_path) and os.access(binary_path, os.X_OK)
               for binary_path in supported_binary_paths):
            values = [process_wait([binary_path, "--version"],
                                   pattern=r'\(calibre (.*)\)') for binary_path in supported_binary_paths]
            if all(values):
                version = values[0].group(1)
                log.debug("calibre version %s", version)
                return element
    return ""


def autodetect_converter_binary(calibre_path):
    if sys.platform == "win32":
        converter_path = os.path.join(calibre_path, "ebook-convert.exe")
    else:
        converter_path = os.path.join(calibre_path, "ebook-convert")
    if calibre_path and os.path.isfile(converter_path) and os.access(converter_path, os.X_OK):
        return converter_path
    return ""


def autodetect_unrar_binary():
    if sys.platform == "win32":
        calibre_path = ["C:\\program files\\WinRar\\unRAR.exe",
                        "C:\\program files(x86)\\WinRar\\unRAR.exe"]
    else:
        calibre_path = ["/usr/bin/unrar"]
    for element in calibre_path:
        if os.path.isfile(element) and os.access(element, os.X_OK):
            return element
    return ""


def autodetect_kepubify_binary():
    if sys.platform == "win32":
        calibre_path = ["C:\\program files\\kepubify\\kepubify-windows-64Bit.exe",
                        "C:\\program files(x86)\\kepubify\\kepubify-windows-64Bit.exe"]
    else:
        calibre_path = ["/opt/kepubify/kepubify-linux-64bit", "/opt/kepubify/kepubify-linux-32bit"]
    for element in calibre_path:
        if os.path.isfile(element) and os.access(element, os.X_OK):
            return element
    return ""


def _migrate_database(session, secret_key):
    # make sure the table is created, if it does not exist
    _Base.metadata.create_all(session.bind)
    _migrate_table(session, _Settings, secret_key)
    _migrate_table(session, _Flask_Settings)


def load_configuration(session, secret_key):
    _migrate_database(session, secret_key)
    if not session.query(_Settings).count():
        session.add(_Settings())
        session.commit()


def get_flask_session_key(_session):
    flask_settings = _session.query(_Flask_Settings).one_or_none()
    if flask_settings is None:
        flask_settings = _Flask_Settings(os.urandom(32))
        _session.add(flask_settings)
        _session.commit()
    return flask_settings.flask_session_key


def get_encryption_key(key_path):
    key_file = os.path.join(key_path, ".key")
    generate = True
    error = ""
    if os.path.exists(key_file) and os.path.getsize(key_file) > 32:
        with open(key_file, "rb") as f:
            key = f.read()
        try:
            urlsafe_b64decode(key)
            generate = False
        except ValueError:
            pass
    if generate:
        key = Fernet.generate_key()
        try:
            with open(key_file, "wb") as f:
                f.write(key)
        except PermissionError as e:
            error = e
    return key, error
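The `.key` round-trip in `get_encryption_key` can be sketched with the standard library only: a Fernet key is 32 random bytes in urlsafe base64 (44 bytes on disk), so `base64`/`os.urandom` stand in for `Fernet.generate_key` here, and the temporary directory is illustrative:

```python
import os
import tempfile
from base64 import urlsafe_b64decode, urlsafe_b64encode


def get_encryption_key(key_path):
    """Re-use a valid persisted key if present, otherwise generate and store a new one."""
    key_file = os.path.join(key_path, ".key")
    generate = True
    error = ""
    if os.path.exists(key_file) and os.path.getsize(key_file) > 32:
        with open(key_file, "rb") as f:
            key = f.read()
        try:
            urlsafe_b64decode(key)   # reject files that are not valid base64
            generate = False
        except ValueError:
            pass
    if generate:
        key = urlsafe_b64encode(os.urandom(32))  # stand-in for Fernet.generate_key()
        try:
            with open(key_file, "wb") as f:
                f.write(key)
        except PermissionError as e:
            error = e
    return key, error


with tempfile.TemporaryDirectory() as tmp:
    first, _ = get_encryption_key(tmp)
    second, _ = get_encryption_key(tmp)
    print(first == second)  # True: the second call re-reads the persisted key
```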
@@ -0,0 +1,198 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2019 OzzieIsaacs, pwr
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import sys
import os
from collections import namedtuple
from sqlalchemy import __version__ as sql_version

sqlalchemy_version2 = ([int(x) for x in sql_version.split('.')] >= [2, 0, 0])

# APP_MODE - production, development, or test
APP_MODE = os.environ.get('APP_MODE', 'production')

# if installed via pip this variable is set to true (empty file with name .HOMEDIR present)
HOME_CONFIG = os.path.isfile(os.path.join(os.path.dirname(os.path.abspath(__file__)), '.HOMEDIR'))

# In executables updater is not available, so variable is set to False there
UPDATER_AVAILABLE = True

# Base dir is parent of current file, necessary if called from different folder
BASE_DIR = os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), os.pardir))
# if executable file the files should be placed in the parent dir (parallel to the exe file)

STATIC_DIR = os.path.join(BASE_DIR, 'cps', 'static')
TEMPLATES_DIR = os.path.join(BASE_DIR, 'cps', 'templates')
TRANSLATIONS_DIR = os.path.join(BASE_DIR, 'cps', 'translations')

# Cache dir - use CACHE_DIR environment variable, otherwise use the default directory: cps/cache
DEFAULT_CACHE_DIR = os.path.join(BASE_DIR, 'cps', 'cache')
CACHE_DIR = os.environ.get('CACHE_DIR', DEFAULT_CACHE_DIR)

if HOME_CONFIG:
    home_dir = os.path.join(os.path.expanduser("~"), ".calibre-web")
    if not os.path.exists(home_dir):
        os.makedirs(home_dir)
    CONFIG_DIR = os.environ.get('CALIBRE_DBPATH', home_dir)
else:
    CONFIG_DIR = os.environ.get('CALIBRE_DBPATH', BASE_DIR)
    if getattr(sys, 'frozen', False):
        CONFIG_DIR = os.path.abspath(os.path.join(CONFIG_DIR, os.pardir))


DEFAULT_SETTINGS_FILE = "app.db"
DEFAULT_GDRIVE_FILE = "gdrive.db"

ROLE_USER = 0 << 0
ROLE_ADMIN = 1 << 0
ROLE_DOWNLOAD = 1 << 1
ROLE_UPLOAD = 1 << 2
ROLE_EDIT = 1 << 3
ROLE_PASSWD = 1 << 4
ROLE_ANONYMOUS = 1 << 5
ROLE_EDIT_SHELFS = 1 << 6
ROLE_DELETE_BOOKS = 1 << 7
ROLE_VIEWER = 1 << 8

ALL_ROLES = {
    "admin_role": ROLE_ADMIN,
    "download_role": ROLE_DOWNLOAD,
    "upload_role": ROLE_UPLOAD,
    "edit_role": ROLE_EDIT,
    "passwd_role": ROLE_PASSWD,
    "edit_shelf_role": ROLE_EDIT_SHELFS,
    "delete_role": ROLE_DELETE_BOOKS,
    "viewer_role": ROLE_VIEWER,
}

DETAIL_RANDOM = 1 << 0
SIDEBAR_LANGUAGE = 1 << 1
SIDEBAR_SERIES = 1 << 2
SIDEBAR_CATEGORY = 1 << 3
SIDEBAR_HOT = 1 << 4
SIDEBAR_RANDOM = 1 << 5
SIDEBAR_AUTHOR = 1 << 6
SIDEBAR_BEST_RATED = 1 << 7
SIDEBAR_READ_AND_UNREAD = 1 << 8
SIDEBAR_RECENT = 1 << 9
SIDEBAR_SORTED = 1 << 10
MATURE_CONTENT = 1 << 11
SIDEBAR_PUBLISHER = 1 << 12
SIDEBAR_RATING = 1 << 13
SIDEBAR_FORMAT = 1 << 14
SIDEBAR_ARCHIVED = 1 << 15
SIDEBAR_DOWNLOAD = 1 << 16
SIDEBAR_LIST = 1 << 17

sidebar_settings = {
    "detail_random": DETAIL_RANDOM,
    "sidebar_language": SIDEBAR_LANGUAGE,
    "sidebar_series": SIDEBAR_SERIES,
    "sidebar_category": SIDEBAR_CATEGORY,
    "sidebar_random": SIDEBAR_RANDOM,
    "sidebar_author": SIDEBAR_AUTHOR,
    "sidebar_best_rated": SIDEBAR_BEST_RATED,
    "sidebar_read_and_unread": SIDEBAR_READ_AND_UNREAD,
    "sidebar_recent": SIDEBAR_RECENT,
    "sidebar_sorted": SIDEBAR_SORTED,
    "sidebar_publisher": SIDEBAR_PUBLISHER,
    "sidebar_rating": SIDEBAR_RATING,
    "sidebar_format": SIDEBAR_FORMAT,
    "sidebar_archived": SIDEBAR_ARCHIVED,
    "sidebar_download": SIDEBAR_DOWNLOAD,
    "sidebar_list": SIDEBAR_LIST,
}


ADMIN_USER_ROLES = sum(r for r in ALL_ROLES.values()) & ~ROLE_ANONYMOUS
ADMIN_USER_SIDEBAR = (SIDEBAR_LIST << 1) - 1

UPDATE_STABLE = 0 << 0
AUTO_UPDATE_STABLE = 1 << 0
UPDATE_NIGHTLY = 1 << 1
AUTO_UPDATE_NIGHTLY = 1 << 2

LOGIN_STANDARD = 0
LOGIN_LDAP = 1
LOGIN_OAUTH = 2

LDAP_AUTH_ANONYMOUS = 0
LDAP_AUTH_UNAUTHENTICATE = 1
LDAP_AUTH_SIMPLE = 0

DEFAULT_MAIL_SERVER = "mail.example.org"

DEFAULT_PASSWORD = "admin123"  # nosec
DEFAULT_PORT = 8083
env_CALIBRE_PORT = os.environ.get("CALIBRE_PORT", DEFAULT_PORT)
try:
    DEFAULT_PORT = int(env_CALIBRE_PORT)
except ValueError:
    print('Environment variable CALIBRE_PORT has invalid value (%s), falling back to default (8083)' % env_CALIBRE_PORT)
del env_CALIBRE_PORT


EXTENSIONS_AUDIO = {'mp3', 'mp4', 'ogg', 'opus', 'wav', 'flac', 'm4a', 'm4b'}
|
||||||
|
EXTENSIONS_CONVERT_FROM = ['pdf', 'epub', 'mobi', 'azw3', 'docx', 'rtf', 'fb2', 'lit', 'lrf',
|
||||||
|
'txt', 'htmlz', 'rtf', 'odt', 'cbz', 'cbr', 'prc']
|
||||||
|
EXTENSIONS_CONVERT_TO = ['pdf', 'epub', 'mobi', 'azw3', 'docx', 'rtf', 'fb2',
|
||||||
|
'lit', 'lrf', 'txt', 'htmlz', 'rtf', 'odt']
|
||||||
|
EXTENSIONS_UPLOAD = {'txt', 'pdf', 'epub', 'kepub', 'mobi', 'azw', 'azw3', 'cbr', 'cbz', 'cbt', 'cb7', 'djvu', 'djv',
|
||||||
|
'prc', 'doc', 'docx', 'fb2', 'html', 'rtf', 'lit', 'odt', 'mp3', 'mp4', 'ogg',
|
||||||
|
'opus', 'wav', 'flac', 'm4a', 'm4b'}
|
||||||
|
|
||||||
|
_extension = ""
|
||||||
|
if sys.platform == "win32":
|
||||||
|
_extension = ".exe"
|
||||||
|
SUPPORTED_CALIBRE_BINARIES = {binary:binary + _extension for binary in ["ebook-convert", "calibredb"]}
|
||||||
|
|
||||||
|
|
||||||
|
def has_flag(value, bit_flag):
|
||||||
|
return bit_flag == (bit_flag & (value or 0))
|
||||||
|
|
||||||
|
def selected_roles(dictionary):
|
||||||
|
return sum(v for k, v in ALL_ROLES.items() if k in dictionary)
|
||||||
|
|
||||||
|
|
||||||
|
# :rtype: BookMeta
|
||||||
|
BookMeta = namedtuple('BookMeta', 'file_path, extension, title, author, cover, description, tags, series, '
|
||||||
|
'series_id, languages, publisher, pubdate, identifiers')
|
||||||
|
|
||||||
|
# python build process likes to have x.y.zbw -> b for beta and w a counting number
|
||||||
|
STABLE_VERSION = {'version': '0.6.22b'}
|
||||||
|
|
||||||
|
NIGHTLY_VERSION = dict()
|
||||||
|
NIGHTLY_VERSION[0] = '$Format:%H$'
|
||||||
|
NIGHTLY_VERSION[1] = '$Format:%cI$'
|
||||||
|
|
||||||
|
# CACHE
|
||||||
|
CACHE_TYPE_THUMBNAILS = 'thumbnails'
|
||||||
|
|
||||||
|
# Thumbnail Types
|
||||||
|
THUMBNAIL_TYPE_COVER = 1
|
||||||
|
THUMBNAIL_TYPE_SERIES = 2
|
||||||
|
THUMBNAIL_TYPE_AUTHOR = 3
|
||||||
|
|
||||||
|
# Thumbnails Sizes
|
||||||
|
COVER_THUMBNAIL_ORIGINAL = 0
|
||||||
|
COVER_THUMBNAIL_SMALL = 1
|
||||||
|
COVER_THUMBNAIL_MEDIUM = 2
|
||||||
|
COVER_THUMBNAIL_LARGE = 3
|
||||||
|
|
||||||
|
# clean-up the module namespace
|
||||||
|
del sys, os, namedtuple
|
|
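The sidebar constants are single-bit flags packed into one integer per user, with `has_flag()` testing membership and `ADMIN_USER_SIDEBAR` built as an all-bits mask. A minimal standalone sketch of that pattern (only a few of the flags reproduced here, outside the real module):

```python
# Standalone sketch of the bit-flag pattern used for sidebar settings.
SIDEBAR_RANDOM = 1 << 5
SIDEBAR_AUTHOR = 1 << 6
SIDEBAR_LIST = 1 << 17

# Mask with every sidebar bit set, as in ADMIN_USER_SIDEBAR.
ALL_SIDEBAR = (SIDEBAR_LIST << 1) - 1


def has_flag(value, bit_flag):
    # True when every bit of bit_flag is set in value; None counts as 0.
    return bit_flag == (bit_flag & (value or 0))


visible = SIDEBAR_RANDOM | SIDEBAR_AUTHOR
print(has_flag(visible, SIDEBAR_AUTHOR))   # True
print(has_flag(visible, SIDEBAR_LIST))     # False
print(has_flag(ALL_SIDEBAR, visible))      # True: the admin mask covers both bits
```

Storing the whole set of toggles in one integer keeps the user row compact and makes "grant everything" a single mask, which is why `ADMIN_USER_SIDEBAR` is `(SIDEBAR_LIST << 1) - 1` rather than a list of names.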
@@ -1,4 +1,3 @@
-#!/usr/bin/env python
 # -*- coding: utf-8 -*-

 # This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
@@ -17,51 +16,47 @@
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.

 import os
-import subprocess
-import ub
 import re
-from flask_babel import gettext as _
+from flask_babel import lazy_gettext as N_
+
+from . import config, logger
+from .subproc_wrapper import process_wait


-def versionKindle():
-    versions = _(u'not installed')
-    if os.path.exists(ub.config.config_converterpath):
-        try:
-            p = subprocess.Popen(ub.config.config_converterpath, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
-            p.wait()
-            for lines in p.stdout.readlines():
-                if isinstance(lines, bytes):
-                    lines = lines.decode('utf-8')
-                if re.search('Amazon kindlegen\(', lines):
-                    versions = lines
-        except Exception:
-            versions = _(u'Excecution permissions missing')
-    return {'kindlegen': versions}
+log = logger.create()
+
+# strings getting translated when used
+_NOT_INSTALLED = N_('not installed')
+_EXECUTION_ERROR = N_('Execution permissions missing')


-def versionCalibre():
-    versions = _(u'not installed')
-    if os.path.exists(ub.config.config_converterpath):
-        try:
-            p = subprocess.Popen([ub.config.config_converterpath, '--version'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
-            p.wait()
-            for lines in p.stdout.readlines():
-                if isinstance(lines, bytes):
-                    lines = lines.decode('utf-8')
-                if re.search('ebook-convert.*\(calibre', lines):
-                    versions = lines
-        except Exception:
-            versions = _(u'Excecution permissions missing')
-    return {'Calibre converter': versions}
+def _get_command_version(path, pattern, argument=None):
+    if os.path.exists(path):
+        command = [path]
+        if argument:
+            command.append(argument)
+        try:
+            match = process_wait(command, pattern=pattern)
+            if isinstance(match, re.Match):
+                return match.string
+        except Exception as ex:
+            log.warning("%s: %s", path, ex)
+            return _EXECUTION_ERROR
+    return _NOT_INSTALLED


-def versioncheck():
-    if ub.config.config_ebookconverter == 1:
-        return versionKindle()
-    elif ub.config.config_ebookconverter == 2:
-        return versionCalibre()
-    else:
-        return {'ebook_converter': _(u'not configured')}
+def get_calibre_version():
+    return _get_command_version(config.config_converterpath, r'ebook-convert.*\(calibre', '--version')
+
+
+def get_unrar_version():
+    unrar_version = _get_command_version(config.config_rarfile_location, r'UNRAR.*\d')
+    if unrar_version == "not installed":
+        unrar_version = _get_command_version(config.config_rarfile_location, r'unrar.*\d', '-V')
+    return unrar_version
+
+
+def get_kepubify_version():
+    return _get_command_version(config.config_kepubifypath, r'kepubify\s', '--version')
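The refactored `_get_command_version()` above probes an external binary and scans its output with a regex. `process_wait()` is Calibre-Web's own wrapper; a minimal self-contained sketch of the same idea using plain `subprocess` (probing the running Python interpreter instead of `ebook-convert`, since that binary may not be present):

```python
import re
import subprocess
import sys


def get_command_version(command, pattern):
    """Run a command and return the first output line matching `pattern`,
    mimicking the _get_command_version()/process_wait() combination."""
    try:
        out = subprocess.run(command, capture_output=True, text=True, timeout=30)
    except (OSError, subprocess.TimeoutExpired):
        return "not installed"
    # Some tools print their banner on stderr, so scan both streams.
    for line in (out.stdout + out.stderr).splitlines():
        if re.search(pattern, line):
            return line
    return "not installed"


# Probe a binary that is guaranteed to exist in any Python environment.
print(get_command_version([sys.executable, '--version'], r'Python \d'))
```

Passing the command as a list (not a shell string) avoids quoting issues, which is the same reason the real code builds `command = [path]` and appends the argument separately.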
@@ -0,0 +1,48 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2022 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import os

try:
    from wand.image import Image
    use_IM = True
except (ImportError, RuntimeError) as e:
    use_IM = False


NO_JPEG_EXTENSIONS = ['.png', '.webp', '.bmp']
COVER_EXTENSIONS = ['.png', '.webp', '.bmp', '.jpg', '.jpeg']


def cover_processing(tmp_file_name, img, extension):
    tmp_cover_name = os.path.join(os.path.dirname(tmp_file_name), 'cover.jpg')
    if extension in NO_JPEG_EXTENSIONS:
        if use_IM:
            with Image(blob=img) as imgc:
                imgc.format = 'jpeg'
                imgc.transform_colorspace('rgb')
                imgc.save(filename=tmp_cover_name)
            return tmp_cover_name
        else:
            return None
    if img:
        with open(tmp_cover_name, 'wb') as f:
            f.write(img)
        return tmp_cover_name
    else:
        return None
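The `try: from wand.image import Image` block above is the optional-dependency pattern: the module records whether ImageMagick bindings are available in `use_IM` and degrades gracefully instead of crashing at import time. A self-contained sketch (the `cover_to_jpeg_possible` helper is illustrative, not part of Calibre-Web):

```python
# Sketch of the optional-dependency pattern: try to import an image
# library and fall back to a capability flag the rest of the module checks.
try:
    from wand.image import Image  # ImageMagick binding; may be absent
    use_IM = True
except (ImportError, RuntimeError):
    # RuntimeError covers wand being installed without a usable ImageMagick.
    use_IM = False


def cover_to_jpeg_possible(extension):
    # JPEG covers need no conversion; PNG/WebP/BMP need ImageMagick.
    return extension not in ('.png', '.webp', '.bmp') or use_IM


print(cover_to_jpeg_possible('.jpg'))  # True regardless of ImageMagick
print(cover_to_jpeg_possible('.png'))  # depends on whether wand imported
```

Catching `RuntimeError` alongside `ImportError` matters here because `wand` raises it when the ImageMagick shared library itself is missing.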
@@ -0,0 +1,80 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2012-2019 cervinko, idalin, SiphonSquirrel, ouzklcn, akushsky,
#                         OzzieIsaacs, bodybybuddha, jkrehm, matthazinski, janeczku
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import shutil
import glob
import zipfile
import json
from io import BytesIO
from flask_babel.speaklater import LazyString

import os

from flask import send_file, __version__

from . import logger, config
from .about import collect_stats

log = logger.create()


class lazyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, LazyString):
            return str(obj)
        # Let the base class default method raise the TypeError
        return json.JSONEncoder.default(self, obj)


def assemble_logfiles(file_name):
    log_list = sorted(glob.glob(file_name + '*'), reverse=True)
    wfd = BytesIO()
    for f in log_list:
        with open(f, 'rb') as fd:
            shutil.copyfileobj(fd, wfd)
    wfd.seek(0)
    if int(__version__.split('.')[0]) < 2:
        return send_file(wfd,
                         as_attachment=True,
                         attachment_filename=os.path.basename(file_name))
    else:
        return send_file(wfd,
                         as_attachment=True,
                         download_name=os.path.basename(file_name))


def send_debug():
    file_list = glob.glob(logger.get_logfile(config.config_logfile) + '*')
    file_list.extend(glob.glob(logger.get_accesslogfile(config.config_access_logfile) + '*'))
    for element in [logger.LOG_TO_STDOUT, logger.LOG_TO_STDERR]:
        if element in file_list:
            file_list.remove(element)
    memory_zip = BytesIO()
    with zipfile.ZipFile(memory_zip, 'w', compression=zipfile.ZIP_DEFLATED) as zf:
        zf.writestr('settings.txt', json.dumps(config.to_dict(), sort_keys=True, indent=2))
        zf.writestr('libs.txt', json.dumps(collect_stats(), sort_keys=True, indent=2, cls=lazyEncoder))
        for fp in file_list:
            zf.write(fp, os.path.basename(fp))
    memory_zip.seek(0)
    if int(__version__.split('.')[0]) < 2:
        return send_file(memory_zip,
                         as_attachment=True,
                         attachment_filename="Calibre-Web-debug-pack.zip")
    else:
        return send_file(memory_zip,
                         as_attachment=True,
                         download_name="Calibre-Web-debug-pack.zip")
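`send_debug()` builds the debug pack entirely in memory: a `BytesIO` buffer wrapped in a `ZipFile`, rewound with `seek(0)` before handing it to `send_file()`. A self-contained sketch with stand-in data (the `settings`/`logs` values are placeholders, not Calibre-Web's real config):

```python
import io
import json
import zipfile

# Stand-ins for config.to_dict() and the collected log files.
settings = {"port": 8083, "log_level": "INFO"}
logs = {"calibre-web.log": b"line 1\nline 2\n"}

memory_zip = io.BytesIO()
with zipfile.ZipFile(memory_zip, 'w', compression=zipfile.ZIP_DEFLATED) as zf:
    # writestr() accepts both str and bytes payloads.
    zf.writestr('settings.txt', json.dumps(settings, sort_keys=True, indent=2))
    for name, data in logs.items():
        zf.writestr(name, data)
memory_zip.seek(0)  # rewind before streaming the buffer to the client

with zipfile.ZipFile(memory_zip) as zf:
    print(zf.namelist())  # ['settings.txt', 'calibre-web.log']
```

The `seek(0)` is easy to forget: without it the consumer starts reading at end-of-buffer and receives an empty response.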
@@ -0,0 +1,109 @@
import os
import re
import sys
import json

from .constants import BASE_DIR
try:
    from importlib.metadata import version
    importlib = True
    ImportNotFound = BaseException
except ImportError:
    importlib = False
    version = None

if not importlib:
    try:
        import pkg_resources
        from pkg_resources import DistributionNotFound as ImportNotFound
        pkgresources = True
    except ImportError as e:
        pkgresources = False


def load_dependencies(optional=False):
    deps = list()
    if getattr(sys, 'frozen', False):
        pip_installed = os.path.join(BASE_DIR, ".pip_installed")
        if os.path.exists(pip_installed):
            with open(pip_installed) as f:
                exe_deps = json.loads("".join(f.readlines()))
        else:
            return deps
    if importlib or pkgresources:
        if optional:
            req_path = os.path.join(BASE_DIR, "optional-requirements.txt")
        else:
            req_path = os.path.join(BASE_DIR, "requirements.txt")
        if os.path.exists(req_path):
            with open(req_path, 'r') as f:
                for line in f:
                    if not line.startswith('#') and not line == '\n' and not line.startswith('git'):
                        res = re.match(r'(.*?)([<=>\s]+)([\d\.]+),?\s?([<=>\s]+)?([\d\.]+)?', line.strip())
                        try:
                            if getattr(sys, 'frozen', False):
                                dep_version = exe_deps[res.group(1).lower().replace('_', '-')]
                            else:
                                if importlib:
                                    dep_version = version(res.group(1))
                                else:
                                    dep_version = pkg_resources.get_distribution(res.group(1)).version
                        except (ImportNotFound, KeyError):
                            if optional:
                                continue
                            dep_version = "not installed"
                        deps.append([dep_version, res.group(1), res.group(2), res.group(3), res.group(4), res.group(5)])
    return deps


def dependency_check(optional=False):
    d = list()
    deps = load_dependencies(optional)
    for dep in deps:
        try:
            dep_version_int = [int(x) if x.isnumeric() else 0 for x in dep[0].split('.')]
            low_check = [int(x) for x in dep[3].split('.')]
            high_check = [int(x) for x in dep[5].split('.')]
        except AttributeError:
            high_check = []
        except ValueError:
            d.append({'name': dep[1],
                      'target': "available",
                      'found': "Not available"
                      })
            continue

        if dep[2].strip() == "==":
            if dep_version_int != low_check:
                d.append({'name': dep[1],
                          'found': dep[0],
                          "target": dep[2] + dep[3]})
                continue
        elif dep[2].strip() == ">=":
            if dep_version_int < low_check:
                d.append({'name': dep[1],
                          'found': dep[0],
                          "target": dep[2] + dep[3]})
                continue
        elif dep[2].strip() == ">":
            if dep_version_int <= low_check:
                d.append({'name': dep[1],
                          'found': dep[0],
                          "target": dep[2] + dep[3]})
                continue
        if dep[4] and dep[5]:
            if dep[4].strip() == "<":
                if dep_version_int >= high_check:
                    d.append(
                        {'name': dep[1],
                         'found': dep[0],
                         "target": dep[4] + dep[5]})
                    continue
            elif dep[4].strip() == "<=":
                if dep_version_int > high_check:
                    d.append(
                        {'name': dep[1],
                         'found': dep[0],
                         "target": dep[4] + dep[5]})
                    continue
    return d
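`load_dependencies()` splits each requirements line with a single regex, and `dependency_check()` compares versions as lists of ints rather than strings (string comparison would order `"2.0.10"` before `"2.0.9"`). A small sketch using the same regex in isolation (`parse_requirement`/`version_tuple` are illustrative helper names, not part of the module):

```python
import re

# The same regex load_dependencies() uses: name, lower-bound operator and
# version, then an optional upper-bound operator and version.
REQ = re.compile(r'(.*?)([<=>\s]+)([\d\.]+),?\s?([<=>\s]+)?([\d\.]+)?')


def parse_requirement(line):
    m = REQ.match(line.strip())
    return m.group(1), m.group(2).strip(), m.group(3), m.group(4), m.group(5)


def version_tuple(ver):
    # dependency_check() compares versions component-wise as ints;
    # non-numeric parts (e.g. 'b1') collapse to 0.
    return [int(x) if x.isnumeric() else 0 for x in ver.split('.')]


name, op, low, op2, high = parse_requirement("Flask>=1.0.2,<2.1.0")
print(name, op, low, op2, high)                         # Flask >= 1.0.2 < 2.1.0
print(version_tuple("2.0.10") > version_tuple("2.0.9"))  # True; as strings it is False
```

The integer-list comparison is the reason the module tolerates versions of different lengths: Python compares lists element by element and falls back to length.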
@@ -0,0 +1,63 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2024 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from uuid import uuid4
import os

from .file_helper import get_temp_dir
from .subproc_wrapper import process_open
from . import logger, config
from .constants import SUPPORTED_CALIBRE_BINARIES

log = logger.create()


def do_calibre_export(book_id, book_format):
    try:
        quotes = [3, 5, 7, 9]
        tmp_dir = get_temp_dir()
        calibredb_binarypath = get_calibre_binarypath("calibredb")
        temp_file_name = str(uuid4())
        my_env = os.environ.copy()
        if config.config_calibre_split:
            my_env['CALIBRE_OVERRIDE_DATABASE_PATH'] = os.path.join(config.config_calibre_dir, "metadata.db")
            library_path = config.config_calibre_split_dir
        else:
            library_path = config.config_calibre_dir
        opf_command = [calibredb_binarypath, 'export', '--dont-write-opf', '--with-library', library_path,
                       '--to-dir', tmp_dir, '--formats', book_format, "--template", "{}".format(temp_file_name),
                       str(book_id)]
        p = process_open(opf_command, quotes, my_env)
        _, err = p.communicate()
        if err:
            log.error('Metadata embedder encountered an error: %s', err)
        return tmp_dir, temp_file_name
    except OSError as ex:
        # ToDo real error handling
        log.error_or_exception(ex)
        return None, None


def get_calibre_binarypath(binary):
    binariesdir = config.config_binariesdir
    if binariesdir:
        try:
            return os.path.join(binariesdir, SUPPORTED_CALIBRE_BINARIES[binary])
        except KeyError as ex:
            # log the key itself; indexing SUPPORTED_CALIBRE_BINARIES here
            # would raise KeyError a second time
            log.error("Binary not supported by Calibre-Web: %s", binary)
    return ""
215 cps/epub.py
@@ -1,4 +1,3 @@
-#!/usr/bin/env python
 # -*- coding: utf-8 -*-

 # This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
@@ -17,25 +16,52 @@
 # You should have received a copy of the GNU General Public License
 # along with this program. If not, see <http://www.gnu.org/licenses/>.

+import os
 import zipfile
 from lxml import etree
-import os

-import uploader
-import isoLanguages
+from . import isoLanguages, cover
+from . import config, logger
+from .helper import split_authors
+from .epub_helper import get_content_opf, default_ns
+from .constants import BookMeta
+
+log = logger.create()


-def extractCover(zipFile, coverFile, coverpath, tmp_file_name):
-    if coverFile is None:
+def _extract_cover(zip_file, cover_file, cover_path, tmp_file_name):
+    if cover_file is None:
         return None
-    else:
-        zipCoverPath = os.path.join(coverpath, coverFile).replace('\\', '/')
-        cf = zipFile.read(zipCoverPath)
-        prefix = os.path.splitext(tmp_file_name)[0]
-        tmp_cover_name = prefix + '.' + os.path.basename(zipCoverPath)
-        image = open(tmp_cover_name, 'wb')
-        image.write(cf)
-        image.close()
-        return tmp_cover_name
+
+    cf = extension = None
+    zip_cover_path = os.path.join(cover_path, cover_file).replace('\\', '/')
+
+    prefix = os.path.splitext(tmp_file_name)[0]
+    tmp_cover_name = prefix + '.' + os.path.basename(zip_cover_path)
+    ext = os.path.splitext(tmp_cover_name)
+    if len(ext) > 1:
+        extension = ext[1].lower()
+    if extension in cover.COVER_EXTENSIONS:
+        cf = zip_file.read(zip_cover_path)
+    return cover.cover_processing(tmp_file_name, cf, extension)
+
+
+def get_epub_layout(book, book_data):
+    file_path = os.path.normpath(os.path.join(config.get_book_path(),
+                                              book.path, book_data.name + "." + book_data.format.lower()))
+
+    try:
+        tree, __ = get_content_opf(file_path, default_ns)
+        p = tree.xpath('/pkg:package/pkg:metadata', namespaces=default_ns)[0]
+
+        layout = p.xpath('pkg:meta[@property="rendition:layout"]/text()', namespaces=default_ns)
+    except (etree.XMLSyntaxError, KeyError, IndexError, OSError) as e:
+        log.error("Could not parse epub metadata of book {} during kobo sync: {}".format(book.id, e))
+        layout = []
+
+    if len(layout) == 0:
+        return None
+    else:
+        return layout[0]


 def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
@@ -45,48 +71,124 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
         'dc': 'http://purl.org/dc/elements/1.1/'
     }

-    epubZip = zipfile.ZipFile(tmp_file_path)
-
-    txt = epubZip.read('META-INF/container.xml')
-    tree = etree.fromstring(txt)
-    cfname = tree.xpath('n:rootfiles/n:rootfile/@full-path', namespaces=ns)[0]
-    cf = epubZip.read(cfname)
-    tree = etree.fromstring(cf)
-
-    coverpath = os.path.dirname(cfname)
+    tree, cf_name = get_content_opf(tmp_file_path, ns)
+
+    cover_path = os.path.dirname(cf_name)

     p = tree.xpath('/pkg:package/pkg:metadata', namespaces=ns)[0]

     epub_metadata = {}

-    for s in ['title', 'description', 'creator', 'language', 'subject']:
+    for s in ['title', 'description', 'creator', 'language', 'subject', 'publisher', 'date']:
         tmp = p.xpath('dc:%s/text()' % s, namespaces=ns)
         if len(tmp) > 0:
-            epub_metadata[s] = p.xpath('dc:%s/text()' % s, namespaces=ns)[0]
+            if s == 'creator':
+                epub_metadata[s] = ' & '.join(split_authors(tmp))
+            elif s == 'subject':
+                epub_metadata[s] = ', '.join(tmp)
+            elif s == 'date':
+                epub_metadata[s] = tmp[0][:10]
+            else:
+                epub_metadata[s] = tmp[0].strip()
         else:
-            epub_metadata[s] = "Unknown"
+            epub_metadata[s] = 'Unknown'

-    if epub_metadata['subject'] == "Unknown":
+    if epub_metadata['subject'] == 'Unknown':
         epub_metadata['subject'] = ''

-    if epub_metadata['description'] == "Unknown":
+    if epub_metadata['publisher'] == 'Unknown':
+        epub_metadata['publisher'] = ''
+
+    if epub_metadata['date'] == 'Unknown':
+        epub_metadata['date'] = ''
+
+    if epub_metadata['description'] == 'Unknown':
         description = tree.xpath("//*[local-name() = 'description']/text()")
         if len(description) > 0:
             epub_metadata['description'] = description
         else:
             epub_metadata['description'] = ""

-    if epub_metadata['language'] == "Unknown":
-        epub_metadata['language'] = ""
-    else:
-        lang = epub_metadata['language'].split('-', 1)[0].lower()
-        if len(lang) == 2:
-            epub_metadata['language'] = isoLanguages.get(part1=lang).name
-        elif len(lang) == 3:
-            epub_metadata['language'] = isoLanguages.get(part3=lang).name
-        else:
-            epub_metadata['language'] = ""
+    lang = epub_metadata['language'].split('-', 1)[0].lower()
+    epub_metadata['language'] = isoLanguages.get_lang3(lang)
+
+    epub_metadata = parse_epub_series(ns, tree, epub_metadata)
+
+    epub_zip = zipfile.ZipFile(tmp_file_path)
+    cover_file = parse_epub_cover(ns, tree, epub_zip, cover_path, tmp_file_path)
+
+    identifiers = []
+    for node in p.xpath('dc:identifier', namespaces=ns):
+        try:
+            identifier_name = node.attrib.values()[-1]
+        except IndexError:
+            continue
+        identifier_value = node.text
+        if identifier_name in ('uuid', 'calibre') or identifier_value is None:
+            continue
+        identifiers.append([identifier_name, identifier_value])
+
+    if not epub_metadata['title']:
+        title = original_file_name
+    else:
+        title = epub_metadata['title']
+
+    return BookMeta(
+        file_path=tmp_file_path,
+        extension=original_file_extension,
+        title=title.encode('utf-8').decode('utf-8'),
+        author=epub_metadata['creator'].encode('utf-8').decode('utf-8'),
+        cover=cover_file,
+        description=epub_metadata['description'],
+        tags=epub_metadata['subject'].encode('utf-8').decode('utf-8'),
+        series=epub_metadata['series'].encode('utf-8').decode('utf-8'),
+        series_id=epub_metadata['series_id'].encode('utf-8').decode('utf-8'),
+        languages=epub_metadata['language'],
+        publisher=epub_metadata['publisher'].encode('utf-8').decode('utf-8'),
+        pubdate=epub_metadata['date'],
+        identifiers=identifiers)
+
+
+def parse_epub_cover(ns, tree, epub_zip, cover_path, tmp_file_path):
+    cover_section = tree.xpath("/pkg:package/pkg:manifest/pkg:item[@id='cover-image']/@href", namespaces=ns)
+    for cs in cover_section:
+        cover_file = _extract_cover(epub_zip, cs, cover_path, tmp_file_path)
+        if cover_file:
+            return cover_file
+
+    meta_cover = tree.xpath("/pkg:package/pkg:metadata/pkg:meta[@name='cover']/@content", namespaces=ns)
+    if len(meta_cover) > 0:
+        cover_section = tree.xpath(
+            "/pkg:package/pkg:manifest/pkg:item[@id='" + meta_cover[0] + "']/@href", namespaces=ns)
+        if not cover_section:
+            cover_section = tree.xpath(
+                "/pkg:package/pkg:manifest/pkg:item[@properties='" + meta_cover[0] + "']/@href", namespaces=ns)
+    else:
+        cover_section = tree.xpath("/pkg:package/pkg:guide/pkg:reference/@href", namespaces=ns)
+
+    cover_file = None
+    for cs in cover_section:
+        if cs.endswith('.xhtml') or cs.endswith('.html'):
+            markup = epub_zip.read(os.path.join(cover_path, cs))
+            markup_tree = etree.fromstring(markup)
+            # no matter xhtml or html with no namespace
+            img_src = markup_tree.xpath("//*[local-name() = 'img']/@src")
+            # Alternative image source
+            if not len(img_src):
+                img_src = markup_tree.xpath("//attribute::*[contains(local-name(), 'href')]")
+            if len(img_src):
+                # img_src maybe start with "../" so fullpath join then relpath to cwd
+                filename = os.path.relpath(os.path.join(os.path.dirname(os.path.join(cover_path, cover_section[0])),
+                                                        img_src[0]))
+                cover_file = _extract_cover(epub_zip, filename, "", tmp_file_path)
+        else:
+            cover_file = _extract_cover(epub_zip, cs, cover_path, tmp_file_path)
+        if cover_file:
+            break
+    return cover_file
+
+
+def parse_epub_series(ns, tree, epub_metadata):
     series = tree.xpath("/pkg:package/pkg:metadata/pkg:meta[@name='calibre:series']/@content", namespaces=ns)
     if len(series) > 0:
         epub_metadata['series'] = series[0]
@@ -98,41 +200,4 @@ def get_epub_info(tmp_file_path, original_file_name, original_file_extension):
         epub_metadata['series_id'] = series_id[0]
     else:
         epub_metadata['series_id'] = '1'
-
-    coversection = tree.xpath("/pkg:package/pkg:manifest/pkg:item[@id='cover-image']/@href", namespaces=ns)
-    coverfile = None
-    if len(coversection) > 0:
-        coverfile = extractCover(epubZip, coversection[0], coverpath, tmp_file_path)
-    else:
-        meta_cover = tree.xpath("/pkg:package/pkg:metadata/pkg:meta[@name='cover']/@content", namespaces=ns)
-        if len(meta_cover) > 0:
-            coversection = tree.xpath("/pkg:package/pkg:manifest/pkg:item[@id='"+meta_cover[0]+"']/@href", namespaces=ns)
-    if len(coversection) > 0:
-        filetype = coversection[0].rsplit('.', 1)[-1]
-        if filetype == "xhtml" or filetype == "html":  # if cover is (x)html format
-            markup = epubZip.read(os.path.join(coverpath, coversection[0]))
-            markupTree = etree.fromstring(markup)
-            # no matter xhtml or html with no namespace
-            imgsrc = markupTree.xpath("//*[local-name() = 'img']/@src")
-            # imgsrc maybe startwith "../" so fullpath join then relpath to cwd
-            filename = os.path.relpath(os.path.join(os.path.dirname(os.path.join(coverpath, coversection[0])), imgsrc[0]))
-            coverfile = extractCover(epubZip, filename, "", tmp_file_path)
-        else:
-            coverfile = extractCover(epubZip, coversection[0], coverpath, tmp_file_path)
-
-    if not epub_metadata['title']:
-        title = original_file_name
-    else:
-        title = epub_metadata['title']
-
-    return uploader.BookMeta(
-        file_path=tmp_file_path,
-        extension=original_file_extension,
-        title=title.encode('utf-8').decode('utf-8'),
-        author=epub_metadata['creator'].encode('utf-8').decode('utf-8'),
-        cover=coverfile,
-        description=epub_metadata['description'],
-        tags=epub_metadata['subject'].encode('utf-8').decode('utf-8'),
-        series=epub_metadata['series'].encode('utf-8').decode('utf-8'),
-        series_id=epub_metadata['series_id'].encode('utf-8').decode('utf-8'),
-        languages=epub_metadata['language'])
+    return epub_metadata
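Both the old and new `get_epub_info()` resolve the OPF file through `META-INF/container.xml` before reading any metadata. A simplified self-contained sketch of that lookup using the standard library's `xml.etree` instead of lxml (and a tiny in-memory epub, so the file contents here are made up for the demo):

```python
import io
import zipfile
import xml.etree.ElementTree as ET

NS = {'n': 'urn:oasis:names:tc:opendocument:xmlns:container',
      'pkg': 'http://www.idpf.org/2007/opf',
      'dc': 'http://purl.org/dc/elements/1.1/'}

# Build a minimal epub-like zip in memory (demo data, not a real book).
container = ('<?xml version="1.0"?>'
             '<container xmlns="urn:oasis:names:tc:opendocument:xmlns:container">'
             '<rootfiles><rootfile full-path="OEBPS/content.opf"/></rootfiles>'
             '</container>')
opf = ('<?xml version="1.0"?>'
       '<package xmlns="http://www.idpf.org/2007/opf">'
       '<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">'
       '<dc:title>Example Book</dc:title></metadata></package>')
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('META-INF/container.xml', container)
    zf.writestr('OEBPS/content.opf', opf)

# Locate the OPF via container.xml, then read metadata from it.
with zipfile.ZipFile(buf) as zf:
    root = ET.fromstring(zf.read('META-INF/container.xml'))
    opf_path = root.find('n:rootfiles/n:rootfile', NS).attrib['full-path']
    pkg = ET.fromstring(zf.read(opf_path))
    title = pkg.find('pkg:metadata/dc:title', NS).text
print(opf_path, '->', title)  # OEBPS/content.opf -> Example Book
```

The indirection matters because the OPF can live anywhere inside the archive; the directory of `full-path` is also what the real code keeps as `cover_path` for resolving relative cover hrefs.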
@@ -0,0 +1,166 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018 lemmsh, Kennyl, Kyosfonica, matthazinski
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import zipfile
from lxml import etree

from . import isoLanguages

default_ns = {
    'n': 'urn:oasis:names:tc:opendocument:xmlns:container',
    'pkg': 'http://www.idpf.org/2007/opf',
}

OPF_NAMESPACE = "http://www.idpf.org/2007/opf"
PURL_NAMESPACE = "http://purl.org/dc/elements/1.1/"

OPF = "{%s}" % OPF_NAMESPACE
PURL = "{%s}" % PURL_NAMESPACE

etree.register_namespace("opf", OPF_NAMESPACE)
etree.register_namespace("dc", PURL_NAMESPACE)

OPF_NS = {None: OPF_NAMESPACE}  # the default namespace (no prefix)
NSMAP = {'dc': PURL_NAMESPACE, 'opf': OPF_NAMESPACE}


def updateEpub(src, dest, filename, data):
    # create a temp copy of the archive without filename
    with zipfile.ZipFile(src, 'r') as zin:
        with zipfile.ZipFile(dest, 'w') as zout:
            zout.comment = zin.comment  # preserve the comment
            for item in zin.infolist():
                if item.filename != filename:
                    zout.writestr(item, zin.read(item.filename))

    # now add filename with its new data
    with zipfile.ZipFile(dest, mode='a', compression=zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(filename, data)


def get_content_opf(file_path, ns=default_ns):
    epubZip = zipfile.ZipFile(file_path)
    txt = epubZip.read('META-INF/container.xml')
    tree = etree.fromstring(txt)
    cf_name = tree.xpath('n:rootfiles/n:rootfile/@full-path', namespaces=ns)[0]
    cf = epubZip.read(cf_name)

    return etree.fromstring(cf), cf_name


def create_new_metadata_backup(book, custom_columns, export_language, translated_cover_name, lang_type=3):
    # generate root package element
    package = etree.Element(OPF + "package", nsmap=OPF_NS)
    package.set("unique-identifier", "uuid_id")
    package.set("version", "2.0")

    # generate metadata element and all sub elements of it
    metadata = etree.SubElement(package, "metadata", nsmap=NSMAP)
    identifier = etree.SubElement(metadata, PURL + "identifier", id="calibre_id", nsmap=NSMAP)
    identifier.set(OPF + "scheme", "calibre")
    identifier.text = str(book.id)
    identifier2 = etree.SubElement(metadata, PURL + "identifier", id="uuid_id", nsmap=NSMAP)
    identifier2.set(OPF + "scheme", "uuid")
    identifier2.text = book.uuid
    for i in book.identifiers:
        identifier = etree.SubElement(metadata, PURL + "identifier", nsmap=NSMAP)
        identifier.set(OPF + "scheme", i.format_type())
        identifier.text = str(i.val)
    title = etree.SubElement(metadata, PURL + "title", nsmap=NSMAP)
    title.text = book.title
    for author in book.authors:
        creator = etree.SubElement(metadata, PURL + "creator", nsmap=NSMAP)
        creator.text = str(author.name)
        creator.set(OPF + "file-as", book.author_sort)  # ToDo Check
        creator.set(OPF + "role", "aut")
    contributor = etree.SubElement(metadata, PURL + "contributor", nsmap=NSMAP)
    contributor.text = "calibre (5.7.2) [https://calibre-ebook.com]"
    contributor.set(OPF + "file-as", "calibre")  # ToDo Check
    contributor.set(OPF + "role", "bkp")

    date = etree.SubElement(metadata, PURL + "date", nsmap=NSMAP)
    date.text = '{d.year:04}-{d.month:02}-{d.day:02}T{d.hour:02}:{d.minute:02}:{d.second:02}'.format(d=book.pubdate)
    if book.comments and book.comments[0].text:
        for b in book.comments:
            description = etree.SubElement(metadata, PURL + "description", nsmap=NSMAP)
            description.text = b.text
    for b in book.publishers:
        publisher = etree.SubElement(metadata, PURL + "publisher", nsmap=NSMAP)
        publisher.text = str(b.name)
    if not book.languages:
        language = etree.SubElement(metadata, PURL + "language", nsmap=NSMAP)
        language.text = export_language
    else:
        for b in book.languages:
            language = etree.SubElement(metadata, PURL + "language", nsmap=NSMAP)
            language.text = str(b.lang_code) if lang_type == 3 else isoLanguages.get(part3=b.lang_code).part1
    for b in book.tags:
        subject = etree.SubElement(metadata, PURL + "subject", nsmap=NSMAP)
        subject.text = str(b.name)
    etree.SubElement(metadata, "meta", name="calibre:author_link_map",
                     content="{" + ", ".join(['"' + str(a.name) + '": ""' for a in book.authors]) + "}",
                     nsmap=NSMAP)
    for b in book.series:
        etree.SubElement(metadata, "meta", name="calibre:series",
                         content=str(b.name),
                         nsmap=NSMAP)
    if book.series:
        etree.SubElement(metadata, "meta", name="calibre:series_index",
                         content=str(book.series_index),
                         nsmap=NSMAP)
    if len(book.ratings) and book.ratings[0].rating > 0:
        etree.SubElement(metadata, "meta", name="calibre:rating",
                         content=str(book.ratings[0].rating),
                         nsmap=NSMAP)
    etree.SubElement(metadata, "meta", name="calibre:timestamp",
                     content='{d.year:04}-{d.month:02}-{d.day:02}T{d.hour:02}:{d.minute:02}:{d.second:02}'.format(
                         d=book.timestamp),
                     nsmap=NSMAP)
    etree.SubElement(metadata, "meta", name="calibre:title_sort",
                     content=book.sort,
                     nsmap=NSMAP)
    sequence = 0
    for cc in custom_columns:
        value = None
        extra = None
        cc_entry = getattr(book, "custom_column_" + str(cc.id))
        if len(cc_entry):
            value = [c.value for c in cc_entry] if cc.is_multiple else cc_entry[0].value
            extra = cc_entry[0].extra if hasattr(cc_entry[0], "extra") else None
        etree.SubElement(metadata, "meta", name="calibre:user_metadata:#{}".format(cc.label),
                         content=cc.to_json(value, extra, sequence),
                         nsmap=NSMAP)
        sequence += 1

    # generate guide element and all sub elements of it
    # Title is translated from default export language
    guide = etree.SubElement(package, "guide")
    etree.SubElement(guide, "reference", type="cover", title=translated_cover_name, href="cover.jpg")

    return package


def replace_metadata(tree, package):
    rep_element = tree.xpath('/pkg:package/pkg:metadata', namespaces=default_ns)[0]
    new_element = package.xpath('//metadata', namespaces=default_ns)[0]
    tree.replace(rep_element, new_element)
    return etree.tostring(tree,
                          xml_declaration=True,
                          encoding='utf-8',
                          pretty_print=True).decode('utf-8')
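The file above builds namespaced elements with Clark notation (`{namespace-uri}tag`). The same technique works in the standard library's ElementTree; a minimal sketch with made-up values (only the namespace URIs are taken from the file):

```python
import xml.etree.ElementTree as ET

OPF = "http://www.idpf.org/2007/opf"
PURL = "http://purl.org/dc/elements/1.1/"

# Registered prefixes control how the serializer names the namespaces.
ET.register_namespace("opf", OPF)
ET.register_namespace("dc", PURL)

package = ET.Element("{%s}package" % OPF, {"unique-identifier": "uuid_id", "version": "2.0"})
metadata = ET.SubElement(package, "{%s}metadata" % OPF)
title = ET.SubElement(metadata, "{%s}title" % PURL)
title.text = "Example Book"

xml = ET.tostring(package, encoding="unicode")
print(xml)
```

The serializer emits the registered `opf:`/`dc:` prefixes, so the `dc:title` child comes out as `<dc:title>Example Book</dc:title>` inside the package element.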
@@ -0,0 +1,71 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2020 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import traceback

from flask import render_template
from werkzeug.exceptions import default_exceptions
try:
    from werkzeug.exceptions import FailedDependency
except ImportError:
    from werkzeug.exceptions import UnprocessableEntity as FailedDependency

from . import config, app, logger, services


log = logger.create()


# custom error page
def error_http(error):
    return render_template('http_error.html',
                           error_code="Error {0}".format(error.code),
                           error_name=error.name,
                           issue=False,
                           unconfigured=not config.db_configured,
                           instance=config.config_calibre_web_title
                           ), error.code


def internal_error(error):
    return render_template('http_error.html',
                           error_code="500 Internal Server Error",
                           error_name='The server encountered an internal error and was unable to complete your '
                                      'request. There is an error in the application.',
                           issue=True,
                           unconfigured=False,
                           error_stack=traceback.format_exc().split("\n"),
                           instance=config.config_calibre_web_title
                           ), 500


def init_errorhandler():
    # http error handling
    for ex in default_exceptions:
        if ex < 500:
            app.register_error_handler(ex, error_http)
        elif ex == 500:
            app.register_error_handler(ex, internal_error)

    if services.ldap:
        # Only way of catching the LDAPException upon logging in with LDAP server down
        @app.errorhandler(services.ldap.LDAPException)
        # pylint: disable=unused-variable
        def handle_exception(e):
            log.debug('LDAP server not accessible while trying to login to opds feed')
            return error_http(FailedDependency())
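`init_errorhandler` walks werkzeug's `default_exceptions` and routes the 4xx codes to one handler and 500 to another. The dispatch idea, sketched without Flask (all names here are illustrative stand-ins, not the real werkzeug/Flask API):

```python
# Hypothetical stand-in for werkzeug's default_exceptions keys.
default_exceptions = [400, 401, 403, 404, 422, 500]

handlers = {}

def register_error_handler(code, func):
    handlers[code] = func

def error_http(code):
    return "custom page for {0}".format(code), code

def internal_error(code):
    return "internal error page", 500

# Mirror init_errorhandler: 4xx share the generic page, 500 gets its own.
for ex in default_exceptions:
    if ex < 500:
        register_error_handler(ex, error_http)
    elif ex == 500:
        register_error_handler(ex, internal_error)

print(handlers[404](404))  # ('custom page for 404', 404)
```

Codes above 500 (502, 503, ...) deliberately fall through unregistered, exactly as in the loop above.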
47  cps/fb2.py
@@ -1,4 +1,3 @@
-#!/usr/bin/env python
 # -*- coding: utf-8 -*-
 
 # This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
@@ -18,7 +17,8 @@
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
 
 from lxml import etree
-import uploader
+
+from .constants import BookMeta
 
 
 def get_fb2_info(tmp_file_path, original_file_extension):
@@ -28,52 +28,55 @@ def get_fb2_info(tmp_file_path, original_file_extension):
         'l': 'http://www.w3.org/1999/xlink',
     }
 
-    fb2_file = open(tmp_file_path)
-    tree = etree.fromstring(fb2_file.read())
+    fb2_file = open(tmp_file_path, encoding="utf-8")
+    tree = etree.fromstring(fb2_file.read().encode())
 
     authors = tree.xpath('/fb:FictionBook/fb:description/fb:title-info/fb:author', namespaces=ns)
 
     def get_author(element):
         last_name = element.xpath('fb:last-name/text()', namespaces=ns)
         if len(last_name):
-            last_name = last_name[0].encode('utf-8')
+            last_name = last_name[0]
         else:
-            last_name = u''
+            last_name = ''
         middle_name = element.xpath('fb:middle-name/text()', namespaces=ns)
         if len(middle_name):
-            middle_name = middle_name[0].encode('utf-8')
+            middle_name = middle_name[0]
         else:
-            middle_name = u''
+            middle_name = ''
         first_name = element.xpath('fb:first-name/text()', namespaces=ns)
         if len(first_name):
-            first_name = first_name[0].encode('utf-8')
+            first_name = first_name[0]
         else:
-            first_name = u''
-        return (first_name.decode('utf-8') + u' '
-                + middle_name.decode('utf-8') + u' '
-                + last_name.decode('utf-8')).encode('utf-8')
+            first_name = ''
+        return (first_name + ' '
+                + middle_name + ' '
+                + last_name)
 
     author = str(", ".join(map(get_author, authors)))
 
     title = tree.xpath('/fb:FictionBook/fb:description/fb:title-info/fb:book-title/text()', namespaces=ns)
     if len(title):
-        title = str(title[0].encode('utf-8'))
+        title = str(title[0])
     else:
-        title = u''
+        title = ''
     description = tree.xpath('/fb:FictionBook/fb:description/fb:publish-info/fb:book-name/text()', namespaces=ns)
     if len(description):
-        description = str(description[0].encode('utf-8'))
+        description = str(description[0])
    else:
-        description = u''
+        description = ''
 
-    return uploader.BookMeta(
+    return BookMeta(
         file_path=tmp_file_path,
         extension=original_file_extension,
-        title=title.decode('utf-8'),
-        author=author.decode('utf-8'),
+        title=title,
+        author=author,
         cover=None,
-        description=description.decode('utf-8'),
+        description=description,
         tags="",
         series="",
         series_id="",
-        languages="")
+        languages="",
+        publisher="",
+        pubdate="",
+        identifiers=[])
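`get_author` in this diff assembles a display name from the `first-name`/`middle-name`/`last-name` children of an FB2 `<author>` element. The same extraction in a self-contained sketch (standard-library ElementTree; the FictionBook namespace URI is the customary one and should be treated as an assumption, as it is not shown in this hunk):

```python
import xml.etree.ElementTree as ET

ns = {'fb': 'http://www.gribuser.ru/xml/fictionbook/2.0'}

# Minimal FB2 fragment for illustration; not a complete book file.
doc = """<FictionBook xmlns="http://www.gribuser.ru/xml/fictionbook/2.0">
  <description><title-info>
    <author>
      <first-name>Leo</first-name>
      <last-name>Tolstoy</last-name>
    </author>
  </title-info></description>
</FictionBook>"""

root = ET.fromstring(doc)
author = root.find('fb:description/fb:title-info/fb:author', ns)

def text_of(el, tag):
    # Missing parts (here: middle-name) fall back to '' like in the diff.
    node = el.find('fb:' + tag, ns)
    return node.text if node is not None else ''

full = ' '.join(p for p in (text_of(author, 'first-name'),
                            text_of(author, 'middle-name'),
                            text_of(author, 'last-name')) if p)
print(full)  # Leo Tolstoy
```

Filtering out empty parts avoids the doubled spaces the original concatenation produces when a middle name is absent.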
@@ -0,0 +1,32 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2023 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from tempfile import gettempdir
import os
import shutil


def get_temp_dir():
    tmp_dir = os.path.join(gettempdir(), 'calibre_web')
    if not os.path.isdir(tmp_dir):
        os.mkdir(tmp_dir)
    return tmp_dir


def del_temp_dir():
    tmp_dir = os.path.join(gettempdir(), 'calibre_web')
    shutil.rmtree(tmp_dir)
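`get_temp_dir` checks `isdir` before `mkdir`; `os.makedirs(..., exist_ok=True)` collapses both steps and tolerates another process creating the directory between check and create. A sketch of that variant (demo directory name, not the real `calibre_web` one):

```python
import os
import shutil
import tempfile

def get_scratch_dir():
    # Same idea as get_temp_dir above, but exist_ok=True removes the
    # isdir() check and the check-then-create race with it.
    tmp_dir = os.path.join(tempfile.gettempdir(), 'calibre_web_demo')
    os.makedirs(tmp_dir, exist_ok=True)
    return tmp_dir

d = get_scratch_dir()
print(os.path.isdir(d))  # True
shutil.rmtree(d)
```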
@@ -0,0 +1,95 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2020 mmonkey
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from . import logger
from .constants import CACHE_DIR
from os import makedirs, remove
from os.path import isdir, isfile, join
from shutil import rmtree


class FileSystem:
    _instance = None
    _cache_dir = CACHE_DIR

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super(FileSystem, cls).__new__(cls)
            cls.log = logger.create()
        return cls._instance

    def get_cache_dir(self, cache_type=None):
        if not isdir(self._cache_dir):
            try:
                makedirs(self._cache_dir)
            except OSError:
                self.log.info(f'Failed to create path {self._cache_dir} (Permission denied).')
                raise

        # cache_type may be None, so only join when it is set
        path = join(self._cache_dir, cache_type) if cache_type else self._cache_dir
        if cache_type and not isdir(path):
            try:
                makedirs(path)
            except OSError:
                self.log.info(f'Failed to create path {path} (Permission denied).')
                raise

        return path if cache_type else self._cache_dir

    def get_cache_file_dir(self, filename, cache_type=None):
        path = join(self.get_cache_dir(cache_type), filename[:2])
        if not isdir(path):
            try:
                makedirs(path)
            except OSError:
                self.log.info(f'Failed to create path {path} (Permission denied).')
                raise

        return path

    def get_cache_file_path(self, filename, cache_type=None):
        return join(self.get_cache_file_dir(filename, cache_type), filename) if filename else None

    def get_cache_file_exists(self, filename, cache_type=None):
        path = self.get_cache_file_path(filename, cache_type)
        return isfile(path)

    def delete_cache_dir(self, cache_type=None):
        if not cache_type and isdir(self._cache_dir):
            try:
                rmtree(self._cache_dir)
            except OSError:
                self.log.info(f'Failed to delete path {self._cache_dir} (Permission denied).')
                raise

        if cache_type:
            path = join(self._cache_dir, cache_type)
            if isdir(path):
                try:
                    rmtree(path)
                except OSError:
                    self.log.info(f'Failed to delete path {path} (Permission denied).')
                    raise

    def delete_cache_file(self, filename, cache_type=None):
        path = self.get_cache_file_path(filename, cache_type)
        if isfile(path):
            try:
                remove(path)
            except OSError:
                self.log.info(f'Failed to delete path {path} (Permission denied).')
                raise
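`get_cache_file_dir` shards cache files into buckets named after the first two characters of the filename, which keeps any single directory small when many thumbnails accumulate. The path construction, isolated into a minimal sketch (function name is illustrative):

```python
from os.path import join

def cache_bucket_path(cache_root, filename):
    # Two-character prefix bucket, as in FileSystem.get_cache_file_dir:
    # 'ab12cd.jpg' lands under '<cache_root>/ab/'.
    return join(cache_root, filename[:2], filename)

print(cache_bucket_path('thumbnails', 'ab12cd.jpg'))
```

With hash-like filenames the prefixes are roughly uniform, so the files spread evenly over at most a few hundred buckets.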
@@ -0,0 +1,156 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 OzzieIsaacs, cervinko, jkrehm, bodybybuddha, ok11,
#                         andy29485, idalin, Kyosfonica, wuqi, Kennyl, lemmsh,
#                         falgh1, grunjol, csitko, ytils, xybydy, trasba, vrabe,
#                         ruben-herold, marblepebble, JackED42, SiphonSquirrel,
#                         apetresc, nanu-c, mutschler
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import os
import hashlib
import json
from uuid import uuid4
from time import time
from shutil import move, copyfile

from flask import Blueprint, flash, request, redirect, url_for, abort
from flask_babel import gettext as _
from flask_login import login_required

from . import logger, gdriveutils, config, ub, calibre_db, csrf
from .admin import admin_required
from .file_helper import get_temp_dir

gdrive = Blueprint('gdrive', __name__, url_prefix='/gdrive')
log = logger.create()

try:
    from googleapiclient.errors import HttpError
except ImportError as err:
    log.debug("Cannot import googleapiclient, using GDrive will not work: %s", err)

current_milli_time = lambda: int(round(time() * 1000))

gdrive_watch_callback_token = 'target=calibreweb-watch_files'  # nosec


@gdrive.route("/authenticate")
@login_required
@admin_required
def authenticate_google_drive():
    try:
        authUrl = gdriveutils.Gauth.Instance().auth.GetAuthUrl()
    except gdriveutils.InvalidConfigError:
        flash(_('Google Drive setup not completed, try to deactivate and activate Google Drive again'),
              category="error")
        return redirect(url_for('web.index'))
    return redirect(authUrl)


@gdrive.route("/callback")
def google_drive_callback():
    auth_code = request.args.get('code')
    if not auth_code:
        abort(403)
    try:
        credentials = gdriveutils.Gauth.Instance().auth.flow.step2_exchange(auth_code)
        with open(gdriveutils.CREDENTIALS, 'w') as f:
            f.write(credentials.to_json())
    except (ValueError, AttributeError) as error:
        log.error(error)
    return redirect(url_for('admin.db_configuration'))


@gdrive.route("/watch/subscribe")
@login_required
@admin_required
def watch_gdrive():
    if not config.config_google_drive_watch_changes_response:
        with open(gdriveutils.CLIENT_SECRETS, 'r') as settings:
            filedata = json.load(settings)
        address = filedata['web']['redirect_uris'][0].rstrip('/').replace('/gdrive/callback', '/gdrive/watch/callback')
        notification_id = str(uuid4())
        try:
            result = gdriveutils.watchChange(gdriveutils.Gdrive.Instance().drive, notification_id,
                                             'web_hook', address, gdrive_watch_callback_token,
                                             current_milli_time() + 604800*1000)
            config.config_google_drive_watch_changes_response = result
            config.save()
        except HttpError as e:
            reason = json.loads(e.content)['error']['errors'][0]
            if reason['reason'] == 'push.webhookUrlUnauthorized':
                flash(_('Callback domain is not verified, '
                        'please follow steps to verify domain in google developer console'), category="error")
            else:
                flash(reason['message'], category="error")

    return redirect(url_for('admin.db_configuration'))


@gdrive.route("/watch/revoke")
@login_required
@admin_required
def revoke_watch_gdrive():
    last_watch_response = config.config_google_drive_watch_changes_response
    if last_watch_response:
        try:
            gdriveutils.stopChannel(gdriveutils.Gdrive.Instance().drive, last_watch_response['id'],
                                    last_watch_response['resourceId'])
        except (HttpError, AttributeError):
            pass
        config.config_google_drive_watch_changes_response = {}
        config.save()
    return redirect(url_for('admin.db_configuration'))


try:
    @csrf.exempt
    @gdrive.route("/watch/callback", methods=['GET', 'POST'])
    def on_received_watch_confirmation():
        if not config.config_google_drive_watch_changes_response:
            return ''
        if request.headers.get('X-Goog-Channel-Token') != gdrive_watch_callback_token \
                or request.headers.get('X-Goog-Resource-State') != 'change' \
                or not request.data:
            return ''

        log.debug('%r', request.headers)
        log.debug('%r', request.data)
        log.info('Change received from gdrive')

        try:
            j = json.loads(request.data)
            log.info('Getting change details')
            response = gdriveutils.getChangeById(gdriveutils.Gdrive.Instance().drive, j['id'])
            log.debug('%r', response)
            if response:
                dbpath = os.path.join(config.config_calibre_dir, "metadata.db").encode()
                if not response['deleted'] and response['file']['title'] == 'metadata.db' \
                        and response['file']['md5Checksum'] != hashlib.md5(dbpath):  # nosec
                    tmp_dir = get_temp_dir()

                    log.info('Database file updated')
                    copyfile(dbpath, os.path.join(tmp_dir, "metadata.db_" + str(current_milli_time())))
                    log.info('Backing up existing and downloading updated metadata.db')
                    gdriveutils.downloadFile(None, "metadata.db", os.path.join(tmp_dir, "tmp_metadata.db"))
                    log.info('Setting up new DB')
                    # prevent error on windows, as os.rename does on existing files, also allow cross hdd move
                    move(os.path.join(tmp_dir, "tmp_metadata.db"), dbpath)
                    calibre_db.reconnect_db(config, ub.app_DB_path)
        except Exception as ex:
            log.error_or_exception(ex)
        return ''
except AttributeError:
    pass
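The watch callback above compares Drive's `md5Checksum` against a local value; Drive reports that checksum over the file's *contents*, so a local comparison needs to hash the file's bytes. A standalone sketch of chunked content hashing (the helper name is mine, not part of the diff):

```python
import hashlib
import os
import tempfile

def file_md5(path, chunk_size=65536):
    # Hash the file's bytes in chunks so large files don't need to fit in memory.
    digest = hashlib.md5()  # nosec - content fingerprint, not a security use
    with open(path, 'rb') as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

# Quick demonstration against a throwaway file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'metadata')
checksum = file_md5(tmp.name)
os.remove(tmp.name)
print(checksum)
```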
@ -1,4 +1,3 @@
|
||||||
#!/usr/bin/env python
|
|
||||||
# -*- coding: utf-8 -*-
|
# -*- coding: utf-8 -*-
|
||||||
|
|
||||||
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
|
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
|
||||||
|
@ -17,24 +16,69 @@
|
||||||
# You should have received a copy of the GNU General Public License
|
# You should have received a copy of the GNU General Public License
|
||||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
try:
|
|
||||||
from pydrive.auth import GoogleAuth
|
|
||||||
from pydrive.drive import GoogleDrive
|
|
||||||
from pydrive.auth import RefreshError, InvalidConfigError
|
|
||||||
from apiclient import errors
|
|
||||||
gdrive_support = True
|
|
||||||
except ImportError:
|
|
||||||
gdrive_support = False
|
|
||||||
|
|
||||||
import os
|
import os
|
||||||
from ub import config
|
import json
|
||||||
import cli
|
|
||||||
import shutil
|
import shutil
|
||||||
|
import chardet
|
||||||
|
import ssl
|
||||||
|
|
||||||
from flask import Response, stream_with_context
|
from flask import Response, stream_with_context
|
||||||
from sqlalchemy import *
|
from sqlalchemy import create_engine
|
||||||
from sqlalchemy.ext.declarative import declarative_base
|
from sqlalchemy import Column, UniqueConstraint
|
||||||
from sqlalchemy.orm import *
|
from sqlalchemy import String, Integer
|
||||||
import web
|
from sqlalchemy.orm import sessionmaker, scoped_session
|
||||||
|
try:
|
||||||
|
# Compatibility with sqlalchemy 2.0
|
||||||
|
from sqlalchemy.orm import declarative_base
|
||||||
|
except ImportError:
|
||||||
|
from sqlalchemy.ext.declarative import declarative_base
|
||||||
|
from sqlalchemy.exc import OperationalError, InvalidRequestError, IntegrityError
|
||||||
|
from sqlalchemy.orm.exc import StaleDataError
|
||||||
|
|
||||||
|
try:
|
||||||
|
from httplib2 import __version__ as httplib2_version
|
||||||
|
except ImportError:
|
||||||
|
httplib2_version = "not installed"
|
||||||
|
|
||||||
|
try:
|
||||||
|
from apiclient import errors
|
||||||
|
from httplib2 import ServerNotFoundError
|
||||||
|
importError = None
|
||||||
|
gdrive_support = True
|
||||||
|
except ImportError as e:
|
||||||
|
importError = e
|
||||||
|
gdrive_support = False
|
||||||
|
try:
|
||||||
|
from pydrive2.auth import GoogleAuth
|
||||||
|
from pydrive2.drive import GoogleDrive
|
||||||
|
from pydrive2.auth import RefreshError
|
||||||
|
from pydrive2.files import ApiRequestError
|
||||||
|
except ImportError as err:
|
||||||
|
try:
|
||||||
|
from pydrive.auth import GoogleAuth
|
||||||
|
from pydrive.drive import GoogleDrive
|
||||||
|
from pydrive.auth import RefreshError
|
||||||
|
from pydrive.files import ApiRequestError
|
||||||
|
except ImportError as err:
|
||||||
|
importError = err
|
||||||
|
gdrive_support = False
|
||||||
|
|
||||||
|
from . import logger, cli_param, config
|
||||||
|
from .constants import CONFIG_DIR as _CONFIG_DIR
|
||||||
|
|
||||||
|
|
||||||
|
SETTINGS_YAML = os.path.join(_CONFIG_DIR, 'settings.yaml')
|
||||||
|
CREDENTIALS = os.path.join(_CONFIG_DIR, 'gdrive_credentials')
|
||||||
|
CLIENT_SECRETS = os.path.join(_CONFIG_DIR, 'client_secrets.json')
|
||||||
|
|
||||||
|
log = logger.create()
|
||||||
|
if gdrive_support:
|
||||||
|
logger.get('googleapiclient.discovery_cache').setLevel(logger.logging.ERROR)
|
||||||
|
if not logger.is_debug_enabled():
|
||||||
|
logger.get('googleapiclient.discovery').setLevel(logger.logging.ERROR)
|
||||||
|
else:
|
||||||
|
log.debug("Cannot import pydrive, httplib2, using gdrive will not work: {}".format(importError))
|
||||||
|
|
||||||
|
|
 class Singleton:
     """
@@ -67,6 +111,9 @@ class Singleton:
         except AttributeError:
             self._instance = self._decorated()
             return self._instance
+        except (ImportError, NameError) as e:
+            log.debug(e)
+            return None

     def __call__(self):
         raise TypeError('Singletons must be accessed through `Instance()`.')

@@ -78,7 +125,11 @@ class Singleton:
 @Singleton
 class Gauth:
     def __init__(self):
-        self.auth = GoogleAuth(settings_file=os.path.join(config.get_main_dir, 'settings.yaml'))
+        try:
+            self.auth = GoogleAuth(settings_file=SETTINGS_YAML)
+        except NameError as error:
+            log.error(error)
+            self.auth = None


 @Singleton
@@ -87,11 +138,15 @@ class Gdrive:
         self.drive = getDrive(gauth=Gauth.Instance().auth)


-engine = create_engine('sqlite:///{0}'.format(cli.gdpath), echo=False)
+def is_gdrive_ready():
+    return os.path.exists(SETTINGS_YAML) and os.path.exists(CREDENTIALS)
+
+
+engine = create_engine('sqlite:///{0}'.format(cli_param.gd_path), echo=False)
 Base = declarative_base()

 # Open session for database connection
-Session = sessionmaker()
+Session = sessionmaker(autoflush=False)
 Session.configure(bind=engine)
 session = scoped_session(Session)

@@ -118,45 +173,28 @@ class PermissionAdded(Base):
         return str(self.gdrive_id)


-def migrate():
-    if not engine.dialect.has_table(engine.connect(), "permissions_added"):
-        PermissionAdded.__table__.create(bind = engine)
-    for sql in session.execute("select sql from sqlite_master where type='table'"):
-        if 'CREATE TABLE gdrive_ids' in sql[0]:
-            currUniqueConstraint = 'UNIQUE (gdrive_id)'
-            if currUniqueConstraint in sql[0]:
-                sql=sql[0].replace(currUniqueConstraint, 'UNIQUE (gdrive_id, path)')
-                sql=sql.replace(GdriveId.__tablename__, GdriveId.__tablename__ + '2')
-                session.execute(sql)
-                session.execute("INSERT INTO gdrive_ids2 (id, gdrive_id, path) SELECT id, "
-                                "gdrive_id, path FROM gdrive_ids;")
-                session.commit()
-                session.execute('DROP TABLE %s' % 'gdrive_ids')
-                session.execute('ALTER TABLE gdrive_ids2 RENAME to gdrive_ids')
-            break
-
-
-if not os.path.exists(cli.gdpath):
+if not os.path.exists(cli_param.gd_path):
     try:
         Base.metadata.create_all(engine)
-    except Exception:
+    except Exception as ex:
+        log.error("Error connect to database: {} - {}".format(cli_param.gd_path, ex))
         raise
-    migrate()
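The `Singleton` decorator used for `Gauth` and `Gdrive` above caches exactly one instance per decorated class and forbids direct instantiation. A minimal standalone sketch of the same pattern (the `Counter` example class is illustrative, not part of the diff):

```python
class Singleton:
    """Decorator that lazily creates and caches a single instance.

    The decorated class is accessed via Instance(); calling the decorated
    name directly raises TypeError, mirroring the class in the diff above.
    """
    def __init__(self, decorated):
        self._decorated = decorated

    def Instance(self):
        try:
            return self._instance
        except AttributeError:
            # First access: build and cache the single instance
            self._instance = self._decorated()
            return self._instance

    def __call__(self):
        raise TypeError('Singletons must be accessed through `Instance()`.')


@Singleton
class Counter:
    # Hypothetical example class, not from the diff
    def __init__(self):
        self.value = 0
```

Every `Counter.Instance()` call returns the same object, while `Counter()` raises `TypeError`.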
 def getDrive(drive=None, gauth=None):
     if not drive:
         if not gauth:
-            gauth = GoogleAuth(settings_file=os.path.join(config.get_main_dir, 'settings.yaml'))
+            gauth = GoogleAuth(settings_file=SETTINGS_YAML)
         # Try to load saved client credentials
-        gauth.LoadCredentialsFile(os.path.join(config.get_main_dir, 'gdrive_credentials'))
+        gauth.LoadCredentialsFile(CREDENTIALS)
         if gauth.access_token_expired:
             # Refresh them if expired
             try:
                 gauth.Refresh()
             except RefreshError as e:
-                web.app.logger.error("Google Drive error: " + e.message)
+                log.error("Google Drive error: {}".format(e))
-            except Exception as e:
-                web.app.logger.exception(e)
+            except Exception as ex:
+                log.error_or_exception(ex)
         else:
             # Initialize the saved creds
             gauth.Authorize()
@@ -166,18 +204,22 @@ def getDrive(drive=None, gauth=None):
         try:
             drive.auth.Refresh()
         except RefreshError as e:
-            web.app.logger.error("Google Drive error: " + e.message)
+            log.error("Google Drive error: {}".format(e))
     return drive


 def listRootFolders():
-    drive = getDrive(Gdrive.Instance().drive)
-    folder = "'root' in parents and mimeType = 'application/vnd.google-apps.folder' and trashed = false"
-    fileList = drive.ListFile({'q': folder}).GetList()
+    try:
+        drive = getDrive(Gdrive.Instance().drive)
+        folder = "'root' in parents and mimeType = 'application/vnd.google-apps.folder' and trashed = false"
+        fileList = drive.ListFile({'q': folder}).GetList()
+    except (ServerNotFoundError, ssl.SSLError, RefreshError) as e:
+        log.info("GDrive Error {}".format(e))
+        fileList = []
     return fileList


 def getEbooksFolder(drive):
-    return getFolderInFolder('root',config.config_google_drive_folder,drive)
+    return getFolderInFolder('root', config.config_google_drive_folder, drive)


 def getFolderInFolder(parentId, folderName, drive):
@@ -203,11 +245,15 @@ def getEbooksFolderId(drive=None):
     try:
         gDriveId.gdrive_id = getEbooksFolder(drive)['id']
     except Exception:
-        web.app.logger.error('Error gDrive, root ID not found')
+        log.error('Error gDrive, root ID not found')
     gDriveId.path = '/'
     session.merge(gDriveId)
-    session.commit()
-    return
+    try:
+        session.commit()
+    except OperationalError as ex:
+        log.error_or_exception('Database error: {}'.format(ex))
+        session.rollback()
+    return gDriveId.gdrive_id


 def getFile(pathId, fileName, drive):
@@ -221,37 +267,47 @@ def getFile(pathId, fileName, drive):


 def getFolderId(path, drive):
     # drive = getDrive(drive)
-    currentFolderId = getEbooksFolderId(drive)
-    sqlCheckPath = path if path[-1] == '/' else path + '/'
-    storedPathName = session.query(GdriveId).filter(GdriveId.path == sqlCheckPath).first()
-    if not storedPathName:
-        dbChange = False
-        s = path.split('/')
-        for i, x in enumerate(s):
-            if len(x) > 0:
-                currentPath = "/".join(s[:i+1])
-                if currentPath[-1] != '/':
-                    currentPath = currentPath + '/'
-                storedPathName = session.query(GdriveId).filter(GdriveId.path == currentPath).first()
-                if storedPathName:
-                    currentFolderId = storedPathName.gdrive_id
-                else:
-                    currentFolder = getFolderInFolder(currentFolderId, x, drive)
-                    if currentFolder:
-                        gDriveId = GdriveId()
-                        gDriveId.gdrive_id = currentFolder['id']
-                        gDriveId.path = currentPath
-                        session.merge(gDriveId)
-                        dbChange = True
-                        currentFolderId = currentFolder['id']
-                    else:
-                        currentFolderId = None
-                        break
-        if dbChange:
-            session.commit()
-    else:
-        currentFolderId = storedPathName.gdrive_id
+    currentFolderId = None
+    try:
+        currentFolderId = getEbooksFolderId(drive)
+        sqlCheckPath = path if path[-1] == '/' else path + '/'
+        storedPathName = session.query(GdriveId).filter(GdriveId.path == sqlCheckPath).first()
+        if not storedPathName:
+            dbChange = False
+            s = path.split('/')
+            for i, x in enumerate(s):
+                if len(x) > 0:
+                    currentPath = "/".join(s[:i+1])
+                    if currentPath[-1] != '/':
+                        currentPath = currentPath + '/'
+                    storedPathName = session.query(GdriveId).filter(GdriveId.path == currentPath).first()
+                    if storedPathName:
+                        currentFolderId = storedPathName.gdrive_id
+                    else:
+                        currentFolder = getFolderInFolder(currentFolderId, x, drive)
+                        if currentFolder:
+                            gDriveId = GdriveId()
+                            gDriveId.gdrive_id = currentFolder['id']
+                            gDriveId.path = currentPath
+                            session.merge(gDriveId)
+                            dbChange = True
+                            currentFolderId = currentFolder['id']
+                        else:
+                            currentFolderId = None
+                            break
+            if dbChange:
+                session.commit()
+        else:
+            currentFolderId = storedPathName.gdrive_id
+    except (OperationalError, IntegrityError, StaleDataError) as ex:
+        log.error_or_exception('Database error: {}'.format(ex))
+        session.rollback()
+    except ApiRequestError as ex:
+        log.error('{} {}'.format(ex.error['message'], path))
+        session.rollback()
+    except RefreshError as ex:
+        log.error(ex)
     return currentFolderId


@@ -269,7 +325,7 @@ def getFileFromEbooksFolder(path, fileName):
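`getFolderId` above walks a `/`-separated path one segment at a time, reusing any folder id already cached for a path prefix and only querying Drive for the missing tail. The same incremental prefix-caching scheme can be sketched standalone with a plain dict in place of the `gdrive_ids` table and a callable in place of `getFolderInFolder` (both names below are illustrative, not from the diff):

```python
def resolve_folder_id(path, cache, lookup):
    """Walk a '/'-separated path, reusing cached folder ids per prefix.

    cache: dict mapping 'prefix/' -> folder id (stands in for the gdrive_ids table)
    lookup: callable(parent_id, name) -> folder id or None
            (stands in for getFolderInFolder)
    """
    current_id = 'root'
    parts = [p for p in path.split('/') if p]
    for i, name in enumerate(parts):
        prefix = '/'.join(parts[:i + 1]) + '/'
        if prefix in cache:
            # Prefix already resolved: skip the remote lookup entirely
            current_id = cache[prefix]
            continue
        found = lookup(current_id, name)
        if found is None:
            return None
        cache[prefix] = found
        current_id = found
    return current_id
```

On a second call with a warm cache, no lookups are issued at all, which is the point of storing one row per path prefix.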
 def moveGdriveFileRemote(origin_file_id, new_title):
-    origin_file_id['title']= new_title
+    origin_file_id['title'] = new_title
     origin_file_id.Upload()


@@ -285,17 +341,28 @@ def moveGdriveFolderRemote(origin_file, target_folder):
     children = drive.auth.service.children().list(folderId=previous_parents).execute()
     gFileTargetDir = getFileFromEbooksFolder(None, target_folder)
     if not gFileTargetDir:
-        # Folder is not existing, create, and move folder
         gFileTargetDir = drive.CreateFile(
             {'title': target_folder, 'parents': [{"kind": "drive#fileLink", 'id': getEbooksFolderId()}],
              "mimeType": "application/vnd.google-apps.folder"})
         gFileTargetDir.Upload()
         # Move the file to the new folder
         drive.auth.service.files().update(fileId=origin_file['id'],
                                           addParents=gFileTargetDir['id'],
                                           removeParents=previous_parents,
                                           fields='id, parents').execute()
-    # if previous_parents has no childs anymore, delete original fileparent
+    elif gFileTargetDir['title'] != target_folder:
+        # Folder is not existing, create, and move folder
+        drive.auth.service.files().patch(fileId=origin_file['id'],
+                                         body={'title': target_folder},
+                                         fields='title').execute()
+    else:
+        # Move the file to the new folder
+        drive.auth.service.files().update(fileId=origin_file['id'],
+                                          addParents=gFileTargetDir['id'],
+                                          removeParents=previous_parents,
+                                          fields='id, parents').execute()
+    # if previous_parents has no children anymore, delete original fileparent
     if len(children['items']) == 1:
         deleteDatabaseEntry(previous_parents)
         drive.auth.service.files().delete(fileId=previous_parents).execute()

@@ -336,29 +403,33 @@ def copyToDrive(drive, uploadFile, createRoot, replaceFiles,
     driveFile.Upload()


-def uploadFileToEbooksFolder(destFile, f):
+def uploadFileToEbooksFolder(destFile, f, string=False):
     drive = getDrive(Gdrive.Instance().drive)
     parent = getEbooksFolder(drive)
     splitDir = destFile.split('/')
     for i, x in enumerate(splitDir):
         if i == len(splitDir)-1:
-            existingFiles = drive.ListFile({'q': "title = '%s' and '%s' in parents and trashed = false" %
-                                            (x.replace("'", r"\'"), parent['id'])}).GetList()
-            if len(existingFiles) > 0:
-                driveFile = existingFiles[0]
+            existing_Files = drive.ListFile({'q': "title = '%s' and '%s' in parents and trashed = false" %
+                                             (x.replace("'", r"\'"), parent['id'])}).GetList()
+            if len(existing_Files) > 0:
+                driveFile = existing_Files[0]
             else:
-                driveFile = drive.CreateFile({'title': x, 'parents': [{"kind": "drive#fileLink", 'id': parent['id']}],})
-            driveFile.SetContentFile(f)
+                driveFile = drive.CreateFile({'title': x,
+                                              'parents': [{"kind": "drive#fileLink", 'id': parent['id']}], })
+            if not string:
+                driveFile.SetContentFile(f)
+            else:
+                driveFile.SetContentString(f)
             driveFile.Upload()
         else:
-            existingFolder = drive.ListFile({'q': "title = '%s' and '%s' in parents and trashed = false" %
+            existing_Folder = drive.ListFile({'q': "title = '%s' and '%s' in parents and trashed = false" %
                                               (x.replace("'", r"\'"), parent['id'])}).GetList()
-            if len(existingFolder) == 0:
+            if len(existing_Folder) == 0:
                 parent = drive.CreateFile({'title': x, 'parents': [{"kind": "drive#fileLink", 'id': parent['id']}],
                                            "mimeType": "application/vnd.google-apps.folder"})
                 parent.Upload()
             else:
-                parent = existingFolder[0]
+                parent = existing_Folder[0]
 def watchChange(drive, channel_id, channel_type, channel_address,
@@ -443,17 +514,23 @@ def getChangeById (drive, change_id):
         change = drive.auth.service.changes().get(changeId=change_id).execute()
         return change
     except (errors.HttpError) as error:
-        web.app.logger.info(error.message)
+        log.error(error)
         return None
-    except Exception as e:
-        web.app.logger.info(e)
+    except Exception as ex:
+        log.error(ex)
         return None


 # Deletes the local hashes database to force search for new folder names
 def deleteDatabaseOnChange():
-    session.query(GdriveId).delete()
-    session.commit()
+    try:
+        session.query(GdriveId).delete()
+        session.commit()
+    except (OperationalError, InvalidRequestError) as ex:
+        session.rollback()
+        log.error_or_exception('Database error: {}'.format(ex))
+        session.rollback()


 def updateGdriveCalibreFromLocal():
     copyToDrive(Gdrive.Instance().drive, config.config_calibre_dir, False, True)

@@ -463,20 +540,29 @@ def updateGdriveCalibreFromLocal():

 # update gdrive.db on edit of books title
 def updateDatabaseOnEdit(ID,newPath):
-    sqlCheckPath = newPath if newPath[-1] == '/' else newPath + u'/'
+    sqlCheckPath = newPath if newPath[-1] == '/' else newPath + '/'
     storedPathName = session.query(GdriveId).filter(GdriveId.gdrive_id == ID).first()
     if storedPathName:
         storedPathName.path = sqlCheckPath
-        session.commit()
+        try:
+            session.commit()
+        except OperationalError as ex:
+            log.error_or_exception('Database error: {}'.format(ex))
+            session.rollback()


 # Deletes the hashes in database of deleted book
 def deleteDatabaseEntry(ID):
     session.query(GdriveId).filter(GdriveId.gdrive_id == ID).delete()
-    session.commit()
+    try:
+        session.commit()
+    except OperationalError as ex:
+        log.error_or_exception('Database error: {}'.format(ex))
+        session.rollback()


 # Gets cover file from gdrive
+# ToDo: Check is this right everyone get read permissions on cover files?
 def get_cover_via_gdrive(cover_path):
     df = getFileFromEbooksFolder(cover_path, 'cover.jpg')
     if df:
@@ -490,7 +576,34 @@ def get_cover_via_gdrive(cover_path):
         permissionAdded = PermissionAdded()
         permissionAdded.gdrive_id = df['id']
         session.add(permissionAdded)
-        session.commit()
+        try:
+            session.commit()
+        except OperationalError as ex:
+            log.error_or_exception('Database error: {}'.format(ex))
+            session.rollback()
+        return df.metadata.get('webContentLink')
+    else:
+        return None
+
+
+# Gets cover file from gdrive
+def get_metadata_backup_via_gdrive(metadata_path):
+    df = getFileFromEbooksFolder(metadata_path, 'metadata.opf')
+    if df:
+        if not session.query(PermissionAdded).filter(PermissionAdded.gdrive_id == df['id']).first():
+            df.GetPermissions()
+            df.InsertPermission({
+                'type': 'anyone',
+                'value': 'anyone',
+                'role': 'writer',  # ToDo needs write access
+                'withLink': True})
+            permissionAdded = PermissionAdded()
+            permissionAdded.gdrive_id = df['id']
+            session.add(permissionAdded)
+            try:
+                session.commit()
+            except OperationalError as ex:
+                log.error_or_exception('Database error: {}'.format(ex))
+                session.rollback()
         return df.metadata.get('webContentLink')
     else:
         return None

@@ -504,18 +617,75 @@ def partial(total_byte_len, part_size_limit):
     return s

 # downloads files in chunks from gdrive
-def do_gdrive_download(df, headers):
+def do_gdrive_download(df, headers, convert_encoding=False):
     total_size = int(df.metadata.get('fileSize'))
     download_url = df.metadata.get('downloadUrl')
     s = partial(total_size, 1024 * 1024)  # I'm downloading BIG files, so 100M chunk size is fine for me

-    def stream():
+    def stream(convert_encoding):
         for byte in s:
-            headers = {"Range": 'bytes=%s-%s' % (byte[0], byte[1])}
+            headers = {"Range": 'bytes={}-{}'.format(byte[0], byte[1])}
             resp, content = df.auth.Get_Http_Object().request(download_url, headers=headers)
             if resp.status == 206:
+                if convert_encoding:
+                    result = chardet.detect(content)
+                    content = content.decode(result['encoding']).encode('utf-8')
                 yield content
             else:
-                web.app.logger.info('An error occurred: %s' % resp)
+                log.warning('An error occurred: {}'.format(resp))
                 return
-    return Response(stream_with_context(stream()), headers=headers)
+    return Response(stream_with_context(stream(convert_encoding)), headers=headers)
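`do_gdrive_download` above iterates over byte-range pairs produced by `partial(total_byte_len, part_size_limit)` and turns each pair into an HTTP `Range: bytes=start-end` header; a `206 Partial Content` response yields that chunk to the client. The `partial` body itself is outside this excerpt, so the sketch below is an assumed equivalent of the chunking scheme (function names are illustrative):

```python
def byte_ranges(total_byte_len, part_size_limit):
    """Return inclusive (start, end) pairs covering total_byte_len bytes,
    one pair per chunk of at most part_size_limit bytes."""
    ranges = []
    for start in range(0, total_byte_len, part_size_limit):
        end = min(start + part_size_limit, total_byte_len) - 1
        ranges.append((start, end))
    return ranges


def range_headers(total_byte_len, part_size_limit):
    # One HTTP Range header value per chunk, formatted as stream() does
    return ['bytes={}-{}'.format(a, b)
            for a, b in byte_ranges(total_byte_len, part_size_limit)]
```

Note the `end` index is inclusive, matching HTTP range semantics: a 10-byte file split into 4-byte chunks produces `bytes=0-3`, `bytes=4-7`, `bytes=8-9`.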
+_SETTINGS_YAML_TEMPLATE = """
+client_config_backend: settings
+client_config_file: %(client_file)s
+client_config:
+  client_id: %(client_id)s
+  client_secret: %(client_secret)s
+  redirect_uri: %(redirect_uri)s
+
+save_credentials: True
+save_credentials_backend: file
+save_credentials_file: %(credential)s
+
+get_refresh_token: True
+
+oauth_scope:
+  - https://www.googleapis.com/auth/drive
+"""
+
+
+def update_settings(client_id, client_secret, redirect_uri):
+    if redirect_uri.endswith('/'):
+        redirect_uri = redirect_uri[:-1]
+    config_params = {
+        'client_file': CLIENT_SECRETS,
+        'client_id': client_id,
+        'client_secret': client_secret,
+        'redirect_uri': redirect_uri,
+        'credential': CREDENTIALS
+    }
+
+    with open(SETTINGS_YAML, 'w') as f:
+        f.write(_SETTINGS_YAML_TEMPLATE % config_params)
+
+
+def get_error_text(client_secrets=None):
+    if not gdrive_support:
+        return 'Import of optional Google Drive requirements missing'
+
+    if not os.path.isfile(CLIENT_SECRETS):
+        return 'client_secrets.json is missing or not readable'
+
+    try:
+        with open(CLIENT_SECRETS, 'r') as settings:
+            filedata = json.load(settings)
+    except PermissionError:
+        return 'client_secrets.json is missing or not readable'
+
+    if 'web' not in filedata:
+        return 'client_secrets.json is not configured for web application'
+    if 'redirect_uris' not in filedata['web']:
+        return 'Callback url (redirect url) is missing in client_secrets.json'
+    if client_secrets:
+        client_secrets.update(filedata['web'])
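`update_settings` fills the `_SETTINGS_YAML_TEMPLATE` with Python's %-style mapping substitution: each `%(name)s` placeholder is replaced by the matching key from `config_params`. A reduced template exercising the same mechanism (the values here are dummies, not real credentials):

```python
# Reduced version of the settings template, same %(name)s substitution
template = """client_config:
  client_id: %(client_id)s
  client_secret: %(client_secret)s
"""

params = {'client_id': 'abc.apps.example', 'client_secret': 's3cret'}  # dummy values
rendered = template % params
```

One caveat of this scheme: every `%` in the template must belong to a placeholder (a literal percent sign would need to be written `%%`), which the Drive OAuth settings file happens not to contain.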
@@ -0,0 +1,29 @@
+# -*- coding: utf-8 -*-
+
+# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
+# Copyright (C) 2022 OzzieIsaacs
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+
+from gevent.pywsgi import WSGIHandler
+
+
+class MyWSGIHandler(WSGIHandler):
+    def get_environ(self):
+        env = super().get_environ()
+        path, __ = self.path.split('?', 1) if '?' in self.path else (self.path, '')
+        env['RAW_URI'] = path
+        return env
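`MyWSGIHandler.get_environ` above strips the query string before storing the path in `RAW_URI`, splitting on the first `?` only. That split expression can be exercised on its own:

```python
def raw_uri(path):
    """Return the request path without its query string, as get_environ does.

    split('?', 1) stops at the first '?', so a '?' inside the query string
    is kept out of the returned path either way.
    """
    p, __ = path.split('?', 1) if '?' in path else (path, '')
    return p
```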
cps/helper.py (1403 changed lines)
File diff suppressed because it is too large
@@ -1,6 +1,26 @@
-#!/usr/bin/env python
 # -*- coding: utf-8 -*-
+
+# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
+# Copyright (C) 2019 pwr
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+from .iso_language_names import LANGUAGE_NAMES as _LANGUAGE_NAMES
+from . import logger
+
+log = logger.create()
+
+
 try:
     from iso639 import languages, __version__
@@ -15,14 +35,75 @@ except ImportError:
     __version__ = "? (PyCountry)"

 def _copy_fields(l):
-    l.part1 = l.alpha_2
-    l.part3 = l.alpha_3
+    l.part1 = getattr(l, 'alpha_2', None)
+    l.part3 = getattr(l, 'alpha_3', None)
     return l

 def get(name=None, part1=None, part3=None):
-    if (part3 is not None):
+    if part3 is not None:
         return _copy_fields(pyc_languages.get(alpha_3=part3))
-    if (part1 is not None):
+    if part1 is not None:
         return _copy_fields(pyc_languages.get(alpha_2=part1))
-    if (name is not None):
+    if name is not None:
         return _copy_fields(pyc_languages.get(name=name))
+
+
+def get_language_names(locale):
+    names = _LANGUAGE_NAMES.get(str(locale))
+    if names is None:
+        names = _LANGUAGE_NAMES.get(locale.language)
+    return names
+
+
+def get_language_name(locale, lang_code):
+    UNKNOWN_TRANSLATION = "Unknown"
+    names = get_language_names(locale)
+    if names is None:
+        log.error(f"Missing language names for locale: {str(locale)}/{locale.language}")
+        return UNKNOWN_TRANSLATION
+
+    name = names.get(lang_code, UNKNOWN_TRANSLATION)
+    if name == UNKNOWN_TRANSLATION:
+        log.error("Missing translation for language name: {}".format(lang_code))
+
+    return name
+
+
+def get_language_codes(locale, language_names, remainder=None):
+    language_names = set(x.strip().lower() for x in language_names if x)
+    lang = list()
+    for k, v in get_language_names(locale).items():
+        v = v.lower()
+        if v in language_names:
+            lang.append(k)
+            language_names.remove(v)
+    if remainder is not None and language_names:
+        remainder.extend(language_names)
+    return lang
+
+
+def get_valid_language_codes(locale, language_names, remainder=None):
+    lang = list()
+    if "" in language_names:
+        language_names.remove("")
+    for k, __ in get_language_names(locale).items():
+        if k in language_names:
+            lang.append(k)
+            language_names.remove(k)
+    if remainder is not None and len(language_names):
+        remainder.extend(language_names)
+    return lang
+
+
+def get_lang3(lang):
+    try:
+        if len(lang) == 2:
+            ret_value = get(part1=lang).part3
+        elif len(lang) == 3:
+            ret_value = lang
+        else:
+            ret_value = ""
+    except KeyError:
+        ret_value = lang
+    return ret_value
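The new `get_language_codes` matches user-supplied language names case-insensitively against a locale's name-to-code map, collecting unmatched names into `remainder`. The same logic works standalone with a plain dict standing in for `get_language_names(locale)` (the dict and its entries below are illustrative):

```python
def match_language_codes(names_map, language_names, remainder=None):
    """Case-insensitive language-name -> code matching, as in get_language_codes.

    names_map: dict of code -> display name (stands in for
               get_language_names(locale)); empty inputs are dropped.
    """
    language_names = set(x.strip().lower() for x in language_names if x)
    lang = []
    for k, v in names_map.items():
        v = v.lower()
        if v in language_names:
            lang.append(k)
            language_names.remove(v)
    if remainder is not None and language_names:
        # Anything not found in the map is reported back to the caller
        remainder.extend(language_names)
    return lang
```

Because matched names are removed from the set as they are found, each input name maps to at most one code, and whatever is left over is exactly the set of unrecognized names.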
File diff suppressed because it is too large
@@ -0,0 +1,182 @@
+# -*- coding: utf-8 -*-
+
+# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
+# Copyright (C) 2018-2019 OzzieIsaacs, cervinko, jkrehm, bodybybuddha, ok11,
+#                         andy29485, idalin, Kyosfonica, wuqi, Kennyl, lemmsh,
+#                         falgh1, grunjol, csitko, ytils, xybydy, trasba, vrabe,
+#                         ruben-herold, marblepebble, JackED42, SiphonSquirrel,
+#                         apetresc, nanu-c, mutschler
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+# custom jinja filters
+
+from markupsafe import escape
+import datetime
+import mimetypes
+from uuid import uuid4
+
+# from babel.dates import format_date
+from flask import Blueprint, request, url_for
+from flask_babel import format_date
+from flask_login import current_user
+
+from . import constants, logger
+
+jinjia = Blueprint('jinjia', __name__)
+log = logger.create()
+
+
+# pagination links in jinja
+@jinjia.app_template_filter('url_for_other_page')
+def url_for_other_page(page):
+    args = request.view_args.copy()
+    args['page'] = page
+    for get, val in request.args.items():
+        args[get] = val
+    return url_for(request.endpoint, **args)
+
+
+# shortentitles to at longest nchar, shorten longer words if necessary
+@jinjia.app_template_filter('shortentitle')
+def shortentitle_filter(s, nchar=20):
+    text = s.split()
+    res = ""  # result
+    suml = 0  # overall length
+    for line in text:
+        if suml >= 60:
+            res += '...'
+            break
+        # if word longer than 20 chars truncate line and append '...', otherwise add whole word to result
+        # string, and summarize total length to stop at chars given by nchar
+        if len(line) > nchar:
+            res += line[:(nchar-3)] + '[..] '
+            suml += nchar+3
+        else:
+            res += line + ' '
+            suml += len(line) + 1
+    return res.strip()
+
+
+@jinjia.app_template_filter('mimetype')
+def mimetype_filter(val):
+    return mimetypes.types_map.get('.' + val, 'application/octet-stream')
+
+
+@jinjia.app_template_filter('formatdate')
+def formatdate_filter(val):
+    try:
+        return format_date(val, format='medium')
+    except AttributeError as e:
+        log.error('Babel error: %s, Current user locale: %s, Current User: %s', e,
+                  current_user.locale,
+                  current_user.name
+                  )
+        return val
+
+
+@jinjia.app_template_filter('formatdateinput')
+def format_date_input(val):
+    input_date = val.isoformat().split('T', 1)[0]  # Hack to support dates <1900
+    return '' if input_date == "0101-01-01" else input_date
+
+
+@jinjia.app_template_filter('strftime')
+def timestamptodate(date, fmt=None):
+    date = datetime.datetime.fromtimestamp(
+        int(date)/1000
+    )
+    native = date.replace(tzinfo=None)
+    if fmt:
+        time_format = fmt
+    else:
+        time_format = '%d %m %Y - %H:%S'
+    return native.strftime(time_format)
+
+
+@jinjia.app_template_filter('yesno')
+def yesno(value, yes, no):
+    return yes if value else no
+
+
+@jinjia.app_template_filter('formatfloat')
+def formatfloat(value, decimals=1):
+    value = 0 if not value else value
+    return ('{0:.' + str(decimals) + 'f}').format(value).rstrip('0').rstrip('.')
+
+
+@jinjia.app_template_filter('formatseriesindex')
+def formatseriesindex_filter(series_index):
+    if series_index:
+        try:
|
||||||
|
if int(series_index) - series_index == 0:
|
||||||
|
return int(series_index)
|
||||||
|
else:
|
||||||
|
return series_index
|
||||||
|
except (ValueError, TypeError):
|
||||||
|
return series_index
|
||||||
|
return 0
|
||||||
|
|
||||||
|
|
||||||
|
@jinjia.app_template_filter('escapedlink')
|
||||||
|
def escapedlink_filter(url, text):
|
||||||
|
return "<a href='{}'>{}</a>".format(url, escape(text))
|
||||||
|
|
||||||
|
|
||||||
|
@jinjia.app_template_filter('uuidfilter')
|
||||||
|
def uuidfilter(var):
|
||||||
|
return uuid4()
|
||||||
|
|
||||||
|
|
||||||
|
@jinjia.app_template_filter('cache_timestamp')
|
||||||
|
def cache_timestamp(rolling_period='month'):
|
||||||
|
if rolling_period == 'day':
|
||||||
|
return str(int(datetime.datetime.today().replace(hour=1, minute=1).timestamp()))
|
||||||
|
elif rolling_period == 'year':
|
||||||
|
return str(int(datetime.datetime.today().replace(day=1).timestamp()))
|
||||||
|
else:
|
||||||
|
return str(int(datetime.datetime.today().replace(month=1, day=1).timestamp()))
|
||||||
|
|
||||||
|
|
||||||
|
@jinjia.app_template_filter('last_modified')
|
||||||
|
def book_last_modified(book):
|
||||||
|
return str(int(book.last_modified.timestamp()))
|
||||||
|
|
||||||
|
|
||||||
|
@jinjia.app_template_filter('get_cover_srcset')
|
||||||
|
def get_cover_srcset(book):
|
||||||
|
srcset = list()
|
||||||
|
resolutions = {
|
||||||
|
constants.COVER_THUMBNAIL_SMALL: 'sm',
|
||||||
|
constants.COVER_THUMBNAIL_MEDIUM: 'md',
|
||||||
|
constants.COVER_THUMBNAIL_LARGE: 'lg'
|
||||||
|
}
|
||||||
|
for resolution, shortname in resolutions.items():
|
||||||
|
url = url_for('web.get_cover', book_id=book.id, resolution=shortname, c=book_last_modified(book))
|
||||||
|
srcset.append(f'{url} {resolution}x')
|
||||||
|
return ', '.join(srcset)
|
||||||
|
|
||||||
|
|
||||||
|
@jinjia.app_template_filter('get_series_srcset')
|
||||||
|
def get_cover_srcset(series):
|
||||||
|
srcset = list()
|
||||||
|
resolutions = {
|
||||||
|
constants.COVER_THUMBNAIL_SMALL: 'sm',
|
||||||
|
constants.COVER_THUMBNAIL_MEDIUM: 'md',
|
||||||
|
constants.COVER_THUMBNAIL_LARGE: 'lg'
|
||||||
|
}
|
||||||
|
for resolution, shortname in resolutions.items():
|
||||||
|
url = url_for('web.get_series_cover', series_id=series.id, resolution=shortname, c=cache_timestamp())
|
||||||
|
srcset.append(f'{url} {resolution}x')
|
||||||
|
return ', '.join(srcset)
|
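The `formatfloat` filter trims a fixed-precision rendering back to its shortest form; a standalone sketch of the same logic (the Flask/Jinja wiring is omitted):

```python
# Standalone sketch of the trailing-zero trimming performed by the
# 'formatfloat' filter: render to the requested number of decimals,
# then strip trailing zeros and a dangling decimal point.
def format_float(value, decimals=1):
    value = 0 if not value else value
    return ('{0:.' + str(decimals) + 'f}').format(value).rstrip('0').rstrip('.')

print(format_float(2.0))         # '2'
print(format_float(2.5))         # '2.5'
print(format_float(None))        # '0'
print(format_float(3.14159, 3))  # '3.142'
```

One quirk worth knowing: with `decimals=0` there is no decimal point to stop the strip, so an integral value such as `100` would be reduced to `'1'`; the default of one decimal avoids this.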
File diff suppressed because it is too large
@@ -0,0 +1,174 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 shavitmichael, OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.


"""This module is used to control authentication/authorization of Kobo sync requests.
This module also includes research notes into the auth protocol used by Kobo devices.

Log-in:
When first booting a Kobo device the user must sign into a Kobo (or affiliate) account.
Upon successful sign-in, the user is redirected to
    https://auth.kobobooks.com/CrossDomainSignIn?id=<some id>
which serves the following response:
    <script type='text/javascript'>
        location.href='kobo://UserAuthenticated?userId=<redacted>&userKey<redacted>&email=<redacted>&returnUrl=https%3a%2f%2fwww.kobo.com';
    </script>
And triggers the insertion of a userKey into the device's User table.

Together, the device's DeviceId and UserKey act as an *irrevocable* authentication
token to most (if not all) Kobo APIs. In fact, in most cases only the UserKey is
required to authorize the API call.

Changing the Kobo password *does not* invalidate user keys! This has apparently been a
known issue for a few years now: https://www.mobileread.com/forums/showpost.php?p=3476851&postcount=13
(although this poster hypothesised that Kobo could blacklist a DeviceId, many endpoints
will still grant access given the userkey.)

Official Kobo Store Api authorization:
* For most of the endpoints we care about (sync, metadata, tags, etc), the userKey is
passed in the x-kobo-userkey header, and is sufficient to authorize the API call.
* Some endpoints (e.g: AnnotationService) instead make use of Bearer tokens passed through
an authorization header. To get a BearerToken, the device makes a POST request to the
v1/auth/device endpoint with the secret UserKey and the device's DeviceId.
* The book download endpoint passes an auth token as a URL param instead of a header.

Our implementation:
We pretty much ignore all of the above. To authenticate the user, we generate a random
and unique token that they append to the CalibreWeb Url when setting up the api_store
setting on the device.
Thus, every request from the device to the api_store will hit CalibreWeb with the
auth_token in the url (e.g: https://mylibrary.com/<auth_token>/v1/library/sync).
In addition, once authenticated we also set the login cookie on the response that will
be sent back for the duration of the session to authorize subsequent API calls (in
particular calls to non-Kobo specific endpoints such as the CalibreWeb book download).
"""

from binascii import hexlify
from datetime import datetime
from os import urandom
from functools import wraps

from flask import g, Blueprint, abort, request
from flask_login import login_user, current_user, login_required
from flask_babel import gettext as _
from flask_limiter import RateLimitExceeded

from . import logger, config, calibre_db, db, helper, ub, lm, limiter
from .render_template import render_title_template

log = logger.create()

kobo_auth = Blueprint("kobo_auth", __name__, url_prefix="/kobo_auth")


@kobo_auth.route("/generate_auth_token/<int:user_id>")
@login_required
def generate_auth_token(user_id):
    warning = False
    host_list = request.host.rsplit(':')
    if len(host_list) == 1:
        host = ':'.join(host_list)
    else:
        host = ':'.join(host_list[0:-1])
    if host.startswith('127.') or host.lower() == 'localhost' or host.startswith('[::ffff:7f') or host == "[::1]":
        warning = _('Please access Calibre-Web from non localhost to get valid api_endpoint for kobo device')

    # Generate auth token if none is existing for this user
    auth_token = ub.session.query(ub.RemoteAuthToken).filter(
        ub.RemoteAuthToken.user_id == user_id
    ).filter(ub.RemoteAuthToken.token_type == 1).first()

    if not auth_token:
        auth_token = ub.RemoteAuthToken()
        auth_token.user_id = user_id
        auth_token.expiration = datetime.max
        auth_token.auth_token = (hexlify(urandom(16))).decode("utf-8")
        auth_token.token_type = 1

        ub.session.add(auth_token)
        ub.session_commit()

    books = calibre_db.session.query(db.Books).join(db.Data).all()

    for book in books:
        formats = [data.format for data in book.data]
        if 'KEPUB' not in formats and config.config_kepubifypath and 'EPUB' in formats:
            helper.convert_book_format(book.id, config.config_calibre_dir, 'EPUB', 'KEPUB', current_user.name)

    return render_title_template(
        "generate_kobo_auth_url.html",
        title=_("Kobo Setup"),
        auth_token=auth_token.auth_token,
        warning=warning
    )


@kobo_auth.route("/deleteauthtoken/<int:user_id>", methods=["POST"])
@login_required
def delete_auth_token(user_id):
    # Invalidate any previously generated Kobo Auth token for this user
    ub.session.query(ub.RemoteAuthToken).filter(ub.RemoteAuthToken.user_id == user_id)\
        .filter(ub.RemoteAuthToken.token_type == 1).delete()

    return ub.session_commit()


def disable_failed_auth_redirect_for_blueprint(bp):
    lm.blueprint_login_views[bp.name] = None


def get_auth_token():
    if "auth_token" in g:
        return g.get("auth_token")
    else:
        return None


def register_url_value_preprocessor(kobo):
    @kobo.url_value_preprocessor
    # pylint: disable=unused-variable
    def pop_auth_token(__, values):
        g.auth_token = values.pop("auth_token")


def requires_kobo_auth(f):
    @wraps(f)
    def inner(*args, **kwargs):
        auth_token = get_auth_token()
        if auth_token is not None:
            try:
                limiter.check()
            except RateLimitExceeded:
                return abort(429)
            except (ConnectionError, Exception) as e:
                log.error("Connection error to limiter backend: %s", e)
                return abort(429)
            user = (
                ub.session.query(ub.User)
                .join(ub.RemoteAuthToken)
                .filter(ub.RemoteAuthToken.auth_token == auth_token).filter(ub.RemoteAuthToken.token_type == 1)
                .first()
            )
            if user is not None:
                login_user(user)
                [limiter.limiter.storage.clear(k.key) for k in limiter.current_limits]
                return f(*args, **kwargs)
        log.debug("Received Kobo request without a recognizable auth token.")
        return abort(401)
    return inner

@@ -0,0 +1,88 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2021 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.


from flask_login import current_user
from . import ub
import datetime
from sqlalchemy.sql.expression import or_, and_, true
from sqlalchemy import exc


# Add the current book id to the kobo_synced_books table for the current user;
# if an entry is already present, do nothing (safety precaution)
def add_synced_books(book_id):
    is_present = ub.session.query(ub.KoboSyncedBooks).filter(ub.KoboSyncedBooks.book_id == book_id)\
        .filter(ub.KoboSyncedBooks.user_id == current_user.id).count()
    if not is_present:
        synced_book = ub.KoboSyncedBooks()
        synced_book.user_id = current_user.id
        synced_book.book_id = book_id
        ub.session.add(synced_book)
        ub.session_commit()


# Select all entries for the current book in the kobo_synced_books table which
# belong to the current user, and delete them
def remove_synced_book(book_id, all=False, session=None):
    if not all:
        user = ub.KoboSyncedBooks.user_id == current_user.id
    else:
        user = true()
    if not session:
        ub.session.query(ub.KoboSyncedBooks).filter(ub.KoboSyncedBooks.book_id == book_id).filter(user).delete()
        ub.session_commit()
    else:
        session.query(ub.KoboSyncedBooks).filter(ub.KoboSyncedBooks.book_id == book_id).filter(user).delete()
        ub.session_commit(_session=session)


def change_archived_books(book_id, state=None, message=None):
    archived_book = ub.session.query(ub.ArchivedBook).filter(and_(ub.ArchivedBook.user_id == int(current_user.id),
                                                                  ub.ArchivedBook.book_id == book_id)).first()
    if not archived_book:
        archived_book = ub.ArchivedBook(user_id=current_user.id, book_id=book_id)

    archived_book.is_archived = state if state else not archived_book.is_archived
    archived_book.last_modified = datetime.datetime.utcnow()  # toDo. Check utc timestamp

    ub.session.merge(archived_book)
    ub.session_commit(message)
    return archived_book.is_archived


# select all books which are synced by the current user and do not belong to a synced shelf, and set them to archived;
# select all shelves from the current user which are synced and do not belong to the "only sync" shelves
def update_on_sync_shelfs(user_id):
    books_to_archive = (ub.session.query(ub.KoboSyncedBooks)
                        .join(ub.BookShelf, ub.KoboSyncedBooks.book_id == ub.BookShelf.book_id, isouter=True)
                        .join(ub.Shelf, ub.Shelf.user_id == user_id, isouter=True)
                        .filter(or_(ub.Shelf.kobo_sync == 0, ub.Shelf.kobo_sync == None))
                        .filter(ub.KoboSyncedBooks.user_id == user_id).all())
    for b in books_to_archive:
        change_archived_books(b.book_id, True)
        ub.session.query(ub.KoboSyncedBooks) \
            .filter(ub.KoboSyncedBooks.book_id == b.book_id) \
            .filter(ub.KoboSyncedBooks.user_id == user_id).delete()
        ub.session_commit()

    # Search all shelves which are currently not synced
    shelves_to_archive = ub.session.query(ub.Shelf).filter(ub.Shelf.user_id == user_id).filter(
        ub.Shelf.kobo_sync == 0).all()
    for a in shelves_to_archive:
        ub.session.add(ub.ShelfArchive(uuid=a.uuid, user_id=user_id))
        ub.session_commit()
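`change_archived_books` either forces the archive flag or toggles it; the decision rule can be isolated in a few lines. Note the quirk that the check is on truthiness rather than `is None`, so an explicit `state=False` still toggles:

```python
# Isolated sketch of the archive-flag rule in change_archived_books:
# a truthy explicit state wins, anything falsy flips the current flag.
def next_archived_state(current, state=None):
    return state if state else not current

print(next_archived_state(False))         # True  (toggle)
print(next_archived_state(True))          # False (toggle)
print(next_archived_state(False, True))   # True  (forced)
# Quirk: state=False is falsy, so it toggles instead of forcing False:
print(next_archived_state(False, False))  # True
```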

@@ -0,0 +1,210 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2019 pwr
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import os
import sys
import inspect
import logging
from logging import Formatter, StreamHandler
from logging.handlers import RotatingFileHandler

from .constants import CONFIG_DIR as _CONFIG_DIR


ACCESS_FORMATTER_GEVENT = Formatter("%(message)s")
ACCESS_FORMATTER_TORNADO = Formatter("[%(asctime)s] %(message)s")

FORMATTER = Formatter("[%(asctime)s] %(levelname)5s {%(name)s:%(lineno)d} %(message)s")
DEFAULT_LOG_LEVEL = logging.INFO
DEFAULT_LOG_FILE = os.path.join(_CONFIG_DIR, "calibre-web.log")
DEFAULT_ACCESS_LOG = os.path.join(_CONFIG_DIR, "access.log")
LOG_TO_STDERR = '/dev/stderr'
LOG_TO_STDOUT = '/dev/stdout'

logging.addLevelName(logging.WARNING, "WARN")
logging.addLevelName(logging.CRITICAL, "CRIT")


class _Logger(logging.Logger):

    def error_or_exception(self, message, stacklevel=2, *args, **kwargs):
        is_debug = self.getEffectiveLevel() <= logging.DEBUG
        if sys.version_info > (3, 7):
            if is_debug:
                self.exception(message, stacklevel=stacklevel, *args, **kwargs)
            else:
                self.error(message, stacklevel=stacklevel, *args, **kwargs)
        else:
            if is_debug:
                self.exception(message, stack_info=True, *args, **kwargs)
            else:
                self.error(message, *args, **kwargs)

    def debug_no_auth(self, message, *args, **kwargs):
        message = message.strip("\r\n")
        if message.startswith("send: AUTH"):
            self.debug(message[:16], *args, **kwargs)
        else:
            self.debug(message, *args, **kwargs)


def get(name=None):
    return logging.getLogger(name)


def create():
    parent_frame = inspect.stack(0)[1]
    if hasattr(parent_frame, 'frame'):
        parent_frame = parent_frame.frame
    else:
        parent_frame = parent_frame[0]
    parent_module = inspect.getmodule(parent_frame)
    return get(parent_module.__name__)


def is_debug_enabled():
    return logging.root.level <= logging.DEBUG


def is_info_enabled(logger):
    return logging.getLogger(logger).level <= logging.INFO


def get_level_name(level):
    return logging.getLevelName(level)


def is_valid_logfile(file_path):
    if file_path == LOG_TO_STDERR or file_path == LOG_TO_STDOUT:
        return True
    if not file_path:
        return True
    if os.path.isdir(file_path):
        return False
    log_dir = os.path.dirname(file_path)
    return (not log_dir) or os.path.isdir(log_dir)


def _absolute_log_file(log_file, default_log_file):
    if log_file:
        if not os.path.dirname(log_file):
            log_file = os.path.join(_CONFIG_DIR, log_file)
        return os.path.abspath(log_file)
    return default_log_file


def get_logfile(log_file):
    return _absolute_log_file(log_file, DEFAULT_LOG_FILE)


def get_accesslogfile(log_file):
    return _absolute_log_file(log_file, DEFAULT_ACCESS_LOG)


def setup(log_file, log_level=None):
    """
    Configure the logging output.
    May be called multiple times.
    """
    log_level = log_level or DEFAULT_LOG_LEVEL
    logging.setLoggerClass(_Logger)
    logging.getLogger(__package__).setLevel(log_level)

    r = logging.root
    if log_level >= logging.INFO or os.environ.get('FLASK_DEBUG'):
        # avoid spamming the log with debug messages from libraries
        r.setLevel(log_level)

    # Otherwise, name gets destroyed on Windows
    if log_file != LOG_TO_STDERR and log_file != LOG_TO_STDOUT:
        log_file = _absolute_log_file(log_file, DEFAULT_LOG_FILE)

    previous_handler = r.handlers[0] if r.handlers else None
    if previous_handler:
        # if the log_file has not changed, don't create a new handler
        if getattr(previous_handler, 'baseFilename', None) == log_file:
            return "" if log_file == DEFAULT_LOG_FILE else log_file
    logging.debug("logging to %s level %s", log_file, r.level)

    if log_file == LOG_TO_STDERR or log_file == LOG_TO_STDOUT:
        if log_file == LOG_TO_STDOUT:
            file_handler = StreamHandler(sys.stdout)
            file_handler.baseFilename = log_file
        else:
            file_handler = StreamHandler(sys.stderr)
            file_handler.baseFilename = log_file
    else:
        try:
            file_handler = RotatingFileHandler(log_file, maxBytes=100000, backupCount=2, encoding='utf-8')
        except (IOError, PermissionError):
            if log_file == DEFAULT_LOG_FILE:
                raise
            file_handler = RotatingFileHandler(DEFAULT_LOG_FILE, maxBytes=100000, backupCount=2, encoding='utf-8')
            log_file = ""
    file_handler.setFormatter(FORMATTER)

    for h in r.handlers:
        r.removeHandler(h)
        h.close()
    r.addHandler(file_handler)
    logging.captureWarnings(True)
    return "" if log_file == DEFAULT_LOG_FILE else log_file


def create_access_log(log_file, log_name, formatter):
    """
    One-time configuration for the web server's access log.
    """
    log_file = _absolute_log_file(log_file, DEFAULT_ACCESS_LOG)
    logging.debug("access log: %s", log_file)

    access_log = logging.getLogger(log_name)
    access_log.propagate = False
    access_log.setLevel(logging.INFO)
    try:
        file_handler = RotatingFileHandler(log_file, maxBytes=50000, backupCount=2, encoding='utf-8')
    except (IOError, PermissionError):
        if log_file == DEFAULT_ACCESS_LOG:
            raise
        file_handler = RotatingFileHandler(DEFAULT_ACCESS_LOG, maxBytes=50000, backupCount=2, encoding='utf-8')
        log_file = ""

    file_handler.setFormatter(formatter)
    access_log.addHandler(file_handler)
    return access_log, "" if _absolute_log_file(log_file, DEFAULT_ACCESS_LOG) == DEFAULT_ACCESS_LOG else log_file


# Enable logging of smtp lib debug output
class StderrLogger(object):
    def __init__(self, name=None):
        self.log = get(name or self.__class__.__name__)
        self.buffer = ''

    def write(self, message):
        try:
            if message == '\n':
                self.log.debug(self.buffer.replace('\n', '\\n'))
                self.buffer = ''
            else:
                self.buffer += message
        except Exception:
            self.log.debug("Logging Error")


# default configuration, before application settings are applied
setup(LOG_TO_STDERR, logging.DEBUG if os.environ.get('FLASK_DEBUG') else DEFAULT_LOG_LEVEL)
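The path handling in `_absolute_log_file` is easy to misread; a self-contained sketch of the same resolution rule, with a stand-in config directory (`/tmp/calibre-web` is illustrative, the real code uses `constants.CONFIG_DIR`):

```python
import os

CONFIG_DIR = "/tmp/calibre-web"  # stand-in for constants.CONFIG_DIR

# Same resolution rule as _absolute_log_file: a bare filename lands in
# the config directory, a path with a directory component is made
# absolute as given, and an empty value selects the default.
def absolute_log_file(log_file, default_log_file):
    if log_file:
        if not os.path.dirname(log_file):
            log_file = os.path.join(CONFIG_DIR, log_file)
        return os.path.abspath(log_file)
    return default_log_file

print(absolute_log_file("", "/x/default.log"))          # '/x/default.log'
print(absolute_log_file("calibre-web.log", "/x/default.log"))
print(absolute_log_file("/var/log/cw.log", "/x/default.log"))
```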

@@ -0,0 +1,81 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2012-2022 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import sys

from . import create_app, limiter
from .jinjia import jinjia
from .remotelogin import remotelogin
from flask import request


def request_username():
    return request.authorization.username


def main():
    app = create_app()

    from .web import web
    from .opds import opds
    from .admin import admi
    from .gdrive import gdrive
    from .editbooks import editbook
    from .about import about
    from .search import search
    from .search_metadata import meta
    from .shelf import shelf
    from .tasks_status import tasks
    from .error_handler import init_errorhandler
    try:
        from .kobo import kobo, get_kobo_activated
        from .kobo_auth import kobo_auth
        from flask_limiter.util import get_remote_address
        kobo_available = get_kobo_activated()
    except (ImportError, AttributeError):  # Also catches the error raised when flask-WTF is not installed (missing csrf decorator)
        kobo_available = False

    try:
        from .oauth_bb import oauth
        oauth_available = True
    except ImportError:
        oauth_available = False

    from . import web_server
    init_errorhandler()

    app.register_blueprint(search)
    app.register_blueprint(tasks)
    app.register_blueprint(web)
    app.register_blueprint(opds)
    limiter.limit("3/minute", key_func=request_username)(opds)
    app.register_blueprint(jinjia)
    app.register_blueprint(about)
    app.register_blueprint(shelf)
    app.register_blueprint(admi)
    app.register_blueprint(remotelogin)
    app.register_blueprint(meta)
    app.register_blueprint(gdrive)
    app.register_blueprint(editbook)
    if kobo_available:
        app.register_blueprint(kobo)
        app.register_blueprint(kobo_auth)
        limiter.limit("3/minute", key_func=get_remote_address)(kobo)
    if oauth_available:
        app.register_blueprint(oauth)
    success = web_server.start()
    sys.exit(0 if success else 1)

@@ -0,0 +1,141 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2022 quarz12
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import concurrent.futures
import requests
from bs4 import BeautifulSoup as BS  # requirement
from typing import List, Optional

try:
    import cchardet  # optional, for better speed
except ImportError:
    pass
from cps import logger
from cps.services.Metadata import MetaRecord, MetaSourceInfo, Metadata

# from time import time
from operator import itemgetter

log = logger.create()


class Amazon(Metadata):
    __name__ = "Amazon"
    __id__ = "amazon"
    headers = {'upgrade-insecure-requests': '1',
               'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36',
               'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
               'sec-gpc': '1',
               'sec-fetch-site': 'none',
               'sec-fetch-mode': 'navigate',
               'sec-fetch-user': '?1',
               'sec-fetch-dest': 'document',
               'accept-encoding': 'gzip, deflate, br',
               'accept-language': 'en-US,en;q=0.9'}
    session = requests.Session()
    session.headers = headers

    def search(
        self, query: str, generic_cover: str = "", locale: str = "en"
    ) -> Optional[List[MetaRecord]]:
        # timer=time()
        def inner(link, index) -> [dict, int]:
            with self.session as session:
                try:
                    r = session.get(f"https://www.amazon.com/{link}")
                    r.raise_for_status()
                except Exception as ex:
                    log.warning(ex)
                    return None
                long_soup = BS(r.text, "lxml")  # ~4sec :/
                soup2 = long_soup.find("div", attrs={"cel_widget_id": "dpx-books-ppd_csm_instrumentation_wrapper"})
                if soup2 is None:
                    return None
                try:
                    match = MetaRecord(
                        title="",
                        authors="",
                        source=MetaSourceInfo(
                            id=self.__id__,
                            description="Amazon Books",
                            link="https://amazon.com/"
                        ),
                        url=f"https://www.amazon.com{link}",
                        # the more searches, the slower; these are too hard to find in reasonable time or might not even exist
                        publisher="",  # very unreliable
                        publishedDate="",  # very unreliable
                        id=None,  # ?
                        tags=[]  # don't exist on amazon
                    )

                    try:
                        match.description = "\n".join(
                            soup2.find("div", attrs={"data-feature-name": "bookDescription"}).stripped_strings)\
                            .replace("\xa0", " ")[:-9].strip().strip("\n")
                    except (AttributeError, TypeError):
                        return None  # if there is no description it is not a book and should therefore be ignored
                    try:
                        match.title = soup2.find("span", attrs={"id": "productTitle"}).text
                    except (AttributeError, TypeError):
                        match.title = ""
                    try:
                        match.authors = [next(
                            filter(lambda i: i != " " and i != "\n" and not i.startswith("{"),
                                   x.findAll(string=True))).strip()
                            for x in soup2.findAll("span", attrs={"class": "author"})]
                    except (AttributeError, TypeError, StopIteration):
                        match.authors = ""
                    try:
                        match.rating = int(
                            soup2.find("span", class_="a-icon-alt").text.split(" ")[0].split(".")[
|
||||||
|
0]) # first number in string
|
||||||
|
except (AttributeError, ValueError):
|
||||||
|
match.rating = 0
|
||||||
|
try:
|
||||||
|
match.cover = soup2.find("img", attrs={"class": "a-dynamic-image frontImage"})["src"]
|
||||||
|
except (AttributeError, TypeError):
|
||||||
|
match.cover = ""
|
||||||
|
return match, index
|
||||||
|
except Exception as e:
|
||||||
|
log.error_or_exception(e)
|
||||||
|
return None
|
||||||
|
|
||||||
|
val = list()
|
||||||
|
if self.active:
|
||||||
|
try:
|
||||||
|
results = self.session.get(
|
||||||
|
f"https://www.amazon.com/s?k={query.replace(' ', '+')}&i=digital-text&sprefix={query.replace(' ', '+')}"
|
||||||
|
f"%2Cdigital-text&ref=nb_sb_noss",
|
||||||
|
headers=self.headers)
|
||||||
|
results.raise_for_status()
|
||||||
|
except requests.exceptions.HTTPError as e:
|
||||||
|
log.error_or_exception(e)
|
||||||
|
return []
|
||||||
|
except Exception as e:
|
||||||
|
log.warning(e)
|
||||||
|
return []
|
||||||
|
soup = BS(results.text, 'html.parser')
|
||||||
|
links_list = [next(filter(lambda i: "digital-text" in i["href"], x.findAll("a")))["href"] for x in
|
||||||
|
soup.findAll("div", attrs={"data-component-type": "s-search-result"})]
|
||||||
|
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
|
||||||
|
fut = {executor.submit(inner, link, index) for index, link in enumerate(links_list[:5])}
|
||||||
|
val = list(map(lambda x : x.result() ,concurrent.futures.as_completed(fut)))
|
||||||
|
result = list(filter(lambda x: x, val))
|
||||||
|
return [x[0] for x in sorted(result, key=itemgetter(1))] #sort by amazons listing order for best relevance
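The submit-and-reorder pattern at the end of `search` above can be isolated into a minimal sketch. `fetch` and `links` are stand-ins for the scraper and the search-result URLs: each task carries its original listing index, `as_completed` yields results in arbitrary completion order, and the final sort restores the original relevance order while dropping failed (`None`) results:

```python
import concurrent.futures
from operator import itemgetter

def fetch(item, index):
    # Placeholder for a network call; returns the parsed result plus the
    # position the item held in the original listing.
    return f"parsed-{item}", index

links = ["a", "b", "c", "d"]
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    futures = {executor.submit(fetch, link, i) for i, link in enumerate(links)}
    # as_completed yields futures in completion order, which is arbitrary
    results = [f.result() for f in concurrent.futures.as_completed(futures)]

# Drop failures (None) and restore the original listing order via the index
ordered = [r[0] for r in sorted(filter(None, results), key=itemgetter(1))]
```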
@ -0,0 +1,92 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2021 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

# ComicVine api document: https://comicvine.gamespot.com/api/documentation
from typing import Dict, List, Optional
from urllib.parse import quote

import requests
from cps import logger
from cps.services.Metadata import MetaRecord, MetaSourceInfo, Metadata

log = logger.create()


class ComicVine(Metadata):
    __name__ = "ComicVine"
    __id__ = "comicvine"
    DESCRIPTION = "ComicVine Books"
    META_URL = "https://comicvine.gamespot.com/"
    API_KEY = "57558043c53943d5d1e96a9ad425b0eb85532ee6"
    BASE_URL = (
        f"https://comicvine.gamespot.com/api/search?api_key={API_KEY}"
        f"&resources=issue&query="
    )
    QUERY_PARAMS = "&sort=name:desc&format=json"
    HEADERS = {"User-Agent": "Not Evil Browser"}

    def search(
        self, query: str, generic_cover: str = "", locale: str = "en"
    ) -> Optional[List[MetaRecord]]:
        val = list()
        if self.active:
            title_tokens = list(self.get_title_tokens(query, strip_joiners=False))
            if title_tokens:
                tokens = [quote(t.encode("utf-8")) for t in title_tokens]
                query = "%20".join(tokens)
            try:
                result = requests.get(
                    f"{ComicVine.BASE_URL}{query}{ComicVine.QUERY_PARAMS}",
                    headers=ComicVine.HEADERS,
                )
                result.raise_for_status()
            except Exception as e:
                log.warning(e)
                return None
            for result in result.json()["results"]:
                match = self._parse_search_result(
                    result=result, generic_cover=generic_cover, locale=locale
                )
                val.append(match)
        return val

    def _parse_search_result(
        self, result: Dict, generic_cover: str, locale: str
    ) -> MetaRecord:
        series = result["volume"].get("name", "")
        series_index = result.get("issue_number", 0)
        issue_name = result.get("name", "")
        match = MetaRecord(
            id=result["id"],
            title=f"{series}#{series_index} - {issue_name}",
            authors=result.get("authors", []),
            url=result.get("site_detail_url", ""),
            source=MetaSourceInfo(
                id=self.__id__,
                description=ComicVine.DESCRIPTION,
                link=ComicVine.META_URL,
            ),
            series=series,
        )
        match.cover = result["image"].get("original_url", generic_cover)
        match.description = result.get("description", "")
        match.publishedDate = result.get("store_date", result.get("date_added"))
        match.series_index = series_index
        match.tags = ["Comics", series]
        match.identifiers = {"comicvine": match.id}
        return match
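The token-encoding step in `search` above (percent-encode each title token, then join with `%20`) can be sketched standalone. Here `query.split()` stands in for the real `get_title_tokens` helper, and `KEY` is a placeholder, not a real API key:

```python
from urllib.parse import quote

def build_search_url(base_url: str, query: str, query_params: str) -> str:
    # Tokenize the title, percent-encode each token (so ':' becomes '%3A'),
    # and join the tokens with '%20' as search() does.
    tokens = [quote(t.encode("utf-8")) for t in query.split()]
    return f"{base_url}{'%20'.join(tokens)}{query_params}"

url = build_search_url(
    "https://comicvine.gamespot.com/api/search?api_key=KEY&resources=issue&query=",
    "Batman: Year One",
    "&sort=name:desc&format=json",
)
```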
@ -0,0 +1,259 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2022 xlivevil
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import re
from concurrent import futures
from typing import List, Optional

import requests
from html2text import HTML2Text
from lxml import etree

from cps import logger
from cps.services.Metadata import Metadata, MetaRecord, MetaSourceInfo

log = logger.create()


def html2text(html: str) -> str:
    h2t = HTML2Text()
    h2t.body_width = 0
    h2t.single_line_break = True
    h2t.emphasis_mark = "*"
    return h2t.handle(html)


class Douban(Metadata):
    __name__ = "豆瓣"  # Douban
    __id__ = "douban"
    DESCRIPTION = "豆瓣"  # Douban
    META_URL = "https://book.douban.com/"
    SEARCH_JSON_URL = "https://www.douban.com/j/search"
    SEARCH_URL = "https://www.douban.com/search"

    ID_PATTERN = re.compile(r"sid: (?P<id>\d+),")
    AUTHORS_PATTERN = re.compile(r"作者|译者")  # author | translator
    PUBLISHER_PATTERN = re.compile(r"出版社")  # publisher
    SUBTITLE_PATTERN = re.compile(r"副标题")  # subtitle
    PUBLISHED_DATE_PATTERN = re.compile(r"出版年")  # publication year
    SERIES_PATTERN = re.compile(r"丛书")  # book series
    IDENTIFIERS_PATTERN = re.compile(r"ISBN|统一书号")  # ISBN | unified book number
    CRITERIA_PATTERN = re.compile("criteria = '(.+)'")

    TITLE_XPATH = "//span[@property='v:itemreviewed']"
    COVER_XPATH = "//a[@class='nbg']"
    INFO_XPATH = "//*[@id='info']//span[@class='pl']"
    TAGS_XPATH = "//a[contains(@class, 'tag')]"
    DESCRIPTION_XPATH = "//div[@id='link-report']//div[@class='intro']"
    RATING_XPATH = "//div[@class='rating_self clearfix']/strong"

    session = requests.Session()
    session.headers = {
        'user-agent':
            'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36 Edg/98.0.1108.56',
    }

    def search(self,
               query: str,
               generic_cover: str = "",
               locale: str = "en") -> List[MetaRecord]:
        val = []
        if self.active:
            log.debug(f"start searching {query} on douban")
            if title_tokens := list(
                    self.get_title_tokens(query, strip_joiners=False)):
                query = "+".join(title_tokens)

            book_id_list = self._get_book_id_list_from_html(query)

            if not book_id_list:
                log.debug("No search results in Douban")
                return []

            with futures.ThreadPoolExecutor(
                    max_workers=5, thread_name_prefix='douban') as executor:

                fut = [
                    executor.submit(self._parse_single_book, book_id,
                                    generic_cover) for book_id in book_id_list
                ]

                val = [
                    future.result() for future in futures.as_completed(fut)
                    if future.result()
                ]

        return val

    def _get_book_id_list_from_html(self, query: str) -> List[str]:
        try:
            r = self.session.get(self.SEARCH_URL,
                                 params={
                                     "cat": 1001,
                                     "q": query
                                 })
            r.raise_for_status()

        except Exception as e:
            log.warning(e)
            return []

        html = etree.HTML(r.content.decode("utf8"))
        result_list = html.xpath(self.COVER_XPATH)

        return [
            self.ID_PATTERN.search(item.get("onclick")).group("id")
            for item in result_list[:10]
            if self.ID_PATTERN.search(item.get("onclick"))
        ]

    def _get_book_id_list_from_json(self, query: str) -> List[str]:
        try:
            r = self.session.get(self.SEARCH_JSON_URL,
                                 params={
                                     "cat": 1001,
                                     "q": query
                                 })
            r.raise_for_status()

        except Exception as e:
            log.warning(e)
            return []

        results = r.json()
        if results["total"] == 0:
            return []

        return [
            self.ID_PATTERN.search(item).group("id")
            for item in results["items"][:10] if self.ID_PATTERN.search(item)
        ]

    def _parse_single_book(self,
                           id: str,
                           generic_cover: str = "") -> Optional[MetaRecord]:
        url = f"https://book.douban.com/subject/{id}/"
        log.debug(f"start parsing {url}")

        try:
            r = self.session.get(url)
            r.raise_for_status()
        except Exception as e:
            log.warning(e)
            return None

        match = MetaRecord(
            id=id,
            title="",
            authors=[],
            url=url,
            source=MetaSourceInfo(
                id=self.__id__,
                description=self.DESCRIPTION,
                link=self.META_URL,
            ),
        )

        decode_content = r.content.decode("utf8")
        html = etree.HTML(decode_content)

        match.title = html.xpath(self.TITLE_XPATH)[0].text
        match.cover = html.xpath(
            self.COVER_XPATH)[0].attrib["href"] or generic_cover
        try:
            rating_num = float(html.xpath(self.RATING_XPATH)[0].text.strip())
        except Exception:
            rating_num = 0
        # map Douban's 10-point scale to 5 stars, rounding up
        match.rating = int(-1 * rating_num // 2 * -1) if rating_num else 0

        tag_elements = html.xpath(self.TAGS_XPATH)
        if len(tag_elements):
            match.tags = [tag_element.text for tag_element in tag_elements]
        else:
            match.tags = self._get_tags(decode_content)

        description_element = html.xpath(self.DESCRIPTION_XPATH)
        if len(description_element):
            match.description = html2text(
                etree.tostring(description_element[-1]).decode("utf8"))

        info = html.xpath(self.INFO_XPATH)

        for element in info:
            text = element.text
            if self.AUTHORS_PATTERN.search(text):
                next_element = element.getnext()
                while next_element is not None and next_element.tag != "br":
                    match.authors.append(next_element.text)
                    next_element = next_element.getnext()
            elif self.PUBLISHER_PATTERN.search(text):
                if publisher := element.tail.strip():
                    match.publisher = publisher
                else:
                    match.publisher = element.getnext().text
            elif self.SUBTITLE_PATTERN.search(text):
                match.title = f'{match.title}:{element.tail.strip()}'
            elif self.PUBLISHED_DATE_PATTERN.search(text):
                match.publishedDate = self._clean_date(element.tail.strip())
            elif self.SERIES_PATTERN.search(text):
                match.series = element.getnext().text
            elif i_type := self.IDENTIFIERS_PATTERN.search(text):
                match.identifiers[i_type.group()] = element.tail.strip()

        return match

    def _clean_date(self, date: str) -> str:
        """
        Clean up the date string to be in the format YYYY-MM-DD

        Examples of possible patterns:
        '2014-7-16', '1988年4月', '1995-04', '2021-8', '2020-12-1', '1996年',
        '1972', '2004/11/01', '1959年3月北京第1版第1印'
        """
        year = date[:4]
        month = "01"
        day = "01"

        if len(date) > 5:
            digit = []
            ls = []
            for i in range(5, len(date)):
                if date[i].isdigit():
                    digit.append(date[i])
                elif digit:
                    ls.append("".join(digit) if len(digit) ==
                              2 else f"0{digit[0]}")
                    digit = []
            if digit:
                ls.append("".join(digit) if len(digit) ==
                          2 else f"0{digit[0]}")

            if ls:  # guard against dates with no digits after the year
                month = ls[0]
            if len(ls) > 1:
                day = ls[1]

        return f"{year}-{month}-{day}"

    def _get_tags(self, text: str) -> List[str]:
        tags = []
        if criteria := self.CRITERIA_PATTERN.search(text):
            tags.extend(
                item.replace('7:', '') for item in criteria.group().split('|')
                if item.startswith('7:'))

        return tags
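The digit-grouping logic of `_clean_date` above can be exercised as a standalone sketch. This version mirrors the original scan over `date[5:]` but adds a guard for dates whose tail contains no digits at all, where the original would index an empty list:

```python
def clean_date(date: str) -> str:
    # Normalize mixed date strings ('1988年4月', '2004/11/01', '1972', ...)
    # to YYYY-MM-DD, defaulting a missing month or day to '01'.
    year, month, day = date[:4], "01", "01"
    if len(date) > 5:
        digits, groups = [], []
        # collect runs of digits after the year, zero-padding single digits
        for ch in date[5:]:
            if ch.isdigit():
                digits.append(ch)
            elif digits:
                groups.append("".join(digits) if len(digits) == 2 else f"0{digits[0]}")
                digits = []
        if digits:
            groups.append("".join(digits) if len(digits) == 2 else f"0{digits[0]}")
        if groups:  # guard the original lacks: tail may contain no digits
            month = groups[0]
        if len(groups) > 1:
            day = groups[1]
    return f"{year}-{month}-{day}"
```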
@ -0,0 +1,129 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2021 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

# Google Books api document: https://developers.google.com/books/docs/v1/using
from typing import Dict, List, Optional
from urllib.parse import quote
from datetime import datetime

import requests

from cps import logger
from cps.isoLanguages import get_lang3, get_language_name
from cps.services.Metadata import MetaRecord, MetaSourceInfo, Metadata

log = logger.create()


class Google(Metadata):
    __name__ = "Google"
    __id__ = "google"
    DESCRIPTION = "Google Books"
    META_URL = "https://books.google.com/"
    BOOK_URL = "https://books.google.com/books?id="
    SEARCH_URL = "https://www.googleapis.com/books/v1/volumes?q="
    ISBN_TYPE = "ISBN_13"

    def search(
        self, query: str, generic_cover: str = "", locale: str = "en"
    ) -> Optional[List[MetaRecord]]:
        val = list()
        if self.active:
            title_tokens = list(self.get_title_tokens(query, strip_joiners=False))
            if title_tokens:
                tokens = [quote(t.encode("utf-8")) for t in title_tokens]
                query = "+".join(tokens)
            try:
                results = requests.get(Google.SEARCH_URL + query)
                results.raise_for_status()
            except Exception as e:
                log.warning(e)
                return None
            for result in results.json().get("items", []):
                val.append(
                    self._parse_search_result(
                        result=result, generic_cover=generic_cover, locale=locale
                    )
                )
        return val

    def _parse_search_result(
        self, result: Dict, generic_cover: str, locale: str
    ) -> MetaRecord:
        match = MetaRecord(
            id=result["id"],
            title=result["volumeInfo"]["title"],
            authors=result["volumeInfo"].get("authors", []),
            url=Google.BOOK_URL + result["id"],
            source=MetaSourceInfo(
                id=self.__id__,
                description=Google.DESCRIPTION,
                link=Google.META_URL,
            ),
        )

        match.cover = self._parse_cover(result=result, generic_cover=generic_cover)
        match.description = result["volumeInfo"].get("description", "")
        match.languages = self._parse_languages(result=result, locale=locale)
        match.publisher = result["volumeInfo"].get("publisher", "")
        try:
            datetime.strptime(result["volumeInfo"].get("publishedDate", ""), "%Y-%m-%d")
            match.publishedDate = result["volumeInfo"].get("publishedDate", "")
        except ValueError:
            match.publishedDate = ""
        match.rating = result["volumeInfo"].get("averageRating", 0)
        match.series, match.series_index = "", 1
        match.tags = result["volumeInfo"].get("categories", [])

        match.identifiers = {"google": match.id}
        match = self._parse_isbn(result=result, match=match)
        return match

    @staticmethod
    def _parse_isbn(result: Dict, match: MetaRecord) -> MetaRecord:
        identifiers = result["volumeInfo"].get("industryIdentifiers", [])
        for identifier in identifiers:
            if identifier.get("type") == Google.ISBN_TYPE:
                match.identifiers["isbn"] = identifier.get("identifier")
                break
        return match

    @staticmethod
    def _parse_cover(result: Dict, generic_cover: str) -> str:
        if result["volumeInfo"].get("imageLinks"):
            cover_url = result["volumeInfo"]["imageLinks"]["thumbnail"]

            # strip curl in cover
            cover_url = cover_url.replace("&edge=curl", "")

            # request 800x900 cover image (higher resolution)
            cover_url += "&fife=w800-h900"

            return cover_url.replace("http://", "https://")
        return generic_cover

    @staticmethod
    def _parse_languages(result: Dict, locale: str) -> List[str]:
        language_iso2 = result["volumeInfo"].get("language", "")
        languages = (
            [get_language_name(locale, get_lang3(language_iso2))]
            if language_iso2
            else []
        )
        return languages
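The `publishedDate` handling in `_parse_search_result` above uses `datetime.strptime` purely as a validator: Google Books may return partial dates such as `2004` or `2004-11`, and only full `YYYY-MM-DD` values are kept. The idiom in isolation:

```python
from datetime import datetime

def full_date_or_empty(published: str) -> str:
    # Keep the value only when it parses as a complete YYYY-MM-DD date;
    # partial dates like '2004' or '2004-11' raise ValueError and are dropped.
    try:
        datetime.strptime(published, "%Y-%m-%d")
        return published
    except ValueError:
        return ""
```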
@ -0,0 +1,357 @@
# -*- coding: utf-8 -*-
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2021 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import datetime
import json
import re
from multiprocessing.pool import ThreadPool
from typing import List, Optional, Tuple, Union
from urllib.parse import quote

import requests
from dateutil import parser
from html2text import HTML2Text
from lxml.html import HtmlElement, fromstring, tostring
from markdown2 import Markdown

from cps import logger
from cps.isoLanguages import get_language_name
from cps.services.Metadata import MetaRecord, MetaSourceInfo, Metadata

log = logger.create()

SYMBOLS_TO_TRANSLATE = (
    "öÖüÜóÓőŐúÚéÉáÁűŰíÍąĄćĆęĘłŁńŃóÓśŚźŹżŻ",
    "oOuUoOoOuUeEaAuUiIaAcCeElLnNoOsSzZzZ",
)
SYMBOL_TRANSLATION_MAP = dict(
    [(ord(a), ord(b)) for (a, b) in zip(*SYMBOLS_TO_TRANSLATE)]
)


def get_int_or_float(value: str) -> Union[int, float]:
    number_as_float = float(value)
    number_as_int = int(number_as_float)
    return number_as_int if number_as_float == number_as_int else number_as_float


def strip_accents(s: Optional[str]) -> Optional[str]:
    return s.translate(SYMBOL_TRANSLATION_MAP) if s is not None else s


def sanitize_comments_html(html: str) -> str:
    text = html2text(html)
    md = Markdown()
    html = md.convert(text)
    return html


def html2text(html: str) -> str:
    # replace <u> tags with <span> as <u> becomes emphasis in html2text
    if isinstance(html, bytes):
        html = html.decode("utf-8")
    html = re.sub(
        r"<\s*(?P<solidus>/?)\s*[uU]\b(?P<rest>[^>]*)>",
        r"<\g<solidus>span\g<rest>>",
        html,
    )
    h2t = HTML2Text()
    h2t.body_width = 0
    h2t.single_line_break = True
    h2t.emphasis_mark = "*"
    return h2t.handle(html)


class LubimyCzytac(Metadata):
    __name__ = "LubimyCzytac.pl"
    __id__ = "lubimyczytac"

    BASE_URL = "https://lubimyczytac.pl"

    BOOK_SEARCH_RESULT_XPATH = (
        "*//div[@class='listSearch']//div[@class='authorAllBooks__single']"
    )
    SINGLE_BOOK_RESULT_XPATH = ".//div[contains(@class,'authorAllBooks__singleText')]"
    TITLE_PATH = "/div/a[contains(@class,'authorAllBooks__singleTextTitle')]"
    TITLE_TEXT_PATH = f"{TITLE_PATH}//text()"
    URL_PATH = f"{TITLE_PATH}/@href"
    AUTHORS_PATH = "/div/a[contains(@href,'autor')]//text()"

    SIBLINGS = "/following-sibling::dd"

    CONTAINER = "//section[@class='container book']"
    PUBLISHER = f"{CONTAINER}//dt[contains(text(),'Wydawnictwo:')]{SIBLINGS}/a/text()"
    LANGUAGES = f"{CONTAINER}//dt[contains(text(),'Język:')]{SIBLINGS}/text()"
    DESCRIPTION = f"{CONTAINER}//div[@class='collapse-content']"
    SERIES = f"{CONTAINER}//span/a[contains(@href,'/cykl/')]/text()"
    TRANSLATOR = f"{CONTAINER}//dt[contains(text(),'Tłumacz:')]{SIBLINGS}/a/text()"

    DETAILS = "//div[@id='book-details']"
    PUBLISH_DATE = "//dt[contains(@title,'Data pierwszego wydania"
    FIRST_PUBLISH_DATE = f"{DETAILS}{PUBLISH_DATE} oryginalnego')]{SIBLINGS}[1]/text()"
    FIRST_PUBLISH_DATE_PL = f"{DETAILS}{PUBLISH_DATE} polskiego')]{SIBLINGS}[1]/text()"
    TAGS = "//a[contains(@href,'/ksiazki/t/')]/text()"  # "//nav[@aria-label='breadcrumbs']//a[contains(@href,'/ksiazki/k/')]/span/text()"

    RATING = "//meta[@property='books:rating:value']/@content"
    COVER = "//meta[@property='og:image']/@content"
    ISBN = "//meta[@property='books:isbn']/@content"
    META_TITLE = "//meta[@property='og:description']/@content"

    SUMMARY = "//script[@type='application/ld+json']//text()"

    def search(
        self, query: str, generic_cover: str = "", locale: str = "en"
    ) -> Optional[List[MetaRecord]]:
        if self.active:
            try:
                result = requests.get(self._prepare_query(title=query))
                result.raise_for_status()
            except Exception as e:
                log.warning(e)
                return None
            root = fromstring(result.text)
            lc_parser = LubimyCzytacParser(root=root, metadata=self)
            matches = lc_parser.parse_search_results()
            if matches:
                with ThreadPool(processes=10) as pool:
                    final_matches = pool.starmap(
                        lc_parser.parse_single_book,
                        [(match, generic_cover, locale) for match in matches],
                    )
                return final_matches
            return matches

    def _prepare_query(self, title: str) -> str:
        query = ""
        characters_to_remove = r"\?()\/"
        pattern = "[" + characters_to_remove + "]"
        title = re.sub(pattern, "", title)
        title = title.replace("_", " ")
        if '"' in title or ",," in title:
            title = title.split('"')[0].split(",,")[0]

        if "/" in title:
            title_tokens = [
                token for token in title.lower().split(" ") if len(token) > 1
            ]
        else:
            title_tokens = list(self.get_title_tokens(title, strip_joiners=False))
        if title_tokens:
            tokens = [quote(t.encode("utf-8")) for t in title_tokens]
            query = query + "%20".join(tokens)
        if not query:
            return ""
        return f"{LubimyCzytac.BASE_URL}/szukaj/ksiazki?phrase={query}"


class LubimyCzytacParser:
    # Polish templates: "The book has {0} pages.", "Translator: {0}",
    # "Date of first publication: {0}", "Date of first publication in Poland: {0}"
    PAGES_TEMPLATE = "<p id='strony'>Książka ma {0} stron(y).</p>"
    TRANSLATOR_TEMPLATE = "<p id='translator'>Tłumacz: {0}</p>"
    PUBLISH_DATE_TEMPLATE = "<p id='pierwsze_wydanie'>Data pierwszego wydania: {0}</p>"
    PUBLISH_DATE_PL_TEMPLATE = (
        "<p id='pierwsze_wydanie'>Data pierwszego wydania w Polsce: {0}</p>"
    )

    def __init__(self, root: HtmlElement, metadata: Metadata) -> None:
        self.root = root
        self.metadata = metadata

    def parse_search_results(self) -> List[MetaRecord]:
        matches = []
        results = self.root.xpath(LubimyCzytac.BOOK_SEARCH_RESULT_XPATH)
        for result in results:
            title = self._parse_xpath_node(
                root=result,
                xpath=f"{LubimyCzytac.SINGLE_BOOK_RESULT_XPATH}"
                f"{LubimyCzytac.TITLE_TEXT_PATH}",
            )

            book_url = self._parse_xpath_node(
                root=result,
                xpath=f"{LubimyCzytac.SINGLE_BOOK_RESULT_XPATH}"
                f"{LubimyCzytac.URL_PATH}",
            )
            authors = self._parse_xpath_node(
                root=result,
                xpath=f"{LubimyCzytac.SINGLE_BOOK_RESULT_XPATH}"
                f"{LubimyCzytac.AUTHORS_PATH}",
                take_first=False,
            )
            if not all([title, book_url, authors]):
                continue
            matches.append(
                MetaRecord(
                    id=book_url.replace("/ksiazka/", "").split("/")[0],
                    title=title,
                    authors=[strip_accents(author) for author in authors],
                    url=LubimyCzytac.BASE_URL + book_url,
                    source=MetaSourceInfo(
                        id=self.metadata.__id__,
                        description=self.metadata.__name__,
                        link=LubimyCzytac.BASE_URL,
                    ),
                )
            )
        return matches

    def parse_single_book(
        self, match: MetaRecord, generic_cover: str, locale: str
    ) -> MetaRecord:
        try:
            response = requests.get(match.url)
            response.raise_for_status()
        except Exception as e:
            log.warning(e)
            return None
        self.root = fromstring(response.text)
        match.cover = self._parse_cover(generic_cover=generic_cover)
        match.description = self._parse_description()
        match.languages = self._parse_languages(locale=locale)
        match.publisher = self._parse_publisher()
        match.publishedDate = self._parse_from_summary(attribute_name="datePublished")
        match.rating = self._parse_rating()
        match.series, match.series_index = self._parse_series()
        match.tags = self._parse_tags()
        match.identifiers = {
            "isbn": self._parse_isbn(),
            "lubimyczytac": match.id,
        }
        return match

    def _parse_xpath_node(
        self,
        xpath: str,
        root: HtmlElement = None,
        take_first: bool = True,
        strip_element: bool = True,
    ) -> Optional[Union[str, List[str]]]:
        root = root if root is not None else self.root
        node = root.xpath(xpath)
        if not node:
            return None
        return (
            (node[0].strip() if strip_element else node[0])
            if take_first
            else [x.strip() for x in node]
        )

    def _parse_cover(self, generic_cover) -> Optional[str]:
        return (
            self._parse_xpath_node(xpath=LubimyCzytac.COVER, take_first=True)
            or generic_cover
        )

    def _parse_publisher(self) -> Optional[str]:
        return self._parse_xpath_node(xpath=LubimyCzytac.PUBLISHER, take_first=True)

    def _parse_languages(self, locale: str) -> List[str]:
        languages = list()
        lang = self._parse_xpath_node(xpath=LubimyCzytac.LANGUAGES, take_first=True)
        if lang:
            if "polski" in lang:
                languages.append("pol")
            if "angielski" in lang:
                languages.append("eng")
        return [get_language_name(locale, language) for language in languages]

    def _parse_series(self) -> Tuple[Optional[str], Optional[Union[float, int]]]:
        series_index = 0
        series = self._parse_xpath_node(xpath=LubimyCzytac.SERIES, take_first=True)
        if series:
            if "tom " in series:
|
||||||
|
series_name, series_info = series.split(" (tom ", 1)
|
||||||
|
series_info = series_info.replace(" ", "").replace(")", "")
|
||||||
|
# Check if book is not a bundle, i.e. chapter 1-3
|
||||||
|
if "-" in series_info:
|
||||||
|
series_info = series_info.split("-", 1)[0]
|
||||||
|
if series_info.replace(".", "").isdigit() is True:
|
||||||
|
series_index = get_int_or_float(series_info)
|
||||||
|
return series_name, series_index
|
||||||
|
return None, None
|
||||||
|
|
||||||
|
def _parse_tags(self) -> List[str]:
|
||||||
|
tags = self._parse_xpath_node(xpath=LubimyCzytac.TAGS, take_first=False)
|
||||||
|
return [
|
||||||
|
strip_accents(w.replace(", itd.", " itd."))
|
||||||
|
for w in tags
|
||||||
|
if isinstance(w, str)
|
||||||
|
]
|
||||||
|
|
||||||
|
def _parse_from_summary(self, attribute_name: str) -> Optional[str]:
|
||||||
|
value = None
|
||||||
|
summary_text = self._parse_xpath_node(xpath=LubimyCzytac.SUMMARY)
|
||||||
|
if summary_text:
|
||||||
|
data = json.loads(summary_text)
|
||||||
|
value = data.get(attribute_name)
|
||||||
|
return value.strip() if value is not None else value
|
||||||
|
|
||||||
|
def _parse_rating(self) -> Optional[str]:
|
||||||
|
rating = self._parse_xpath_node(xpath=LubimyCzytac.RATING)
|
||||||
|
return round(float(rating.replace(",", ".")) / 2) if rating else rating
|
||||||
|
|
||||||
|
def _parse_date(self, xpath="first_publish") -> Optional[datetime.datetime]:
|
||||||
|
options = {
|
||||||
|
"first_publish": LubimyCzytac.FIRST_PUBLISH_DATE,
|
||||||
|
"first_publish_pl": LubimyCzytac.FIRST_PUBLISH_DATE_PL,
|
||||||
|
}
|
||||||
|
date = self._parse_xpath_node(xpath=options.get(xpath))
|
||||||
|
return parser.parse(date) if date else None
|
||||||
|
|
||||||
|
def _parse_isbn(self) -> Optional[str]:
|
||||||
|
return self._parse_xpath_node(xpath=LubimyCzytac.ISBN)
|
||||||
|
|
||||||
|
def _parse_description(self) -> str:
|
||||||
|
description = ""
|
||||||
|
description_node = self._parse_xpath_node(
|
||||||
|
xpath=LubimyCzytac.DESCRIPTION, strip_element=False
|
||||||
|
)
|
||||||
|
if description_node is not None:
|
||||||
|
for source in self.root.xpath('//p[@class="source"]'):
|
||||||
|
source.getparent().remove(source)
|
||||||
|
description = tostring(description_node, method="html")
|
||||||
|
description = sanitize_comments_html(description)
|
||||||
|
|
||||||
|
else:
|
||||||
|
description_node = self._parse_xpath_node(xpath=LubimyCzytac.META_TITLE)
|
||||||
|
if description_node is not None:
|
||||||
|
description = description_node
|
||||||
|
description = sanitize_comments_html(description)
|
||||||
|
description = self._add_extra_info_to_description(description=description)
|
||||||
|
return description
|
||||||
|
|
||||||
|
def _add_extra_info_to_description(self, description: str) -> str:
|
||||||
|
pages = self._parse_from_summary(attribute_name="numberOfPages")
|
||||||
|
if pages:
|
||||||
|
description += LubimyCzytacParser.PAGES_TEMPLATE.format(pages)
|
||||||
|
|
||||||
|
first_publish_date = self._parse_date()
|
||||||
|
if first_publish_date:
|
||||||
|
description += LubimyCzytacParser.PUBLISH_DATE_TEMPLATE.format(
|
||||||
|
first_publish_date.strftime("%d.%m.%Y")
|
||||||
|
)
|
||||||
|
|
||||||
|
first_publish_date_pl = self._parse_date(xpath="first_publish_pl")
|
||||||
|
if first_publish_date_pl:
|
||||||
|
description += LubimyCzytacParser.PUBLISH_DATE_PL_TEMPLATE.format(
|
||||||
|
first_publish_date_pl.strftime("%d.%m.%Y")
|
||||||
|
)
|
||||||
|
translator = self._parse_xpath_node(xpath=LubimyCzytac.TRANSLATOR)
|
||||||
|
if translator:
|
||||||
|
description += LubimyCzytacParser.TRANSLATOR_TEMPLATE.format(translator)
|
||||||
|
|
||||||
|
|
||||||
|
return description
|
|
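For context, the `"(tom N)"` suffix handling in `_parse_series` can be exercised standalone. This is a sketch, not the shipped code: `get_int_or_float` is re-implemented minimally here because the original helper lives elsewhere in calibre-web, and `parse_series` is a hypothetical free-function version of the method without the XPath lookup.

```python
from typing import Optional, Tuple, Union


def get_int_or_float(value: str) -> Union[int, float]:
    # Minimal stand-in for calibre-web's helper: int when the string
    # has no fractional part, float otherwise.
    number = float(value)
    return int(number) if number.is_integer() else number


def parse_series(series: str) -> Tuple[Optional[str], Optional[Union[float, int]]]:
    # Mirrors _parse_series above: split off the "(tom N)" volume info,
    # collapse bundle ranges like "1-3" to their first volume, and
    # convert the remaining number to a series index.
    series_index: Union[float, int] = 0
    if "tom " in series:
        series_name, series_info = series.split(" (tom ", 1)
        series_info = series_info.replace(" ", "").replace(")", "")
        if "-" in series_info:  # bundle, e.g. volumes 1-3: keep the first
            series_info = series_info.split("-", 1)[0]
        if series_info.replace(".", "").isdigit():
            series_index = get_int_or_float(series_info)
        return series_name, series_index
    return None, None
```

A title without a `tom` marker yields `(None, None)`, so callers can assign both `series` and `series_index` from one tuple unpack.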
@@ -0,0 +1,83 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2021 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

import itertools
from typing import Dict, List, Optional
from urllib.parse import quote, unquote

try:
    from fake_useragent.errors import FakeUserAgentError
except ImportError:
    FakeUserAgentError = BaseException
try:
    from scholarly import scholarly
except FakeUserAgentError:
    raise ImportError("No module named 'scholarly'")

from cps import logger
from cps.services.Metadata import MetaRecord, MetaSourceInfo, Metadata

log = logger.create()


class scholar(Metadata):
    __name__ = "Google Scholar"
    __id__ = "googlescholar"
    META_URL = "https://scholar.google.com/"

    def search(
        self, query: str, generic_cover: str = "", locale: str = "en"
    ) -> Optional[List[MetaRecord]]:
        val = list()
        if self.active:
            title_tokens = list(self.get_title_tokens(query, strip_joiners=False))
            if title_tokens:
                tokens = [quote(t.encode("utf-8")) for t in title_tokens]
                query = " ".join(tokens)
            try:
                scholarly.set_timeout(20)
                scholarly.set_retries(2)
                scholar_gen = itertools.islice(scholarly.search_pubs(query), 10)
            except Exception as e:
                log.warning(e)
                return list()
            for result in scholar_gen:
                match = self._parse_search_result(
                    result=result, generic_cover="", locale=locale
                )
                val.append(match)
        return val

    def _parse_search_result(
        self, result: Dict, generic_cover: str, locale: str
    ) -> MetaRecord:
        match = MetaRecord(
            id=result.get("pub_url", result.get("eprint_url", "")),
            title=result["bib"].get("title"),
            authors=result["bib"].get("author", []),
            url=result.get("pub_url", result.get("eprint_url", "")),
            source=MetaSourceInfo(
                id=self.__id__, description=self.__name__, link=scholar.META_URL
            ),
        )

        match.cover = result.get("image", {}).get("original_url", generic_cover)
        match.description = unquote(result["bib"].get("abstract", ""))
        match.publisher = result["bib"].get("venue", "")
        match.publishedDate = result["bib"].get("pub_year") + "-01-01"
        match.identifiers = {"scholar": match.id}
        return match
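The query preparation in `search()` above percent-encodes each title token before rejoining them with spaces. A standalone sketch of that step (the function name `build_scholar_query` is hypothetical; the body matches the two lines in `search()`):

```python
from urllib.parse import quote


def build_scholar_query(title_tokens):
    # Percent-encode each token as UTF-8 (as search() does), then join
    # with literal spaces to form the query passed to scholarly.
    tokens = [quote(t.encode("utf-8")) for t in title_tokens]
    return " ".join(tokens)
```

Note that ASCII tokens pass through unchanged, while non-ASCII characters become `%XX` escapes; the joining spaces themselves are not encoded.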
@@ -0,0 +1,156 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 jim3ma
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>

from flask import session

try:
    from flask_dance.consumer.storage.sqla import SQLAlchemyStorage as SQLAlchemyBackend
    from flask_dance.consumer.storage.sqla import first, _get_real_user
    from sqlalchemy.orm.exc import NoResultFound
    backend_resultcode = True  # prevent storing values with this resultcode
except ImportError:
    pass


class OAuthBackend(SQLAlchemyBackend):
    """
    Stores and retrieves OAuth tokens using a relational database through
    the `SQLAlchemy`_ ORM.

    .. _SQLAlchemy: https://www.sqlalchemy.org/
    """
    def __init__(self, model, session, provider_id,
                 user=None, user_id=None, user_required=None, anon_user=None,
                 cache=None):
        self.provider_id = provider_id
        super(OAuthBackend, self).__init__(model, session, user, user_id, user_required, anon_user, cache)

    def get(self, blueprint, user=None, user_id=None):
        if self.provider_id + '_oauth_token' in session and session[self.provider_id + '_oauth_token'] != '':
            return session[self.provider_id + '_oauth_token']
        # check cache
        cache_key = self.make_cache_key(blueprint=blueprint, user=user, user_id=user_id)
        token = self.cache.get(cache_key)
        if token:
            return token

        # if not cached, make database queries
        query = (
            self.session.query(self.model)
            .filter_by(provider=self.provider_id)
        )
        uid = first([user_id, self.user_id, blueprint.config.get("user_id")])
        u = first(_get_real_user(ref, self.anon_user)
                  for ref in (user, self.user, blueprint.config.get("user")))

        use_provider_user_id = False
        if self.provider_id + '_oauth_user_id' in session and session[self.provider_id + '_oauth_user_id'] != '':
            query = query.filter_by(provider_user_id=session[self.provider_id + '_oauth_user_id'])
            use_provider_user_id = True

        if self.user_required and not u and not uid and not use_provider_user_id:
            # raise ValueError("Cannot get OAuth token without an associated user")
            return None
        # check for user ID
        if hasattr(self.model, "user_id") and uid:
            query = query.filter_by(user_id=uid)
        # check for user (relationship property)
        elif hasattr(self.model, "user") and u:
            query = query.filter_by(user=u)
        # if we have the property, but not value, filter by None
        elif hasattr(self.model, "user_id"):
            query = query.filter_by(user_id=None)
        # run query
        try:
            token = query.one().token
        except NoResultFound:
            token = None

        # cache the result
        self.cache.set(cache_key, token)

        return token

    def set(self, blueprint, token, user=None, user_id=None):
        uid = first([user_id, self.user_id, blueprint.config.get("user_id")])
        u = first(_get_real_user(ref, self.anon_user)
                  for ref in (user, self.user, blueprint.config.get("user")))

        if self.user_required and not u and not uid:
            raise ValueError("Cannot set OAuth token without an associated user")

        # if there was an existing model, delete it
        existing_query = (
            self.session.query(self.model)
            .filter_by(provider=self.provider_id)
        )
        # check for user ID
        has_user_id = hasattr(self.model, "user_id")
        if has_user_id and uid:
            existing_query = existing_query.filter_by(user_id=uid)
        # check for user (relationship property)
        has_user = hasattr(self.model, "user")
        if has_user and u:
            existing_query = existing_query.filter_by(user=u)
        # queue up delete query -- won't be run until commit()
        existing_query.delete()
        # create a new model for this token
        kwargs = {
            "provider": self.provider_id,
            "token": token,
        }
        if has_user_id and uid:
            kwargs["user_id"] = uid
        if has_user and u:
            kwargs["user"] = u
        self.session.add(self.model(**kwargs))
        # commit to delete and add simultaneously
        self.session.commit()
        # invalidate cache
        self.cache.delete(self.make_cache_key(
            blueprint=blueprint, user=user, user_id=user_id
        ))

    def delete(self, blueprint, user=None, user_id=None):
        query = (
            self.session.query(self.model)
            .filter_by(provider=self.provider_id)
        )
        uid = first([user_id, self.user_id, blueprint.config.get("user_id")])
        u = first(_get_real_user(ref, self.anon_user)
                  for ref in (user, self.user, blueprint.config.get("user")))

        if self.user_required and not u and not uid:
            raise ValueError("Cannot delete OAuth token without an associated user")

        # check for user ID
        if hasattr(self.model, "user_id") and uid:
            query = query.filter_by(user_id=uid)
        # check for user (relationship property)
        elif hasattr(self.model, "user") and u:
            query = query.filter_by(user=u)
        # if we have the property, but not value, filter by None
        elif hasattr(self.model, "user_id"):
            query = query.filter_by(user_id=None)
        # run query
        query.delete()
        self.session.commit()
        # invalidate cache
        self.cache.delete(self.make_cache_key(
            blueprint=blueprint, user=user, user_id=user_id,
        ))
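`OAuthBackend.get()` above resolves a token through three layers in order: the Flask session, the cache, then the database. A toy model of that precedence with plain dicts standing in for session, cache, and ORM (class and attribute names here are illustrative, not part of calibre-web):

```python
class TokenLookup:
    """Toy model of OAuthBackend.get() lookup order: session first,
    then cache, then database. All three stores are plain dicts."""

    def __init__(self, provider_id, session, cache, database):
        self.provider_id = provider_id
        self.session = session    # maps '<provider>_oauth_token' -> token
        self.cache = cache        # maps cache key -> token
        self.database = database  # maps provider_id -> token

    def get(self, cache_key):
        session_key = self.provider_id + "_oauth_token"
        if self.session.get(session_key):  # 1. session wins outright
            return self.session[session_key]
        token = self.cache.get(cache_key)  # 2. then the cache
        if token:
            return token
        token = self.database.get(self.provider_id)  # 3. finally the DB
        self.cache[cache_key] = token      # cache the result, as get() does
        return token
```

As in the real backend, only a database miss-then-hit populates the cache; a session hit bypasses both lower layers entirely.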
@@ -0,0 +1,367 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 OzzieIsaacs, cervinko, jkrehm, bodybybuddha, ok11,
#                         andy29485, idalin, Kyosfonica, wuqi, Kennyl, lemmsh,
#                         falgh1, grunjol, csitko, ytils, xybydy, trasba, vrabe,
#                         ruben-herold, marblepebble, JackED42, SiphonSquirrel,
#                         apetresc, nanu-c, mutschler
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>

import json
from functools import wraps

from flask import session, request, make_response, abort
from flask import Blueprint, flash, redirect, url_for
from flask_babel import gettext as _
from flask_dance.consumer import oauth_authorized, oauth_error
from flask_dance.contrib.github import make_github_blueprint, github
from flask_dance.contrib.google import make_google_blueprint, google
from oauthlib.oauth2 import TokenExpiredError, InvalidGrantError
from flask_login import login_user, current_user, login_required
from sqlalchemy.orm.exc import NoResultFound

from . import constants, logger, config, app, ub

try:
    from .oauth import OAuthBackend, backend_resultcode
except NameError:
    pass


oauth_check = {}
oauthblueprints = []
oauth = Blueprint('oauth', __name__)
log = logger.create()


def oauth_required(f):
    @wraps(f)
    def inner(*args, **kwargs):
        if config.config_login_type == constants.LOGIN_OAUTH:
            return f(*args, **kwargs)
        if request.headers.get('X-Requested-With') == 'XMLHttpRequest':
            data = {'status': 'error', 'message': 'Not Found'}
            response = make_response(json.dumps(data, ensure_ascii=False))
            response.headers["Content-Type"] = "application/json; charset=utf-8"
            return response, 404
        abort(404)

    return inner


def register_oauth_blueprint(cid, show_name):
    oauth_check[cid] = show_name


def register_user_with_oauth(user=None):
    all_oauth = {}
    for oauth_key in oauth_check.keys():
        if str(oauth_key) + '_oauth_user_id' in session and session[str(oauth_key) + '_oauth_user_id'] != '':
            all_oauth[oauth_key] = oauth_check[oauth_key]
    if len(all_oauth.keys()) == 0:
        return
    if user is None:
        flash(_("Register with %(provider)s", provider=", ".join(list(all_oauth.values()))), category="success")
    else:
        for oauth_key in all_oauth.keys():
            # Find this OAuth token in the database, or create it
            query = ub.session.query(ub.OAuth).filter_by(
                provider=oauth_key,
                provider_user_id=session[str(oauth_key) + "_oauth_user_id"],
            )
            try:
                oauth_key = query.one()
                oauth_key.user_id = user.id
            except NoResultFound:
                # not found, return error
                return
            ub.session_commit("User {} with OAuth for provider {} registered".format(user.name, oauth_key))


def logout_oauth_user():
    for oauth_key in oauth_check.keys():
        if str(oauth_key) + '_oauth_user_id' in session:
            session.pop(str(oauth_key) + '_oauth_user_id')


def oauth_update_token(provider_id, token, provider_user_id):
    session[provider_id + "_oauth_user_id"] = provider_user_id
    session[provider_id + "_oauth_token"] = token

    # Find this OAuth token in the database, or create it
    query = ub.session.query(ub.OAuth).filter_by(
        provider=provider_id,
        provider_user_id=provider_user_id,
    )
    try:
        oauth_entry = query.one()
        # update token
        oauth_entry.token = token
    except NoResultFound:
        oauth_entry = ub.OAuth(
            provider=provider_id,
            provider_user_id=provider_user_id,
            token=token,
        )
    ub.session.add(oauth_entry)
    ub.session_commit()

    # Disable Flask-Dance's default behavior for saving the OAuth token
    # Value differs depending on flask-dance version
    return backend_resultcode


def bind_oauth_or_register(provider_id, provider_user_id, redirect_url, provider_name):
    query = ub.session.query(ub.OAuth).filter_by(
        provider=provider_id,
        provider_user_id=provider_user_id,
    )
    try:
        oauth_entry = query.first()
        # already bound to a user, just log in
        if oauth_entry.user:
            login_user(oauth_entry.user)
            log.debug("You are now logged in as: '%s'", oauth_entry.user.name)
            flash(_("Success! You are now logged in as: %(nickname)s", nickname=oauth_entry.user.name),
                  category="success")
            return redirect(url_for('web.index'))
        else:
            # bind to current user
            if current_user and current_user.is_authenticated:
                oauth_entry.user = current_user
                try:
                    ub.session.add(oauth_entry)
                    ub.session.commit()
                    flash(_("Link to %(oauth)s Succeeded", oauth=provider_name), category="success")
                    log.info("Link to {} Succeeded".format(provider_name))
                    return redirect(url_for('web.profile'))
                except Exception as ex:
                    log.error_or_exception(ex)
                    ub.session.rollback()
            else:
                flash(_("Login failed, No User Linked With OAuth Account"), category="error")
            log.info('Login failed, No User Linked With OAuth Account')
            return redirect(url_for('web.login'))
            # return redirect(url_for('web.login'))
            # if config.config_public_reg:
            #    return redirect(url_for('web.register'))
            # else:
            #    flash(_("Public registration is not enabled"), category="error")
            #    return redirect(url_for(redirect_url))
    except (NoResultFound, AttributeError):
        return redirect(url_for(redirect_url))


def get_oauth_status():
    status = []
    query = ub.session.query(ub.OAuth).filter_by(
        user_id=current_user.id,
    )
    try:
        oauths = query.all()
        for oauth_entry in oauths:
            status.append(int(oauth_entry.provider))
        return status
    except NoResultFound:
        return None


def unlink_oauth(provider):
    if request.host_url + 'me' != request.referrer:
        pass
    query = ub.session.query(ub.OAuth).filter_by(
        provider=provider,
        user_id=current_user.id,
    )
    try:
        oauth_entry = query.one()
        if current_user and current_user.is_authenticated:
            oauth_entry.user = current_user
            try:
                ub.session.delete(oauth_entry)
                ub.session.commit()
                logout_oauth_user()
                flash(_("Unlink to %(oauth)s Succeeded", oauth=oauth_check[provider]), category="success")
                log.info("Unlink to {} Succeeded".format(oauth_check[provider]))
            except Exception as ex:
                log.error_or_exception(ex)
                ub.session.rollback()
                flash(_("Unlink to %(oauth)s Failed", oauth=oauth_check[provider]), category="error")
    except NoResultFound:
        log.warning("oauth %s for user %d not found", provider, current_user.id)
        flash(_("Not Linked to %(oauth)s", oauth=provider), category="error")
    return redirect(url_for('web.profile'))


def generate_oauth_blueprints():
    if not ub.session.query(ub.OAuthProvider).count():
        for provider in ("github", "google"):
            oauthProvider = ub.OAuthProvider()
            oauthProvider.provider_name = provider
            oauthProvider.active = False
            ub.session.add(oauthProvider)
            ub.session_commit("{} Blueprint Created".format(provider))

    oauth_ids = ub.session.query(ub.OAuthProvider).all()
    ele1 = dict(provider_name='github',
                id=oauth_ids[0].id,
                active=oauth_ids[0].active,
                oauth_client_id=oauth_ids[0].oauth_client_id,
                scope=None,
                oauth_client_secret=oauth_ids[0].oauth_client_secret,
                obtain_link='https://github.com/settings/developers')
    ele2 = dict(provider_name='google',
                id=oauth_ids[1].id,
                active=oauth_ids[1].active,
                scope=["https://www.googleapis.com/auth/userinfo.email"],
                oauth_client_id=oauth_ids[1].oauth_client_id,
                oauth_client_secret=oauth_ids[1].oauth_client_secret,
                obtain_link='https://console.developers.google.com/apis/credentials')
    oauthblueprints.append(ele1)
    oauthblueprints.append(ele2)

    for element in oauthblueprints:
        if element['provider_name'] == 'github':
            blueprint_func = make_github_blueprint
        else:
            blueprint_func = make_google_blueprint
        blueprint = blueprint_func(
            client_id=element['oauth_client_id'],
            client_secret=element['oauth_client_secret'],
            redirect_to="oauth." + element['provider_name'] + "_login",
            scope=element['scope']
        )
        element['blueprint'] = blueprint
        element['blueprint'].backend = OAuthBackend(ub.OAuth, ub.session, str(element['id']),
                                                    user=current_user, user_required=True)
        app.register_blueprint(blueprint, url_prefix="/login")
        if element['active']:
            register_oauth_blueprint(element['id'], element['provider_name'])
    return oauthblueprints


if ub.oauth_support:
    oauthblueprints = generate_oauth_blueprints()

    @oauth_authorized.connect_via(oauthblueprints[0]['blueprint'])
    def github_logged_in(blueprint, token):
        if not token:
            flash(_("Failed to log in with GitHub."), category="error")
            log.error("Failed to log in with GitHub")
            return False

        resp = blueprint.session.get("/user")
        if not resp.ok:
            flash(_("Failed to fetch user info from GitHub."), category="error")
            log.error("Failed to fetch user info from GitHub")
            return False

        github_info = resp.json()
        github_user_id = str(github_info["id"])
        return oauth_update_token(str(oauthblueprints[0]['id']), token, github_user_id)

    @oauth_authorized.connect_via(oauthblueprints[1]['blueprint'])
    def google_logged_in(blueprint, token):
        if not token:
            flash(_("Failed to log in with Google."), category="error")
            log.error("Failed to log in with Google")
            return False

        resp = blueprint.session.get("/oauth2/v2/userinfo")
        if not resp.ok:
            flash(_("Failed to fetch user info from Google."), category="error")
            log.error("Failed to fetch user info from Google")
            return False

        google_info = resp.json()
        google_user_id = str(google_info["id"])
        return oauth_update_token(str(oauthblueprints[1]['id']), token, google_user_id)

    # notify on OAuth provider error
    @oauth_error.connect_via(oauthblueprints[0]['blueprint'])
    def github_error(blueprint, error, error_description=None, error_uri=None):
        msg = (
            "OAuth error from {name}! "
            "error={error} description={description} uri={uri}"
        ).format(
            name=blueprint.name,
            error=error,
            description=error_description,
            uri=error_uri,
        )  # ToDo: Translate
        flash(msg, category="error")

    @oauth_error.connect_via(oauthblueprints[1]['blueprint'])
    def google_error(blueprint, error, error_description=None, error_uri=None):
        msg = (
            "OAuth error from {name}! "
            "error={error} description={description} uri={uri}"
        ).format(
            name=blueprint.name,
            error=error,
            description=error_description,
            uri=error_uri,
        )  # ToDo: Translate
        flash(msg, category="error")


@oauth.route('/link/github')
@oauth_required
def github_login():
    if not github.authorized:
        return redirect(url_for('github.login'))
    try:
        account_info = github.get('/user')
        if account_info.ok:
            account_info_json = account_info.json()
            return bind_oauth_or_register(oauthblueprints[0]['id'], account_info_json['id'], 'github.login', 'github')
        flash(_("GitHub Oauth error, please retry later."), category="error")
        log.error("GitHub Oauth error, please retry later")
    except (InvalidGrantError, TokenExpiredError) as e:
        flash(_("GitHub Oauth error: {}").format(e), category="error")
        log.error(e)
    return redirect(url_for('web.login'))


@oauth.route('/unlink/github', methods=["GET"])
@login_required
def github_login_unlink():
    return unlink_oauth(oauthblueprints[0]['id'])


@oauth.route('/link/google')
@oauth_required
def google_login():
    if not google.authorized:
        return redirect(url_for("google.login"))
    try:
        resp = google.get("/oauth2/v2/userinfo")
        if resp.ok:
            account_info_json = resp.json()
            return bind_oauth_or_register(oauthblueprints[1]['id'], account_info_json['id'], 'google.login', 'google')
        flash(_("Google Oauth error, please retry later."), category="error")
        log.error("Google Oauth error, please retry later")
    except (InvalidGrantError, TokenExpiredError) as e:
        flash(_("Google Oauth error: {}").format(e), category="error")
        log.error(e)
    return redirect(url_for('web.login'))


@oauth.route('/unlink/google', methods=["GET"])
@login_required
def google_login_unlink():
    return unlink_oauth(oauthblueprints[1]['id'])
@@ -0,0 +1,552 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 OzzieIsaacs, cervinko, jkrehm, bodybybuddha, ok11,
#                          andy29485, idalin, Kyosfonica, wuqi, Kennyl, lemmsh,
#                          falgh1, grunjol, csitko, ytils, xybydy, trasba, vrabe,
#                          ruben-herold, marblepebble, JackED42, SiphonSquirrel,
#                          apetresc, nanu-c, mutschler
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import datetime
import json
from urllib.parse import unquote_plus

from flask import Blueprint, request, render_template, make_response, abort, Response, g
from flask_login import current_user
from flask_babel import get_locale
from flask_babel import gettext as _
from sqlalchemy.sql.expression import func, text, or_, and_, true
from sqlalchemy.exc import InvalidRequestError, OperationalError

from . import logger, config, db, calibre_db, ub, isoLanguages, constants
from .usermanagement import requires_basic_auth_if_no_ano
from .helper import get_download_link, get_book_cover
from .pagination import Pagination
from .web import render_read_books


opds = Blueprint('opds', __name__)

log = logger.create()


@opds.route("/opds/")
@opds.route("/opds")
@requires_basic_auth_if_no_ano
def feed_index():
    return render_xml_template('index.xml')


@opds.route("/opds/osd")
@requires_basic_auth_if_no_ano
def feed_osd():
    return render_xml_template('osd.xml', lang='en-EN')


# @opds.route("/opds/search", defaults={'query': ""})
@opds.route("/opds/search/<path:query>")
@requires_basic_auth_if_no_ano
def feed_cc_search(query):
    # Handle strange query from Libera Reader with + instead of spaces
    plus_query = unquote_plus(request.environ['RAW_URI'].split('/opds/search/')[1]).strip()
    return feed_search(plus_query)


@opds.route("/opds/search", methods=["GET"])
@requires_basic_auth_if_no_ano
def feed_normal_search():
    return feed_search(request.args.get("query", "").strip())


@opds.route("/opds/books")
@requires_basic_auth_if_no_ano
def feed_booksindex():
    return render_element_index(db.Books.sort, None, 'opds.feed_letter_books')


@opds.route("/opds/books/letter/<book_id>")
@requires_basic_auth_if_no_ano
def feed_letter_books(book_id):
    off = request.args.get("offset") or 0
    letter = true() if book_id == "00" else func.upper(db.Books.sort).startswith(book_id)
    entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
                                                        db.Books,
                                                        letter,
                                                        [db.Books.sort],
                                                        True, config.config_read_column)

    return render_xml_template('feed.xml', entries=entries, pagination=pagination)


@opds.route("/opds/new")
@requires_basic_auth_if_no_ano
def feed_new():
    if not current_user.check_visibility(constants.SIDEBAR_RECENT):
        abort(404)
    off = request.args.get("offset") or 0
    entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
                                                        db.Books, True, [db.Books.timestamp.desc()],
                                                        True, config.config_read_column)
    return render_xml_template('feed.xml', entries=entries, pagination=pagination)


@opds.route("/opds/discover")
@requires_basic_auth_if_no_ano
def feed_discover():
    if not current_user.check_visibility(constants.SIDEBAR_RANDOM):
        abort(404)
    query = calibre_db.generate_linked_query(config.config_read_column, db.Books)
    entries = query.filter(calibre_db.common_filters()).order_by(func.random()).limit(config.config_books_per_page)
    pagination = Pagination(1, config.config_books_per_page, int(config.config_books_per_page))
    return render_xml_template('feed.xml', entries=entries, pagination=pagination)


@opds.route("/opds/rated")
@requires_basic_auth_if_no_ano
def feed_best_rated():
    if not current_user.check_visibility(constants.SIDEBAR_BEST_RATED):
        abort(404)
    off = request.args.get("offset") or 0
    entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
                                                        db.Books, db.Books.ratings.any(db.Ratings.rating > 9),
                                                        [db.Books.timestamp.desc()],
                                                        True, config.config_read_column)
    return render_xml_template('feed.xml', entries=entries, pagination=pagination)


@opds.route("/opds/hot")
@requires_basic_auth_if_no_ano
def feed_hot():
    if not current_user.check_visibility(constants.SIDEBAR_HOT):
        abort(404)
    off = request.args.get("offset") or 0
    all_books = ub.session.query(ub.Downloads, func.count(ub.Downloads.book_id)).order_by(
        func.count(ub.Downloads.book_id).desc()).group_by(ub.Downloads.book_id)
    hot_books = all_books.offset(off).limit(config.config_books_per_page)
    entries = list()
    for book in hot_books:
        query = calibre_db.generate_linked_query(config.config_read_column, db.Books)
        download_book = query.filter(calibre_db.common_filters()).filter(
            book.Downloads.book_id == db.Books.id).first()
        if download_book:
            entries.append(download_book)
        else:
            ub.delete_download(book.Downloads.book_id)
    num_books = entries.__len__()
    pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1),
                            config.config_books_per_page, num_books)
    return render_xml_template('feed.xml', entries=entries, pagination=pagination)


@opds.route("/opds/author")
@requires_basic_auth_if_no_ano
def feed_authorindex():
    if not current_user.check_visibility(constants.SIDEBAR_AUTHOR):
        abort(404)
    return render_element_index(db.Authors.sort, db.books_authors_link, 'opds.feed_letter_author')


@opds.route("/opds/author/letter/<book_id>")
@requires_basic_auth_if_no_ano
def feed_letter_author(book_id):
    if not current_user.check_visibility(constants.SIDEBAR_AUTHOR):
        abort(404)
    off = request.args.get("offset") or 0
    letter = true() if book_id == "00" else func.upper(db.Authors.sort).startswith(book_id)
    entries = calibre_db.session.query(db.Authors).join(db.books_authors_link).join(db.Books)\
        .filter(calibre_db.common_filters()).filter(letter)\
        .group_by(text('books_authors_link.author'))\
        .order_by(db.Authors.sort)
    pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
                            entries.count())
    entries = entries.limit(config.config_books_per_page).offset(off).all()
    return render_xml_template('feed.xml', listelements=entries, folder='opds.feed_author', pagination=pagination)


@opds.route("/opds/author/<int:book_id>")
@requires_basic_auth_if_no_ano
def feed_author(book_id):
    return render_xml_dataset(db.Authors, book_id)


@opds.route("/opds/publisher")
@requires_basic_auth_if_no_ano
def feed_publisherindex():
    if not current_user.check_visibility(constants.SIDEBAR_PUBLISHER):
        abort(404)
    off = request.args.get("offset") or 0
    entries = calibre_db.session.query(db.Publishers)\
        .join(db.books_publishers_link)\
        .join(db.Books).filter(calibre_db.common_filters())\
        .group_by(text('books_publishers_link.publisher'))\
        .order_by(db.Publishers.sort)\
        .limit(config.config_books_per_page).offset(off)
    pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
                            len(calibre_db.session.query(db.Publishers).all()))
    return render_xml_template('feed.xml', listelements=entries, folder='opds.feed_publisher', pagination=pagination)


@opds.route("/opds/publisher/<int:book_id>")
@requires_basic_auth_if_no_ano
def feed_publisher(book_id):
    return render_xml_dataset(db.Publishers, book_id)


@opds.route("/opds/category")
@requires_basic_auth_if_no_ano
def feed_categoryindex():
    if not current_user.check_visibility(constants.SIDEBAR_CATEGORY):
        abort(404)
    return render_element_index(db.Tags.name, db.books_tags_link, 'opds.feed_letter_category')


@opds.route("/opds/category/letter/<book_id>")
@requires_basic_auth_if_no_ano
def feed_letter_category(book_id):
    if not current_user.check_visibility(constants.SIDEBAR_CATEGORY):
        abort(404)
    off = request.args.get("offset") or 0
    letter = true() if book_id == "00" else func.upper(db.Tags.name).startswith(book_id)
    entries = calibre_db.session.query(db.Tags)\
        .join(db.books_tags_link)\
        .join(db.Books)\
        .filter(calibre_db.common_filters()).filter(letter)\
        .group_by(text('books_tags_link.tag'))\
        .order_by(db.Tags.name)
    pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
                            entries.count())
    entries = entries.offset(off).limit(config.config_books_per_page).all()
    return render_xml_template('feed.xml', listelements=entries, folder='opds.feed_category', pagination=pagination)


@opds.route("/opds/category/<int:book_id>")
@requires_basic_auth_if_no_ano
def feed_category(book_id):
    return render_xml_dataset(db.Tags, book_id)


@opds.route("/opds/series")
@requires_basic_auth_if_no_ano
def feed_seriesindex():
    if not current_user.check_visibility(constants.SIDEBAR_SERIES):
        abort(404)
    return render_element_index(db.Series.sort, db.books_series_link, 'opds.feed_letter_series')


@opds.route("/opds/series/letter/<book_id>")
@requires_basic_auth_if_no_ano
def feed_letter_series(book_id):
    if not current_user.check_visibility(constants.SIDEBAR_SERIES):
        abort(404)
    off = request.args.get("offset") or 0
    letter = true() if book_id == "00" else func.upper(db.Series.sort).startswith(book_id)
    entries = calibre_db.session.query(db.Series)\
        .join(db.books_series_link)\
        .join(db.Books)\
        .filter(calibre_db.common_filters()).filter(letter)\
        .group_by(text('books_series_link.series'))\
        .order_by(db.Series.sort)
    pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
                            entries.count())
    entries = entries.offset(off).limit(config.config_books_per_page).all()
    return render_xml_template('feed.xml', listelements=entries, folder='opds.feed_series', pagination=pagination)


@opds.route("/opds/series/<int:book_id>")
@requires_basic_auth_if_no_ano
def feed_series(book_id):
    off = request.args.get("offset") or 0
    entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
                                                        db.Books,
                                                        db.Books.series.any(db.Series.id == book_id),
                                                        [db.Books.series_index],
                                                        True, config.config_read_column)
    return render_xml_template('feed.xml', entries=entries, pagination=pagination)


@opds.route("/opds/ratings")
@requires_basic_auth_if_no_ano
def feed_ratingindex():
    if not current_user.check_visibility(constants.SIDEBAR_RATING):
        abort(404)
    off = request.args.get("offset") or 0
    entries = calibre_db.session.query(db.Ratings, func.count('books_ratings_link.book').label('count'),
                                       (db.Ratings.rating / 2).label('name')) \
        .join(db.books_ratings_link)\
        .join(db.Books)\
        .filter(calibre_db.common_filters()) \
        .group_by(text('books_ratings_link.rating'))\
        .order_by(db.Ratings.rating).all()

    pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
                            len(entries))
    element = list()
    for entry in entries:
        element.append(FeedObject(entry[0].id, _("{} Stars").format(entry.name)))
    return render_xml_template('feed.xml', listelements=element, folder='opds.feed_ratings', pagination=pagination)


@opds.route("/opds/ratings/<book_id>")
@requires_basic_auth_if_no_ano
def feed_ratings(book_id):
    return render_xml_dataset(db.Ratings, book_id)


@opds.route("/opds/formats")
@requires_basic_auth_if_no_ano
def feed_formatindex():
    if not current_user.check_visibility(constants.SIDEBAR_FORMAT):
        abort(404)
    off = request.args.get("offset") or 0
    entries = calibre_db.session.query(db.Data).join(db.Books)\
        .filter(calibre_db.common_filters()) \
        .group_by(db.Data.format)\
        .order_by(db.Data.format).all()
    pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
                            len(entries))
    element = list()
    for entry in entries:
        element.append(FeedObject(entry.format, entry.format))
    return render_xml_template('feed.xml', listelements=element, folder='opds.feed_format', pagination=pagination)


@opds.route("/opds/formats/<book_id>")
@requires_basic_auth_if_no_ano
def feed_format(book_id):
    off = request.args.get("offset") or 0
    entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
                                                        db.Books,
                                                        db.Books.data.any(db.Data.format == book_id.upper()),
                                                        [db.Books.timestamp.desc()],
                                                        True, config.config_read_column)
    return render_xml_template('feed.xml', entries=entries, pagination=pagination)


@opds.route("/opds/language")
@opds.route("/opds/language/")
@requires_basic_auth_if_no_ano
def feed_languagesindex():
    if not current_user.check_visibility(constants.SIDEBAR_LANGUAGE):
        abort(404)
    off = request.args.get("offset") or 0
    if current_user.filter_language() == "all":
        languages = calibre_db.speaking_language()
    else:
        languages = calibre_db.session.query(db.Languages).filter(
            db.Languages.lang_code == current_user.filter_language()).all()
        languages[0].name = isoLanguages.get_language_name(get_locale(), languages[0].lang_code)
    pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
                            len(languages))
    return render_xml_template('feed.xml', listelements=languages, folder='opds.feed_languages', pagination=pagination)


@opds.route("/opds/language/<int:book_id>")
@requires_basic_auth_if_no_ano
def feed_languages(book_id):
    off = request.args.get("offset") or 0
    entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
                                                        db.Books,
                                                        db.Books.languages.any(db.Languages.id == book_id),
                                                        [db.Books.timestamp.desc()],
                                                        True, config.config_read_column)
    return render_xml_template('feed.xml', entries=entries, pagination=pagination)


@opds.route("/opds/shelfindex")
@requires_basic_auth_if_no_ano
def feed_shelfindex():
    if not (current_user.is_authenticated or g.allow_anonymous):
        abort(404)
    off = request.args.get("offset") or 0
    shelf = ub.session.query(ub.Shelf).filter(
        or_(ub.Shelf.is_public == 1, ub.Shelf.user_id == current_user.id)).order_by(ub.Shelf.name).all()
    number = len(shelf)
    pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
                            number)
    return render_xml_template('feed.xml', listelements=shelf, folder='opds.feed_shelf', pagination=pagination)


@opds.route("/opds/shelf/<int:book_id>")
@requires_basic_auth_if_no_ano
def feed_shelf(book_id):
    if not (current_user.is_authenticated or g.allow_anonymous):
        abort(404)
    off = request.args.get("offset") or 0
    if current_user.is_anonymous:
        shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.is_public == 1,
                                                  ub.Shelf.id == book_id).first()
    else:
        shelf = ub.session.query(ub.Shelf).filter(or_(and_(ub.Shelf.user_id == int(current_user.id),
                                                           ub.Shelf.id == book_id),
                                                      and_(ub.Shelf.is_public == 1,
                                                           ub.Shelf.id == book_id))).first()
    result = list()
    # user is allowed to access shelf
    if shelf:
        result, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1),
                                                           config.config_books_per_page,
                                                           db.Books,
                                                           ub.BookShelf.shelf == shelf.id,
                                                           [ub.BookShelf.order.asc()],
                                                           True, config.config_read_column,
                                                           ub.BookShelf, ub.BookShelf.book_id == db.Books.id)
        # delete shelf entries where book is not existent anymore, can happen if book is deleted outside calibre-web
        wrong_entries = calibre_db.session.query(ub.BookShelf) \
            .join(db.Books, ub.BookShelf.book_id == db.Books.id, isouter=True) \
            .filter(db.Books.id == None).all()
        for entry in wrong_entries:
            log.info('Not existing book {} in {} deleted'.format(entry.book_id, shelf))
            try:
                ub.session.query(ub.BookShelf).filter(ub.BookShelf.book_id == entry.book_id).delete()
                ub.session.commit()
            except (OperationalError, InvalidRequestError) as e:
                ub.session.rollback()
                log.error_or_exception("Settings Database error: {}".format(e))
    return render_xml_template('feed.xml', entries=result, pagination=pagination)


@opds.route("/opds/download/<book_id>/<book_format>/")
@requires_basic_auth_if_no_ano
def opds_download_link(book_id, book_format):
    if not current_user.role_download():
        return abort(403)
    if "Kobo" in request.headers.get('User-Agent'):
        client = "kobo"
    else:
        client = ""
    return get_download_link(book_id, book_format.lower(), client)


@opds.route("/ajax/book/<string:uuid>/<library>")
@opds.route("/ajax/book/<string:uuid>", defaults={'library': ""})
@requires_basic_auth_if_no_ano
def get_metadata_calibre_companion(uuid, library):
    entry = calibre_db.session.query(db.Books).filter(db.Books.uuid.like("%" + uuid + "%")).first()
    if entry is not None:
        js = render_template('json.txt', entry=entry)
        response = make_response(js)
        response.headers["Content-Type"] = "application/json; charset=utf-8"
        return response
    else:
        return ""


@opds.route("/opds/stats")
@requires_basic_auth_if_no_ano
def get_database_stats():
    stat = dict()
    stat['books'] = calibre_db.session.query(db.Books).count()
    stat['authors'] = calibre_db.session.query(db.Authors).count()
    stat['categories'] = calibre_db.session.query(db.Tags).count()
    stat['series'] = calibre_db.session.query(db.Series).count()
    return Response(json.dumps(stat), mimetype="application/json")


@opds.route("/opds/thumb_240_240/<book_id>")
@opds.route("/opds/cover_240_240/<book_id>")
@opds.route("/opds/cover_90_90/<book_id>")
@opds.route("/opds/cover/<book_id>")
@requires_basic_auth_if_no_ano
def feed_get_cover(book_id):
    return get_book_cover(book_id)


@opds.route("/opds/readbooks")
@requires_basic_auth_if_no_ano
def feed_read_books():
    if not (current_user.check_visibility(constants.SIDEBAR_READ_AND_UNREAD) and not current_user.is_anonymous):
        return abort(403)
    off = request.args.get("offset") or 0
    result, pagination = render_read_books(int(off) / (int(config.config_books_per_page)) + 1, True, True)
    return render_xml_template('feed.xml', entries=result, pagination=pagination)


@opds.route("/opds/unreadbooks")
@requires_basic_auth_if_no_ano
def feed_unread_books():
    if not (current_user.check_visibility(constants.SIDEBAR_READ_AND_UNREAD) and not current_user.is_anonymous):
        return abort(403)
    off = request.args.get("offset") or 0
    result, pagination = render_read_books(int(off) / (int(config.config_books_per_page)) + 1, False, True)
    return render_xml_template('feed.xml', entries=result, pagination=pagination)


class FeedObject:
    def __init__(self, rating_id, rating_name):
        self.rating_id = rating_id
        self.rating_name = rating_name

    @property
    def id(self):
        return self.rating_id

    @property
    def name(self):
        return self.rating_name


def feed_search(term):
    if term:
        entries, __, ___ = calibre_db.get_search_results(term, config=config)
        entries_count = len(entries) if len(entries) > 0 else 1
        pagination = Pagination(1, entries_count, entries_count)
        return render_xml_template('feed.xml', searchterm=term, entries=entries, pagination=pagination)
    else:
        return render_xml_template('feed.xml', searchterm="")


def render_xml_template(*args, **kwargs):
    # ToDo: return time in current timezone similar to %z
    currtime = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S+00:00")
    xml = render_template(current_time=currtime, instance=config.config_calibre_web_title,
                          constants=constants.sidebar_settings, *args, **kwargs)
    response = make_response(xml)
    response.headers["Content-Type"] = "application/atom+xml; charset=utf-8"
    return response


def render_xml_dataset(data_table, book_id):
    off = request.args.get("offset") or 0
    entries, __, pagination = calibre_db.fill_indexpage((int(off) / (int(config.config_books_per_page)) + 1), 0,
                                                        db.Books,
                                                        getattr(db.Books, data_table.__tablename__).any(data_table.id == book_id),
                                                        [db.Books.timestamp.desc()],
                                                        True, config.config_read_column)
    return render_xml_template('feed.xml', entries=entries, pagination=pagination)


def render_element_index(database_column, linked_table, folder):
    shift = 0
    off = int(request.args.get("offset") or 0)
    entries = calibre_db.session.query(func.upper(func.substr(database_column, 1, 1)).label('id'), None, None)
    # query = calibre_db.generate_linked_query(config.config_read_column, db.Books)
    if linked_table is not None:
        entries = entries.join(linked_table).join(db.Books)
    entries = entries.filter(calibre_db.common_filters()).group_by(func.upper(func.substr(database_column, 1, 1))).all()
    elements = []
    if off == 0 and entries:
        elements.append({'id': "00", 'name': _("All")})
        shift = 1
    for entry in entries[
                 off + shift - 1:
                 int(off + int(config.config_books_per_page) - shift)]:
        elements.append({'id': entry.id, 'name': entry.id})
    pagination = Pagination((int(off) / (int(config.config_books_per_page)) + 1), config.config_books_per_page,
                            len(entries) + 1)
    return render_xml_template('feed.xml',
                               letterelements=elements,
                               folder=folder,
                               pagination=pagination)
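The feed handlers above all turn the OPDS `offset` query parameter into a 1-based page number with the inline expression `(int(off) / int(config.config_books_per_page)) + 1`. A minimal sketch of that conversion (the helper name is hypothetical; note the source uses true division, which yields a float page number in Python 3, while floor division gives the intended whole page):

```python
def offset_to_page(offset, per_page):
    # Hypothetical helper; the feed handlers inline this expression.
    # Floor division keeps the page number an int, matching the intent of
    # the (int(off) / int(per_page)) + 1 pattern in the handlers above.
    return int(offset) // int(per_page) + 1

# First page of a 60-books-per-page feed starts at offset 0,
# the second page at offset 60.
print(offset_to_page(0, 60))   # -> 1
print(offset_to_page(60, 60))  # -> 2
print(offset_to_page(90, 60))  # -> 2 (mid-page offsets stay on that page)
```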
@@ -0,0 +1,75 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 OzzieIsaacs, cervinko, jkrehm, bodybybuddha, ok11,
#                          andy29485, idalin, Kyosfonica, wuqi, Kennyl, lemmsh,
#                          falgh1, grunjol, csitko, ytils, xybydy, trasba, vrabe,
#                          ruben-herold, marblepebble, JackED42, SiphonSquirrel,
#                          apetresc, nanu-c, mutschler
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from math import ceil


# simple pagination for the feed
class Pagination(object):
    def __init__(self, page, per_page, total_count):
        self.page = int(page)
        self.per_page = int(per_page)
        self.total_count = int(total_count)

    @property
    def next_offset(self):
        return int(self.page * self.per_page)

    @property
    def previous_offset(self):
        return int((self.page - 2) * self.per_page)

    @property
    def last_offset(self):
        last = int(self.total_count) - int(self.per_page)
        if last < 0:
            last = 0
        return int(last)

    @property
    def pages(self):
        return int(ceil(self.total_count / float(self.per_page)))

    @property
    def has_prev(self):
        return self.page > 1

    @property
    def has_next(self):
        return self.page < self.pages

    # right_edge: last right_edge count of all pages are shown as number, means if 10 pages are paginated -> 9,10 shown
    # left_edge: first left_edge count of all pages are shown as number -> 1,2 shown
    # left_current: left_current count below current page are shown as number, means if current page 5 -> 3,4 shown
    # right_current: right_current count above current page are shown as number, means if current page 5 -> 6,7 shown
    def iter_pages(self, left_edge=2, left_current=2,
                   right_current=4, right_edge=2):
        last = 0
        left_current = self.page - left_current - 1
        right_current = self.page + right_current + 1
        right_edge = self.pages - right_edge
        for num in range(1, (self.pages + 1)):
            if num <= left_edge or (left_current < num < right_current) or num > right_edge:
                if last + 1 != num:
                    yield None
                yield num
                last = num
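The `iter_pages` windowing above can be exercised standalone. This is a trimmed re-creation for illustration only, not a new API; `None` entries mark the gaps a template would render as an ellipsis:

```python
from math import ceil

class Pagination:
    """Minimal re-creation of the feed pagination helper, for illustration."""
    def __init__(self, page, per_page, total_count):
        self.page = int(page)
        self.per_page = int(per_page)
        self.total_count = int(total_count)

    @property
    def pages(self):
        return int(ceil(self.total_count / float(self.per_page)))

    def iter_pages(self, left_edge=2, left_current=2, right_current=4, right_edge=2):
        last = 0
        left_current = self.page - left_current - 1
        right_current = self.page + right_current + 1
        right_edge = self.pages - right_edge
        for num in range(1, self.pages + 1):
            if num <= left_edge or (left_current < num < right_current) or num > right_edge:
                if last + 1 != num:
                    yield None  # gap marker, rendered as "..." by a template
                yield num
                last = num

# 200 items, 10 per page, currently on page 10 of 20:
p = Pagination(page=10, per_page=10, total_count=200)
print(list(p.iter_pages()))
# -> [1, 2, None, 8, 9, 10, 11, 12, 13, 14, None, 19, 20]
```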
@@ -1,4 +1,3 @@
-#!/usr/bin/env python
 # -*- coding: utf-8 -*-

 # Flask License
@@ -26,13 +25,11 @@
 # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
 # IN ANY WAY OUT OF THE USE OF THIS SOFTWARE AND DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

-# http://flask.pocoo.org/snippets/62/
+# https://web.archive.org/web/20120517003641/http://flask.pocoo.org/snippets/62/

-try:
-    from urllib.parse import urlparse, urljoin
-except ImportError:
-    from urlparse import urlparse, urljoin
+from urllib.parse import urlparse, urljoin

-from flask import request, url_for, redirect
+from flask import request, url_for, redirect, current_app


 def is_safe_url(target):
@@ -41,16 +38,15 @@ def is_safe_url(target):
     return test_url.scheme in ('http', 'https') and ref_url.netloc == test_url.netloc


-def get_redirect_target():
-    for target in request.values.get('next'), request.referrer:
-        if not target:
-            continue
-        if is_safe_url(target):
-            return target
+def remove_prefix(text, prefix):
+    if text.startswith(prefix):
+        return text[len(prefix):]
+    return ""


-def redirect_back(endpoint, **values):
-    target = request.form['next']
-    if not target or not is_safe_url(target):
-        target = url_for(endpoint, **values)
-    return redirect(target)
+def get_redirect_location(next, endpoint, **values):
+    target = next or url_for(endpoint, **values)
+    adapter = current_app.url_map.bind(urlparse(request.host_url).netloc)
+    if not len(adapter.allowed_methods(remove_prefix(target, request.environ.get('HTTP_X_SCRIPT_NAME', "")))):
+        target = url_for(endpoint, **values)
+    return target
@@ -0,0 +1,135 @@
+# -*- coding: utf-8 -*-
+
+# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
+# Copyright (C) 2018-2019 OzzieIsaacs, cervinko, jkrehm, bodybybuddha, ok11,
+#                         andy29485, idalin, Kyosfonica, wuqi, Kennyl, lemmsh,
+#                         falgh1, grunjol, csitko, ytils, xybydy, trasba, vrabe,
+#                         ruben-herold, marblepebble, JackED42, SiphonSquirrel,
+#                         apetresc, nanu-c, mutschler
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+import json
+from datetime import datetime
+from functools import wraps
+
+from flask import Blueprint, request, make_response, abort, url_for, flash, redirect
+from flask_login import login_required, current_user, login_user
+from flask_babel import gettext as _
+from sqlalchemy.sql.expression import true
+
+from . import config, logger, ub
+from .render_template import render_title_template
+
+
+remotelogin = Blueprint('remotelogin', __name__)
+log = logger.create()
+
+
+def remote_login_required(f):
+    @wraps(f)
+    def inner(*args, **kwargs):
+        if config.config_remote_login:
+            return f(*args, **kwargs)
+        if request.headers.get('X-Requested-With') == 'XMLHttpRequest':
+            data = {'status': 'error', 'message': 'Forbidden'}
+            response = make_response(json.dumps(data, ensure_ascii=False))
+            response.headers["Content-Type"] = "application/json; charset=utf-8"
+            return response, 403
+        abort(403)
+
+    return inner
+
+
+@remotelogin.route('/remote/login')
+@remote_login_required
+def remote_login():
+    auth_token = ub.RemoteAuthToken()
+    ub.session.add(auth_token)
+    ub.session_commit()
+    verify_url = url_for('remotelogin.verify_token', token=auth_token.auth_token, _external=true)
+    log.debug("Remote Login request with token: %s", auth_token.auth_token)
+    return render_title_template('remote_login.html', title=_("Login"), token=auth_token.auth_token,
+                                 verify_url=verify_url, page="remotelogin")
+
+
+@remotelogin.route('/verify/<token>')
+@remote_login_required
+@login_required
+def verify_token(token):
+    auth_token = ub.session.query(ub.RemoteAuthToken).filter(ub.RemoteAuthToken.auth_token == token).first()
+
+    # Token not found
+    if auth_token is None:
+        flash(_("Token not found"), category="error")
+        log.error("Remote Login token not found")
+        return redirect(url_for('web.index'))
+
+    # Token expired
+    elif datetime.now() > auth_token.expiration:
+        ub.session.delete(auth_token)
+        ub.session_commit()
+
+        flash(_("Token has expired"), category="error")
+        log.error("Remote Login token expired")
+        return redirect(url_for('web.index'))
+
+    # Update token with user information
+    auth_token.user_id = current_user.id
+    auth_token.verified = True
+    ub.session_commit()
+
+    flash(_("Success! Please return to your device"), category="success")
+    log.debug("Remote Login token for userid %s verified", auth_token.user_id)
+    return redirect(url_for('web.index'))
+
+
+@remotelogin.route('/ajax/verify_token', methods=['POST'])
+@remote_login_required
+def token_verified():
+    token = request.form['token']
+    auth_token = ub.session.query(ub.RemoteAuthToken).filter(ub.RemoteAuthToken.auth_token == token).first()
+
+    data = {}
+
+    # Token not found
+    if auth_token is None:
+        data['status'] = 'error'
+        data['message'] = _("Token not found")
+
+    # Token expired
+    elif datetime.now() > auth_token.expiration:
+        ub.session.delete(auth_token)
+        ub.session_commit()
+
+        data['status'] = 'error'
+        data['message'] = _("Token has expired")
+
+    elif not auth_token.verified:
+        data['status'] = 'not_verified'
+
+    else:
+        user = ub.session.query(ub.User).filter(ub.User.id == auth_token.user_id).first()
+        login_user(user)
+
+        ub.session.delete(auth_token)
+        ub.session_commit("User {} logged in via remotelogin, token deleted".format(user.name))
+
+        data['status'] = 'success'
+        log.debug("Remote Login for userid %s succeeded", user.id)
+        flash(_("Success! You are now logged in as: %(nickname)s", nickname=user.name), category="success")
+
+    response = make_response(json.dumps(data, ensure_ascii=False))
+    response.headers["Content-Type"] = "application/json; charset=utf-8"
+
+    return response
@@ -0,0 +1,119 @@
+# -*- coding: utf-8 -*-
+
+# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
+# Copyright (C) 2018-2020 OzzieIsaacs
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+from flask import render_template, g, abort, request
+from flask_babel import gettext as _
+from werkzeug.local import LocalProxy
+from flask_login import current_user
+from sqlalchemy.sql.expression import or_
+
+from . import config, constants, logger, ub
+from .ub import User
+
+
+log = logger.create()
+
+
+def get_sidebar_config(kwargs=None):
+    kwargs = kwargs or []
+    simple = bool([e for e in ['kindle', 'tolino', "kobo", "bookeen"]
+                   if (e in request.headers.get('User-Agent', "").lower())])
+    if 'content' in kwargs:
+        content = kwargs['content']
+        content = isinstance(content, (User, LocalProxy)) and not content.role_anonymous()
+    else:
+        content = 'conf' in kwargs
+    sidebar = list()
+    sidebar.append({"glyph": "glyphicon-book", "text": _('Books'), "link": 'web.index', "id": "new",
+                    "visibility": constants.SIDEBAR_RECENT, 'public': True, "page": "root",
+                    "show_text": _('Show recent books'), "config_show": False})
+    sidebar.append({"glyph": "glyphicon-fire", "text": _('Hot Books'), "link": 'web.books_list', "id": "hot",
+                    "visibility": constants.SIDEBAR_HOT, 'public': True, "page": "hot",
+                    "show_text": _('Show Hot Books'), "config_show": True})
+    if current_user.role_admin():
+        sidebar.append({"glyph": "glyphicon-download", "text": _('Downloaded Books'), "link": 'web.download_list',
+                        "id": "download", "visibility": constants.SIDEBAR_DOWNLOAD,
+                        'public': (not current_user.is_anonymous),
+                        "page": "download", "show_text": _('Show Downloaded Books'),
+                        "config_show": content})
+    else:
+        sidebar.append({"glyph": "glyphicon-download", "text": _('Downloaded Books'), "link": 'web.books_list',
+                        "id": "download", "visibility": constants.SIDEBAR_DOWNLOAD,
+                        'public': (not current_user.is_anonymous),
+                        "page": "download", "show_text": _('Show Downloaded Books'),
+                        "config_show": content})
+    sidebar.append(
+        {"glyph": "glyphicon-star", "text": _('Top Rated Books'), "link": 'web.books_list', "id": "rated",
+         "visibility": constants.SIDEBAR_BEST_RATED, 'public': True, "page": "rated",
+         "show_text": _('Show Top Rated Books'), "config_show": True})
+    sidebar.append({"glyph": "glyphicon-eye-open", "text": _('Read Books'), "link": 'web.books_list', "id": "read",
+                    "visibility": constants.SIDEBAR_READ_AND_UNREAD, 'public': (not current_user.is_anonymous),
+                    "page": "read", "show_text": _('Show Read and Unread'), "config_show": content})
+    sidebar.append(
+        {"glyph": "glyphicon-eye-close", "text": _('Unread Books'), "link": 'web.books_list', "id": "unread",
+         "visibility": constants.SIDEBAR_READ_AND_UNREAD, 'public': (not current_user.is_anonymous),
+         "page": "unread", "show_text": _('Show unread'), "config_show": False})
+    sidebar.append({"glyph": "glyphicon-random", "text": _('Discover'), "link": 'web.books_list', "id": "rand",
+                    "visibility": constants.SIDEBAR_RANDOM, 'public': True, "page": "discover",
+                    "show_text": _('Show Random Books'), "config_show": True})
+    sidebar.append({"glyph": "glyphicon-inbox", "text": _('Categories'), "link": 'web.category_list', "id": "cat",
+                    "visibility": constants.SIDEBAR_CATEGORY, 'public': True, "page": "category",
+                    "show_text": _('Show Category Section'), "config_show": True})
+    sidebar.append({"glyph": "glyphicon-bookmark", "text": _('Series'), "link": 'web.series_list', "id": "serie",
+                    "visibility": constants.SIDEBAR_SERIES, 'public': True, "page": "series",
+                    "show_text": _('Show Series Section'), "config_show": True})
+    sidebar.append({"glyph": "glyphicon-user", "text": _('Authors'), "link": 'web.author_list', "id": "author",
+                    "visibility": constants.SIDEBAR_AUTHOR, 'public': True, "page": "author",
+                    "show_text": _('Show Author Section'), "config_show": True})
+    sidebar.append(
+        {"glyph": "glyphicon-text-size", "text": _('Publishers'), "link": 'web.publisher_list', "id": "publisher",
+         "visibility": constants.SIDEBAR_PUBLISHER, 'public': True, "page": "publisher",
+         "show_text": _('Show Publisher Section'), "config_show": True})
+    sidebar.append({"glyph": "glyphicon-flag", "text": _('Languages'), "link": 'web.language_overview', "id": "lang",
+                    "visibility": constants.SIDEBAR_LANGUAGE, 'public': (current_user.filter_language() == 'all'),
+                    "page": "language",
+                    "show_text": _('Show Language Section'), "config_show": True})
+    sidebar.append({"glyph": "glyphicon-star-empty", "text": _('Ratings'), "link": 'web.ratings_list', "id": "rate",
+                    "visibility": constants.SIDEBAR_RATING, 'public': True,
+                    "page": "rating", "show_text": _('Show Ratings Section'), "config_show": True})
+    sidebar.append({"glyph": "glyphicon-file", "text": _('File formats'), "link": 'web.formats_list', "id": "format",
+                    "visibility": constants.SIDEBAR_FORMAT, 'public': True,
+                    "page": "format", "show_text": _('Show File Formats Section'), "config_show": True})
+    sidebar.append(
+        {"glyph": "glyphicon-trash", "text": _('Archived Books'), "link": 'web.books_list', "id": "archived",
+         "visibility": constants.SIDEBAR_ARCHIVED, 'public': (not current_user.is_anonymous), "page": "archived",
+         "show_text": _('Show Archived Books'), "config_show": content})
+    if not simple:
+        sidebar.append(
+            {"glyph": "glyphicon-th-list", "text": _('Books List'), "link": 'web.books_table', "id": "list",
+             "visibility": constants.SIDEBAR_LIST, 'public': (not current_user.is_anonymous), "page": "list",
+             "show_text": _('Show Books List'), "config_show": content})
+    g.shelves_access = ub.session.query(ub.Shelf).filter(
+        or_(ub.Shelf.is_public == 1, ub.Shelf.user_id == current_user.id)).order_by(ub.Shelf.name).all()
+
+    return sidebar, simple
+
+
+# Returns the template for rendering and includes the instance name
+def render_title_template(*args, **kwargs):
+    sidebar, simple = get_sidebar_config(kwargs)
+    try:
+        return render_template(instance=config.config_calibre_web_title, sidebar=sidebar, simple=simple,
+                               accept=constants.EXTENSIONS_UPLOAD,
+                               *args, **kwargs)
+    except PermissionError:
+        log.error("No permission to access {} file.".format(args[0]))
+        abort(403)
@@ -1,21 +1,41 @@
-#!/usr/bin/env python
 # -*- coding: utf-8 -*-

-# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
-# Copyright (C) 2018 cervinko, janeczku, OzzieIsaacs
+# Flask License
 #
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
+# Copyright © 2010 by the Pallets team, cervinko, janeczku, OzzieIsaacs
 #
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
+# Some rights reserved.
 #
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see <http://www.gnu.org/licenses/>.
+# Redistribution and use in source and binary forms of the software as
+# well as documentation, with or without modification, are permitted
+# provided that the following conditions are met:
+#
+# * Redistributions of source code must retain the above copyright notice,
+#   this list of conditions and the following disclaimer.
+#
+# * Redistributions in binary form must reproduce the above copyright
+#   notice, this list of conditions and the following disclaimer in the
+#   documentation and/or other materials provided with the distribution.
+#
+# * Neither the name of the copyright holder nor the names of its
+#   contributors may be used to endorse or promote products derived from
+#   this software without specific prior written permission.
+#
+# THIS SOFTWARE AND DOCUMENTATION IS PROVIDED BY THE COPYRIGHT HOLDERS AND
+# CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING,
+# BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
+# FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+# NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+# USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+# ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+# THIS SOFTWARE AND DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF
+# SUCH DAMAGE.
+#
+# Inspired by http://flask.pocoo.org/snippets/35/


 class ReverseProxied(object):
     """Wrap the application in this middleware and configure the
@@ -37,10 +57,13 @@ class ReverseProxied(object):

     def __init__(self, application):
         self.app = application
+        self.proxied = False

     def __call__(self, environ, start_response):
+        self.proxied = False
         script_name = environ.get('HTTP_X_SCRIPT_NAME', '')
         if script_name:
+            self.proxied = True
             environ['SCRIPT_NAME'] = script_name
             path_info = environ.get('PATH_INFO', '')
             if path_info and path_info.startswith(script_name):
@@ -52,4 +75,9 @@ class ReverseProxied(object):
         servr = environ.get('HTTP_X_FORWARDED_HOST', '')
         if servr:
             environ['HTTP_HOST'] = servr
+            self.proxied = True
         return self.app(environ, start_response)
+
+    @property
+    def is_proxied(self):
+        return self.proxied

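The diff above makes the reverse-proxy middleware remember whether it actually saw proxy headers. A minimal self-contained sketch of the core behavior (a trimmed-down `ReverseProxied` run against a stub WSGI app, not Calibre-Web's full class, which also handles other forwarded headers):

```python
# Trimmed-down reverse-proxy middleware: rewrites SCRIPT_NAME/PATH_INFO from
# X-Script-Name, honors X-Forwarded-Host, and tracks whether any proxy
# header was seen.
class ReverseProxied:
    def __init__(self, application):
        self.app = application
        self.proxied = False

    def __call__(self, environ, start_response):
        self.proxied = False
        script_name = environ.get('HTTP_X_SCRIPT_NAME', '')
        if script_name:
            self.proxied = True
            environ['SCRIPT_NAME'] = script_name
            path_info = environ.get('PATH_INFO', '')
            if path_info and path_info.startswith(script_name):
                # Strip the mount prefix so routing sees the app-relative path
                environ['PATH_INFO'] = path_info[len(script_name):]
        server = environ.get('HTTP_X_FORWARDED_HOST', '')
        if server:
            environ['HTTP_HOST'] = server
            self.proxied = True
        return self.app(environ, start_response)

    @property
    def is_proxied(self):
        return self.proxied


def app(environ, start_response):
    # Stub app that echoes the (possibly rewritten) routing variables
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [environ['SCRIPT_NAME'].encode(), b'|', environ['PATH_INFO'].encode()]


wrapped = ReverseProxied(app)
env = {'HTTP_X_SCRIPT_NAME': '/calibre-web', 'PATH_INFO': '/calibre-web/login'}
body = b''.join(wrapped(env, lambda status, headers: None))
# body == b'/calibre-web|/login', wrapped.is_proxied is True
```

The `is_proxied` property lets the application detect at request time whether it is being served behind a prefix-rewriting proxy.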
@@ -0,0 +1,110 @@
+# -*- coding: utf-8 -*-
+
+# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
+# Copyright (C) 2020 mmonkey
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+import datetime
+
+from . import config, constants
+from .services.background_scheduler import BackgroundScheduler, CronTrigger, use_APScheduler
+from .tasks.database import TaskReconnectDatabase
+from .tasks.tempFolder import TaskDeleteTempFolder
+from .tasks.thumbnail import TaskGenerateCoverThumbnails, TaskGenerateSeriesThumbnails, TaskClearCoverThumbnailCache
+from .services.worker import WorkerThread
+from .tasks.metadata_backup import TaskBackupMetadata
+
+
+def get_scheduled_tasks(reconnect=True):
+    tasks = list()
+    # Reconnect Calibre database (metadata.db) based on config.schedule_reconnect
+    if reconnect:
+        tasks.append([lambda: TaskReconnectDatabase(), 'reconnect', False])
+
+    # Delete temp folder
+    tasks.append([lambda: TaskDeleteTempFolder(), 'delete temp', True])
+
+    # Generate metadata.opf file for each changed book
+    if config.schedule_metadata_backup:
+        tasks.append([lambda: TaskBackupMetadata("en"), 'backup metadata', False])
+
+    # Generate all missing book cover thumbnails
+    if config.schedule_generate_book_covers:
+        tasks.append([lambda: TaskClearCoverThumbnailCache(0), 'delete superfluous book covers', True])
+        tasks.append([lambda: TaskGenerateCoverThumbnails(), 'generate book covers', False])
+
+    # Generate all missing series thumbnails
+    if config.schedule_generate_series_covers:
+        tasks.append([lambda: TaskGenerateSeriesThumbnails(), 'generate book covers', False])
+
+    return tasks
+
+
+def end_scheduled_tasks():
+    worker = WorkerThread.get_instance()
+    for __, __, __, task, __ in worker.tasks:
+        if task.scheduled and task.is_cancellable:
+            worker.end_task(task.id)
+
+
+def register_scheduled_tasks(reconnect=True):
+    scheduler = BackgroundScheduler()
+
+    if scheduler:
+        # Remove all existing jobs
+        scheduler.remove_all_jobs()
+
+        start = config.schedule_start_time
+        duration = config.schedule_duration
+
+        # Register scheduled tasks
+        timezone_info = datetime.datetime.now(datetime.timezone.utc).astimezone().tzinfo
+        scheduler.schedule_tasks(tasks=get_scheduled_tasks(reconnect), trigger=CronTrigger(hour=start,
+                                                                                          timezone=timezone_info))
+        end_time = calculate_end_time(start, duration)
+        scheduler.schedule(func=end_scheduled_tasks, trigger=CronTrigger(hour=end_time.hour, minute=end_time.minute,
+                                                                         timezone=timezone_info),
+                           name="end scheduled task")
+
+        # Kick-off tasks, if they should currently be running
+        if should_task_be_running(start, duration):
+            scheduler.schedule_tasks_immediately(tasks=get_scheduled_tasks(reconnect))
+
+
+def register_startup_tasks():
+    scheduler = BackgroundScheduler()
+
+    if scheduler:
+        start = config.schedule_start_time
+        duration = config.schedule_duration
+
+        # Run scheduled tasks immediately for development and testing
+        # Ignore tasks that should currently be running, as these will be added when registering scheduled tasks
+        if constants.APP_MODE in ['development', 'test'] and not should_task_be_running(start, duration):
+            scheduler.schedule_tasks_immediately(tasks=get_scheduled_tasks(False))
+        else:
+            scheduler.schedule_tasks_immediately(tasks=[[lambda: TaskDeleteTempFolder(), 'delete temp', True]])
+
+
+def should_task_be_running(start, duration):
+    now = datetime.datetime.now()
+    start_time = datetime.datetime.now().replace(hour=start, minute=0, second=0, microsecond=0)
+    end_time = start_time + datetime.timedelta(hours=duration // 60, minutes=duration % 60)
+    return start_time < now < end_time
+
+
+def calculate_end_time(start, duration):
+    start_time = datetime.datetime.now().replace(hour=start, minute=0)
+    return start_time + datetime.timedelta(hours=duration // 60, minutes=duration % 60)

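In the scheduler file above, `start` is an hour of day and `duration` is a number of minutes, split into hours and minutes with `// 60` and `% 60`. A small self-contained sketch of that window arithmetic (parameterized on `now` for testability, unlike the original which reads the clock directly):

```python
import datetime


def task_window(start, duration, now=None):
    # `start` is an hour of day, `duration` is in minutes;
    # the window is [start:00, start:00 + duration)
    now = now or datetime.datetime.now()
    start_time = now.replace(hour=start, minute=0, second=0, microsecond=0)
    end_time = start_time + datetime.timedelta(hours=duration // 60, minutes=duration % 60)
    return start_time, end_time


def should_task_be_running(start, duration, now):
    start_time, end_time = task_window(start, duration, now)
    return start_time < now < end_time


# A 90-minute window starting at 04:00 covers 04:00-05:30
noon = datetime.datetime(2024, 1, 1, 12, 0)
start_time, end_time = task_window(4, 90, noon)
# end_time == datetime.datetime(2024, 1, 1, 5, 30)
inside = should_task_be_running(4, 90, datetime.datetime(2024, 1, 1, 4, 30))
outside = should_task_be_running(4, 90, noon)
# inside is True, outside is False
```

This mirrors why `register_scheduled_tasks` schedules `end_scheduled_tasks` at `end_time.hour`/`end_time.minute`: the stop job fires exactly `duration` minutes after the start-of-window cron trigger.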
@ -0,0 +1,403 @@
|
||||||
|
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
|
||||||
|
# Copyright (C) 2022 OzzieIsaacs
|
||||||
|
#
|
||||||
|
# This program is free software: you can redistribute it and/or modify
|
||||||
|
# it under the terms of the GNU General Public License as published by
|
||||||
|
# the Free Software Foundation, either version 3 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
#
|
||||||
|
# This program is distributed in the hope that it will be useful,
|
||||||
|
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||||
|
# GNU General Public License for more details.
|
||||||
|
#
|
||||||
|
# You should have received a copy of the GNU General Public License
|
||||||
|
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
|
import json
|
||||||
|
from datetime import datetime
|
||||||
|
|
||||||
|
from flask import Blueprint, request, redirect, url_for, flash
|
||||||
|
from flask import session as flask_session
|
||||||
|
from flask_login import current_user
|
||||||
|
from flask_babel import format_date
|
||||||
|
from flask_babel import gettext as _
|
||||||
|
from sqlalchemy.sql.expression import func, not_, and_, or_, text, true
|
||||||
|
from sqlalchemy.sql.functions import coalesce
|
||||||
|
|
||||||
|
from . import logger, db, calibre_db, config, ub
|
||||||
|
from .usermanagement import login_required_if_no_ano
|
||||||
|
from .render_template import render_title_template
|
||||||
|
from .pagination import Pagination
|
||||||
|
|
||||||
|
search = Blueprint('search', __name__)
|
||||||
|
|
||||||
|
log = logger.create()
|
||||||
|
|
||||||
|
|
||||||
|
@search.route("/search", methods=["GET"])
|
||||||
|
@login_required_if_no_ano
|
||||||
|
def simple_search():
|
||||||
|
term = request.args.get("query")
|
||||||
|
if term:
|
||||||
|
return redirect(url_for('web.books_list', data="search", sort_param='stored', query=term.strip()))
|
||||||
|
else:
|
||||||
|
return render_title_template('search.html',
|
||||||
|
searchterm="",
|
||||||
|
result_count=0,
|
||||||
|
title=_("Search"),
|
||||||
|
page="search")
|
||||||
|
|
||||||
|
|
||||||
|
@search.route("/advsearch", methods=['POST'])
|
||||||
|
@login_required_if_no_ano
|
||||||
|
def advanced_search():
|
||||||
|
values = dict(request.form)
|
||||||
|
params = ['include_tag', 'exclude_tag', 'include_serie', 'exclude_serie', 'include_shelf', 'exclude_shelf',
|
||||||
|
'include_language', 'exclude_language', 'include_extension', 'exclude_extension']
|
||||||
|
for param in params:
|
||||||
|
values[param] = list(request.form.getlist(param))
|
||||||
|
flask_session['query'] = json.dumps(values)
|
||||||
|
return redirect(url_for('web.books_list', data="advsearch", sort_param='stored', query=""))
|
||||||
|
|
||||||
|
|
||||||
|
@search.route("/advsearch", methods=['GET'])
|
||||||
|
@login_required_if_no_ano
|
||||||
|
def advanced_search_form():
|
||||||
|
# Build custom columns names
|
||||||
|
cc = calibre_db.get_cc_columns(config, filter_config_custom_read=True)
|
||||||
|
return render_prepare_search_form(cc)
|
||||||
|
|
||||||
|
|
||||||
|
def adv_search_custom_columns(cc, term, q):
|
||||||
|
for c in cc:
|
||||||
|
if c.datatype == "datetime":
|
||||||
|
custom_start = term.get('custom_column_' + str(c.id) + '_start')
|
||||||
|
custom_end = term.get('custom_column_' + str(c.id) + '_end')
|
||||||
|
if custom_start:
|
||||||
|
q = q.filter(getattr(db.Books, 'custom_column_' + str(c.id)).any(
|
||||||
|
func.datetime(db.cc_classes[c.id].value) >= func.datetime(custom_start)))
|
||||||
|
if custom_end:
|
||||||
|
q = q.filter(getattr(db.Books, 'custom_column_' + str(c.id)).any(
|
||||||
|
func.datetime(db.cc_classes[c.id].value) <= func.datetime(custom_end)))
|
||||||
|
else:
|
||||||
|
custom_query = term.get('custom_column_' + str(c.id))
|
||||||
|
if custom_query != '' and custom_query is not None:
|
||||||
|
if c.datatype == 'bool':
|
||||||
|
q = q.filter(getattr(db.Books, 'custom_column_' + str(c.id)).any(
|
||||||
|
db.cc_classes[c.id].value == (custom_query == "True")))
|
||||||
|
elif c.datatype == 'int' or c.datatype == 'float':
|
||||||
|
q = q.filter(getattr(db.Books, 'custom_column_' + str(c.id)).any(
|
||||||
|
db.cc_classes[c.id].value == custom_query))
|
||||||
|
elif c.datatype == 'rating':
|
||||||
|
q = q.filter(getattr(db.Books, 'custom_column_' + str(c.id)).any(
|
||||||
|
db.cc_classes[c.id].value == int(float(custom_query) * 2)))
|
||||||
|
else:
|
||||||
|
q = q.filter(getattr(db.Books, 'custom_column_' + str(c.id)).any(
|
||||||
|
func.lower(db.cc_classes[c.id].value).ilike("%" + custom_query + "%")))
|
||||||
|
return q
|
||||||
|
|
||||||
|
|
||||||
|
def adv_search_language(q, include_languages_inputs, exclude_languages_inputs):
|
||||||
|
if current_user.filter_language() != "all":
|
||||||
|
q = q.filter(db.Books.languages.any(db.Languages.lang_code == current_user.filter_language()))
|
||||||
|
else:
|
||||||
|
for language in include_languages_inputs:
|
||||||
|
q = q.filter(db.Books.languages.any(db.Languages.id == language))
|
||||||
|
for language in exclude_languages_inputs:
|
||||||
|
q = q.filter(not_(db.Books.series.any(db.Languages.id == language)))
|
||||||
|
return q
|
||||||
|
|
||||||
|
|
||||||
|
def adv_search_ratings(q, rating_high, rating_low):
|
||||||
|
if rating_high:
|
||||||
|
rating_high = int(rating_high) * 2
|
||||||
|
q = q.filter(db.Books.ratings.any(db.Ratings.rating <= rating_high))
|
||||||
|
if rating_low:
|
||||||
|
rating_low = int(rating_low) * 2
|
||||||
|
q = q.filter(db.Books.ratings.any(db.Ratings.rating >= rating_low))
|
||||||
|
return q
|
||||||
|
|
||||||
|
|
||||||
|
def adv_search_read_status(read_status):
|
||||||
|
if not config.config_read_column:
|
||||||
|
if read_status == "True":
|
||||||
|
db_filter = and_(ub.ReadBook.user_id == int(current_user.id),
|
||||||
|
ub.ReadBook.read_status == ub.ReadBook.STATUS_FINISHED)
|
||||||
|
else:
|
||||||
|
db_filter = coalesce(ub.ReadBook.read_status, 0) != ub.ReadBook.STATUS_FINISHED
|
||||||
|
else:
|
||||||
|
try:
|
||||||
|
if read_status == "True":
|
||||||
|
db_filter = db.cc_classes[config.config_read_column].value == True
|
||||||
|
else:
|
||||||
|
db_filter = coalesce(db.cc_classes[config.config_read_column].value, False) != True
|
||||||
|
except (KeyError, AttributeError, IndexError):
|
||||||
|
log.error("Custom Column No.{} does not exist in calibre database".format(config.config_read_column))
|
||||||
|
flash(_("Custom Column No.%(column)d does not exist in calibre database",
|
||||||
|
column=config.config_read_column),
|
||||||
|
category="error")
|
||||||
|
return true()
|
||||||
|
return db_filter
|
||||||
|
|
||||||
|
|
||||||
|
def adv_search_extension(q, include_extension_inputs, exclude_extension_inputs):
|
||||||
|
for extension in include_extension_inputs:
|
||||||
|
q = q.filter(db.Books.data.any(db.Data.format == extension))
|
||||||
|
for extension in exclude_extension_inputs:
|
||||||
|
q = q.filter(not_(db.Books.data.any(db.Data.format == extension)))
|
||||||
|
return q
|
||||||
|
|
||||||
|
|
||||||
|
def adv_search_tag(q, include_tag_inputs, exclude_tag_inputs):
    for tag in include_tag_inputs:
        q = q.filter(db.Books.tags.any(db.Tags.id == tag))
    for tag in exclude_tag_inputs:
        q = q.filter(not_(db.Books.tags.any(db.Tags.id == tag)))
    return q


def adv_search_serie(q, include_series_inputs, exclude_series_inputs):
    for serie in include_series_inputs:
        q = q.filter(db.Books.series.any(db.Series.id == serie))
    for serie in exclude_series_inputs:
        q = q.filter(not_(db.Books.series.any(db.Series.id == serie)))
    return q


def adv_search_shelf(q, include_shelf_inputs, exclude_shelf_inputs):
    q = q.outerjoin(ub.BookShelf, db.Books.id == ub.BookShelf.book_id)\
        .filter(or_(ub.BookShelf.shelf == None, ub.BookShelf.shelf.notin_(exclude_shelf_inputs)))
    if len(include_shelf_inputs) > 0:
        q = q.filter(ub.BookShelf.shelf.in_(include_shelf_inputs))
    return q


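The include/exclude helpers above all follow one SQLAlchemy pattern: `relationship.any(...)` for "must contain" and `not_(relationship.any(...))` for "must not contain", with successive `.filter()` calls ANDed together. A self-contained sketch against a toy schema (the models and data are made up, not the calibre database):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, Table, create_engine, not_
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

book_tag = Table("book_tag", Base.metadata,
                 Column("book", ForeignKey("books.id")),
                 Column("tag", ForeignKey("tags.id")))


class Tag(Base):
    __tablename__ = "tags"
    id = Column(Integer, primary_key=True)
    name = Column(String)


class Book(Base):
    __tablename__ = "books"
    id = Column(Integer, primary_key=True)
    title = Column(String)
    tags = relationship(Tag, secondary=book_tag)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    scifi, horror = Tag(id=1, name="scifi"), Tag(id=2, name="horror")
    session.add_all([Book(id=1, title="A", tags=[scifi]),
                     Book(id=2, title="B", tags=[scifi, horror]),
                     Book(id=3, title="C", tags=[horror])])
    session.commit()
    q = session.query(Book)
    q = q.filter(Book.tags.any(Tag.id == 1))        # include tag 1
    q = q.filter(not_(Book.tags.any(Tag.id == 2)))  # exclude tag 2
    titles = [b.title for b in q]                   # -> ["A"]
```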
def extend_search_term(searchterm,
                       author_name,
                       book_title,
                       publisher,
                       pub_start,
                       pub_end,
                       tags,
                       rating_high,
                       rating_low,
                       read_status,
                       ):
    searchterm.extend((author_name.replace('|', ','), book_title, publisher))
    if pub_start:
        try:
            searchterm.extend([_("Published after ") +
                               format_date(datetime.strptime(pub_start, "%Y-%m-%d"),
                                           format='medium')])
        except ValueError:
            pub_start = ""
    if pub_end:
        try:
            searchterm.extend([_("Published before ") +
                               format_date(datetime.strptime(pub_end, "%Y-%m-%d"),
                                           format='medium')])
        except ValueError:
            pub_end = ""
    elements = {'tag': db.Tags, 'serie': db.Series, 'shelf': ub.Shelf}
    for key, db_element in elements.items():
        tag_names = calibre_db.session.query(db_element).filter(db_element.id.in_(tags['include_' + key])).all()
        searchterm.extend(tag.name for tag in tag_names)
        tag_names = calibre_db.session.query(db_element).filter(db_element.id.in_(tags['exclude_' + key])).all()
        searchterm.extend(tag.name for tag in tag_names)
    language_names = calibre_db.session.query(db.Languages). \
        filter(db.Languages.id.in_(tags['include_language'])).all()
    if language_names:
        language_names = calibre_db.speaking_language(language_names)
    searchterm.extend(language.name for language in language_names)
    language_names = calibre_db.session.query(db.Languages). \
        filter(db.Languages.id.in_(tags['exclude_language'])).all()
    if language_names:
        language_names = calibre_db.speaking_language(language_names)
    searchterm.extend(language.name for language in language_names)
    if rating_high:
        searchterm.extend([_("Rating <= %(rating)s", rating=rating_high)])
    if rating_low:
        searchterm.extend([_("Rating >= %(rating)s", rating=rating_low)])
    if read_status != "Any":
        searchterm.extend([_("Read Status = '%(status)s'", status=read_status)])
    searchterm.extend(ext for ext in tags['include_extension'])
    searchterm.extend(ext for ext in tags['exclude_extension'])
    # handle custom columns
    searchterm = " + ".join(filter(None, searchterm))
    return searchterm, pub_start, pub_end


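The final `" + ".join(filter(None, searchterm))` collapses the collected display terms into one summary string; `filter(None, ...)` drops empty strings and `None` entries so unset fields do not leave dangling `" + "` separators:

```python
# terms collected for display; unset fields come through as "" or None
searchterm = ["tolkien", "", "ring", None, "Rating <= 4"]
summary = " + ".join(filter(None, searchterm))
# -> 'tolkien + ring + Rating <= 4'
```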
def render_adv_search_results(term, offset=None, order=None, limit=None):
    sort = order[0] if order else [db.Books.sort]
    pagination = None

    cc = calibre_db.get_cc_columns(config, filter_config_custom_read=True)
    calibre_db.session.connection().connection.connection.create_function("lower", 1, db.lcase)
    query = calibre_db.generate_linked_query(config.config_read_column, db.Books)
    q = query.outerjoin(db.books_series_link, db.Books.id == db.books_series_link.c.book)\
        .outerjoin(db.Series)\
        .filter(calibre_db.common_filters(True))

    # parse multi selects to a complete dict
    tags = dict()
    elements = ['tag', 'serie', 'shelf', 'language', 'extension']
    for element in elements:
        tags['include_' + element] = term.get('include_' + element)
        tags['exclude_' + element] = term.get('exclude_' + element)

    author_name = term.get("author_name")
    book_title = term.get("book_title")
    publisher = term.get("publisher")
    pub_start = term.get("publishstart")
    pub_end = term.get("publishend")
    rating_low = term.get("ratinghigh")
    rating_high = term.get("ratinglow")
    description = term.get("comment")
    read_status = term.get("read_status")
    if author_name:
        author_name = author_name.strip().lower().replace(',', '|')
    if book_title:
        book_title = book_title.strip().lower()
    if publisher:
        publisher = publisher.strip().lower()

    search_term = []
    cc_present = False
    for c in cc:
        if c.datatype == "datetime":
            column_start = term.get('custom_column_' + str(c.id) + '_start')
            column_end = term.get('custom_column_' + str(c.id) + '_end')
            if column_start:
                search_term.extend(["{} >= {}".format(c.name,
                                                      format_date(datetime.strptime(column_start, "%Y-%m-%d").date(),
                                                                  format='medium')
                                                      )])
                cc_present = True
            if column_end:
                search_term.extend(["{} <= {}".format(c.name,
                                                      format_date(datetime.strptime(column_end, "%Y-%m-%d").date(),
                                                                  format='medium')
                                                      )])
                cc_present = True
        elif term.get('custom_column_' + str(c.id)):
            search_term.extend([("{}: {}".format(c.name, term.get('custom_column_' + str(c.id))))])
            cc_present = True

    if any(tags.values()) or author_name or book_title or publisher or pub_start or pub_end or rating_low \
            or rating_high or description or cc_present or read_status != "Any":
        search_term, pub_start, pub_end = extend_search_term(search_term,
                                                             author_name,
                                                             book_title,
                                                             publisher,
                                                             pub_start,
                                                             pub_end,
                                                             tags,
                                                             rating_high,
                                                             rating_low,
                                                             read_status)
        if author_name:
            q = q.filter(db.Books.authors.any(func.lower(db.Authors.name).ilike("%" + author_name + "%")))
        if book_title:
            q = q.filter(func.lower(db.Books.title).ilike("%" + book_title + "%"))
        if pub_start:
            q = q.filter(func.datetime(db.Books.pubdate) > func.datetime(pub_start))
        if pub_end:
            q = q.filter(func.datetime(db.Books.pubdate) < func.datetime(pub_end))
        if read_status != "Any":
            q = q.filter(adv_search_read_status(read_status))
        if publisher:
            q = q.filter(db.Books.publishers.any(func.lower(db.Publishers.name).ilike("%" + publisher + "%")))
        q = adv_search_tag(q, tags['include_tag'], tags['exclude_tag'])
        q = adv_search_serie(q, tags['include_serie'], tags['exclude_serie'])
        q = adv_search_shelf(q, tags['include_shelf'], tags['exclude_shelf'])
        q = adv_search_extension(q, tags['include_extension'], tags['exclude_extension'])
        q = adv_search_language(q, tags['include_language'], tags['exclude_language'])
        q = adv_search_ratings(q, rating_high, rating_low)

        if description:
            q = q.filter(db.Books.comments.any(func.lower(db.Comments.text).ilike("%" + description + "%")))

        # search custom columns
        try:
            q = adv_search_custom_columns(cc, term, q)
        except AttributeError as ex:
            log.debug_or_exception(ex)
            flash(_("Error on search for custom columns, please restart Calibre-Web"), category="error")

    q = q.order_by(*sort).all()
    flask_session['query'] = json.dumps(term)
    ub.store_combo_ids(q)
    result_count = len(q)
    if offset is not None and limit is not None:
        offset = int(offset)
        limit_all = offset + int(limit)
        pagination = Pagination((offset / (int(limit)) + 1), limit, result_count)
    else:
        offset = 0
        limit_all = result_count
    entries = calibre_db.order_authors(q[offset:limit_all], list_return=True, combined=True)
    return render_title_template('search.html',
                                 adv_searchterm=search_term,
                                 pagination=pagination,
                                 entries=entries,
                                 result_count=result_count,
                                 title=_("Advanced Search"), page="advsearch",
                                 order=order[1])


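The pagination block above derives the page number from offset and limit with true division (so `Pagination` receives a float page index) and then slices the fully materialized result list. The same arithmetic in isolation:

```python
# sample values: third page of 20 results out of 73 hits
offset, limit, result_count = 40, 20, 73
limit_all = offset + limit
page = offset / limit + 1          # 3.0 - true division, as in the original code
entries = list(range(result_count))[offset:limit_all]  # rows 40..59
```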
def render_prepare_search_form(cc):
    # prepare data for search-form
    tags = calibre_db.session.query(db.Tags)\
        .join(db.books_tags_link)\
        .join(db.Books)\
        .filter(calibre_db.common_filters()) \
        .group_by(text('books_tags_link.tag'))\
        .order_by(db.Tags.name).all()
    series = calibre_db.session.query(db.Series)\
        .join(db.books_series_link)\
        .join(db.Books)\
        .filter(calibre_db.common_filters()) \
        .group_by(text('books_series_link.series'))\
        .order_by(db.Series.name)\
        .filter(calibre_db.common_filters()).all()
    shelves = ub.session.query(ub.Shelf)\
        .filter(or_(ub.Shelf.is_public == 1, ub.Shelf.user_id == int(current_user.id)))\
        .order_by(ub.Shelf.name).all()
    extensions = calibre_db.session.query(db.Data)\
        .join(db.Books)\
        .filter(calibre_db.common_filters()) \
        .group_by(db.Data.format)\
        .order_by(db.Data.format).all()
    if current_user.filter_language() == "all":
        languages = calibre_db.speaking_language()
    else:
        languages = None
    return render_title_template('search_form.html', tags=tags, languages=languages, extensions=extensions,
                                 series=series, shelves=shelves, title=_("Advanced Search"), cc=cc, page="advsearch")


def render_search_results(term, offset=None, order=None, limit=None):
    if term:
        join = db.books_series_link, db.Books.id == db.books_series_link.c.book, db.Series
        entries, result_count, pagination = calibre_db.get_search_results(term,
                                                                          config,
                                                                          offset,
                                                                          order,
                                                                          limit,
                                                                          *join)
    else:
        entries = list()
        order = [None, None]
        pagination = result_count = None

    return render_title_template('search.html',
                                 searchterm=term,
                                 pagination=pagination,
                                 query=term,
                                 adv_searchterm=term,
                                 entries=entries,
                                 result_count=result_count,
                                 title=_("Search"),
                                 page="search",
                                 order=order[1])

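`render_search_results` builds the join arguments as a tuple and splats them into `get_search_results` with `*join`. A minimal sketch of that argument-forwarding pattern (the function below is illustrative, not the real API):

```python
def get_results(term, *join):
    # a real implementation would apply q.outerjoin(*join) when join args exist
    return {"term": term, "join_args": len(join)}


join = ("books_series_link", "Books.id == books_series_link.c.book", "Series")
with_join = get_results("dune", *join)   # join_args == 3
without = get_results("dune")            # join_args == 0
```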
@@ -0,0 +1,143 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2021 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import concurrent.futures
import importlib
import inspect
import json
import os
import sys

from flask import Blueprint, Response, request, url_for
from flask_login import current_user
from flask_login import login_required
from flask_babel import get_locale
from sqlalchemy.exc import InvalidRequestError, OperationalError
from sqlalchemy.orm.attributes import flag_modified

from cps.services.Metadata import Metadata
from . import constants, logger, ub, web_server

# current_milli_time = lambda: int(round(time() * 1000))

meta = Blueprint("metadata", __name__)

log = logger.create()

try:
    from dataclasses import asdict
except ImportError:
    log.info('*** "dataclasses" is needed for calibre-web to run. Please install it using pip: "pip install dataclasses" ***')
    print('*** "dataclasses" is needed for calibre-web to run. Please install it using pip: "pip install dataclasses" ***')
    web_server.stop(True)
    sys.exit(6)

new_list = list()
meta_dir = os.path.join(constants.BASE_DIR, "cps", "metadata_provider")
modules = os.listdir(os.path.join(constants.BASE_DIR, "cps", "metadata_provider"))
for f in modules:
    if os.path.isfile(os.path.join(meta_dir, f)) and not f.endswith("__init__.py"):
        a = os.path.basename(f)[:-3]
        try:
            importlib.import_module("cps.metadata_provider." + a)
            new_list.append(a)
        except (IndentationError, SyntaxError) as e:
            log.error("Syntax error for metadata source: {} - {}".format(a, e))
        except ImportError as e:
            log.debug("Import error for metadata source: {} - {}".format(a, e))


def list_classes(provider_list):
    classes = list()
    for element in provider_list:
        for name, obj in inspect.getmembers(
            sys.modules["cps.metadata_provider." + element]
        ):
            if (
                inspect.isclass(obj)
                and name != "Metadata"
                and issubclass(obj, Metadata)
            ):
                classes.append(obj())
    return classes


cl = list_classes(new_list)


@meta.route("/metadata/provider")
@login_required
def metadata_provider():
    active = current_user.view_settings.get("metadata", {})
    provider = list()
    for c in cl:
        ac = active.get(c.__id__, True)
        provider.append(
            {"name": c.__name__, "active": ac, "initial": ac, "id": c.__id__}
        )
    return Response(json.dumps(provider), mimetype="application/json")


@meta.route("/metadata/provider", methods=["POST"])
@meta.route("/metadata/provider/<prov_name>", methods=["POST"])
@login_required
def metadata_change_active_provider(prov_name):
    new_state = request.get_json()
    active = current_user.view_settings.get("metadata", {})
    active[new_state["id"]] = new_state["value"]
    current_user.view_settings["metadata"] = active
    try:
        try:
            flag_modified(current_user, "view_settings")
        except AttributeError:
            pass
        ub.session.commit()
    except (InvalidRequestError, OperationalError):
        log.error("Invalid request received: {}".format(request))
        return "Invalid request", 400
    if "initial" in new_state and prov_name:
        data = []
        provider = next((c for c in cl if c.__id__ == prov_name), None)
        if provider is not None:
            data = provider.search(new_state.get("query", ""))
        return Response(
            json.dumps([asdict(x) for x in data]), mimetype="application/json"
        )
    return ""


@meta.route("/metadata/search", methods=["POST"])
@login_required
def metadata_search():
    query = request.form.to_dict().get("query")
    data = list()
    active = current_user.view_settings.get("metadata", {})
    locale = get_locale()
    if query:
        static_cover = url_for("static", filename="generic_cover.jpg")
        # start = current_milli_time()
        with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
            meta = {
                executor.submit(c.search, query, static_cover, locale): c
                for c in cl
                if active.get(c.__id__, True)
            }
            for future in concurrent.futures.as_completed(meta):
                data.extend([asdict(x) for x in future.result() if x])
        # log.info({'Time elapsed {}'.format(current_milli_time()-start)})
    return Response(json.dumps(data), mimetype="application/json")
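The loader above instantiates every `Metadata` subclass it finds in `cps/metadata_provider`, and the routes rely on each instance exposing `__name__`, `__id__` and a `search(query, generic_cover, locale)` method whose results are dataclasses serialized with `asdict()`. A minimal sketch of such a provider, with a stand-in base class and a hypothetical record type so the snippet is self-contained (the real base lives in `cps.services.Metadata`):

```python
from dataclasses import asdict, dataclass


class Metadata:
    """Stand-in for cps.services.Metadata, just enough for this sketch."""
    __name__ = "Generic"
    __id__ = "generic"

    def search(self, query, generic_cover="", locale="en"):
        raise NotImplementedError


@dataclass
class MetaRecord:
    # hypothetical result record; the blueprint serializes results with asdict()
    id: str
    title: str
    authors: list


class DummyProvider(Metadata):
    __name__ = "Dummy"
    __id__ = "dummy"

    def search(self, query, generic_cover="", locale="en"):
        # a real provider would query an external catalog here
        return [MetaRecord(id="1", title=query.title(), authors=["Unknown"])]


provider = DummyProvider()
payload = [asdict(r) for r in provider.search("dune")]
# payload[0] -> {'id': '1', 'title': 'Dune', 'authors': ['Unknown']}
```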
411
cps/server.py
|
@ -1,4 +1,3 @@
|
||||||
#!/usr/bin/env python
|
|
||||||
# -*- coding: utf-8 -*-
|
# -*- coding: utf-8 -*-
|
||||||
|
|
||||||
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
|
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
|
||||||
|
@ -17,148 +16,316 @@
|
||||||
# You should have received a copy of the GNU General Public License
|
# You should have received a copy of the GNU General Public License
|
||||||
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
|
|
||||||
from socket import error as SocketError
|
|
||||||
import sys
|
import sys
|
||||||
import os
|
import os
|
||||||
|
import errno
|
||||||
import signal
|
import signal
|
||||||
import web
|
import socket
|
||||||
|
import asyncio
|
||||||
|
|
||||||
try:
|
try:
|
||||||
from gevent.pywsgi import WSGIServer
|
from gevent.pywsgi import WSGIServer
|
||||||
|
from .gevent_wsgi import MyWSGIHandler
|
||||||
from gevent.pool import Pool
|
from gevent.pool import Pool
|
||||||
from gevent import __version__ as geventVersion
|
from gevent.socket import socket as GeventSocket
|
||||||
gevent_present = True
|
from gevent import __version__ as _version
|
||||||
|
from greenlet import GreenletExit
|
||||||
|
import ssl
|
||||||
|
VERSION = 'Gevent ' + _version
|
||||||
|
_GEVENT = True
|
||||||
except ImportError:
|
except ImportError:
|
||||||
from tornado.wsgi import WSGIContainer
|
from .tornado_wsgi import MyWSGIContainer
|
||||||
from tornado.httpserver import HTTPServer
|
from tornado.httpserver import HTTPServer
|
||||||
from tornado.ioloop import IOLoop
|
from tornado.ioloop import IOLoop
|
||||||
from tornado import version as tornadoVersion
|
from tornado import netutil
|
||||||
gevent_present = False
|
from tornado import version as _version
|
||||||
|
VERSION = 'Tornado ' + _version
|
||||||
|
_GEVENT = False
|
||||||
|
|
||||||
|
from . import logger
|
||||||
|
|
||||||
|
|
||||||
|
log = logger.create()
|
||||||
|
|
||||||
class server:
|
|
||||||
|
|
||||||
wsgiserver = None
|
def _readable_listen_address(address, port):
|
||||||
restart= False
|
if ':' in address:
|
||||||
|
address = "[" + address + "]"
|
||||||
|
return '%s:%s' % (address, port)
|
||||||
|
|
||||||
|
|
||||||
|
class WebServer(object):
|
||||||
|
|
||||||
def __init__(self):
|
def __init__(self):
|
||||||
signal.signal(signal.SIGINT, self.killServer)
|
signal.signal(signal.SIGINT, self._killServer)
|
||||||
signal.signal(signal.SIGTERM, self.killServer)
|
signal.signal(signal.SIGTERM, self._killServer)
|
||||||
|
|
||||||
|
self.wsgiserver = None
|
||||||
|
self.access_logger = None
|
||||||
|
self.restart = False
|
||||||
|
self.app = None
|
||||||
|
self.listen_address = None
|
||||||
|
self.listen_port = None
|
||||||
|
self.unix_socket_file = None
|
||||||
|
self.ssl_args = None
|
||||||
|
|
||||||
|
def init_app(self, application, config):
|
||||||
|
self.app = application
|
||||||
|
self.listen_address = config.get_config_ipaddress()
|
||||||
|
self.listen_port = config.config_port
|
||||||
|
|
||||||
|
if config.config_access_log:
|
||||||
|
log_name = "gevent.access" if _GEVENT else "tornado.access"
|
||||||
|
formatter = logger.ACCESS_FORMATTER_GEVENT if _GEVENT else logger.ACCESS_FORMATTER_TORNADO
|
||||||
|
self.access_logger, logfile = logger.create_access_log(config.config_access_logfile, log_name, formatter)
|
||||||
|
if logfile != config.config_access_logfile:
|
||||||
|
log.warning("Accesslog path %s not valid, falling back to default", config.config_access_logfile)
|
||||||
|
config.config_access_logfile = logfile
|
||||||
|
config.save()
|
||||||
|
else:
|
||||||
|
if not _GEVENT:
|
||||||
|
logger.get('tornado.access').disabled = True
|
||||||
|
|
||||||
|
certfile_path = config.get_config_certfile()
|
||||||
|
keyfile_path = config.get_config_keyfile()
|
||||||
|
if certfile_path and keyfile_path:
|
||||||
|
if os.path.isfile(certfile_path) and os.path.isfile(keyfile_path):
|
||||||
|
self.ssl_args = dict(certfile=certfile_path, keyfile=keyfile_path)
|
||||||
|
else:
|
||||||
|
log.warning('The specified paths for the ssl certificate file and/or key file seem to be broken. '
|
||||||
|
'Ignoring ssl.')
|
||||||
|
log.warning('Cert path: %s', certfile_path)
|
||||||
|
log.warning('Key path: %s', keyfile_path)
|
||||||
|
|
||||||
|
def _make_gevent_socket_activated(self):
|
||||||
|
# Reuse an already open socket on fd=SD_LISTEN_FDS_START
|
||||||
|
SD_LISTEN_FDS_START = 3
|
||||||
|
return GeventSocket(fileno=SD_LISTEN_FDS_START)
|
||||||
|
|
||||||
|
def _prepare_unix_socket(self, socket_file):
|
||||||
|
# the socket file must not exist prior to bind()
|
||||||
|
if os.path.exists(socket_file):
|
||||||
|
# avoid nuking regular files and symbolic links (could be a mistype or security issue)
|
||||||
|
if os.path.isfile(socket_file) or os.path.islink(socket_file):
|
||||||
|
raise OSError(errno.EEXIST, os.strerror(errno.EEXIST), socket_file)
|
||||||
|
os.remove(socket_file)
|
||||||
|
|
||||||
|
self.unix_socket_file = socket_file
|
||||||
|
|
||||||
|
def _make_gevent_listener(self):
|
||||||
|
if os.name != 'nt':
|
||||||
|
socket_activated = os.environ.get("LISTEN_FDS")
|
||||||
|
if socket_activated:
|
||||||
|
sock = self._make_gevent_socket_activated()
|
||||||
|
sock_info = sock.getsockname()
|
||||||
|
return sock, "systemd-socket:" + _readable_listen_address(sock_info[0], sock_info[1])
|
||||||
|
unix_socket_file = os.environ.get("CALIBRE_UNIX_SOCKET")
|
||||||
|
if unix_socket_file:
|
||||||
|
self._prepare_unix_socket(unix_socket_file)
|
||||||
|
unix_sock = WSGIServer.get_listener(unix_socket_file, family=socket.AF_UNIX)
|
||||||
|
# ensure current user and group have r/w permissions, no permissions for other users
|
||||||
|
# this way the socket can be shared in a semi-secure manner
|
||||||
|
# between the user running calibre-web and the user running the fronting webserver
|
||||||
|
os.chmod(unix_socket_file, 0o660)
|
||||||
|
|
||||||
|
return unix_sock, "unix:" + unix_socket_file
|
||||||
|
|
||||||
|
if self.listen_address:
|
||||||
|
return ((self.listen_address, self.listen_port),
|
||||||
|
_readable_listen_address(self.listen_address, self.listen_port))
|
||||||
|
|
||||||
|
if os.name == 'nt':
|
||||||
|
self.listen_address = '0.0.0.0'
|
||||||
|
return ((self.listen_address, self.listen_port),
|
||||||
|
_readable_listen_address(self.listen_address, self.listen_port))
|
||||||
|
|
||||||
def start_gevent(self):
|
|
||||||
try:
|
try:
|
||||||
ssl_args = dict()
|
address = ('::', self.listen_port)
|
||||||
certfile_path = web.ub.config.get_config_certfile()
|
sock = WSGIServer.get_listener(address, family=socket.AF_INET6)
|
||||||
keyfile_path = web.ub.config.get_config_keyfile()
|
except socket.error as ex:
|
||||||
if certfile_path and keyfile_path:
|
log.error('%s', ex)
|
||||||
if os.path.isfile(certfile_path) and os.path.isfile(keyfile_path):
|
log.warning('Unable to listen on {}, trying on IPv4 only...'.format(address))
|
||||||
ssl_args = {"certfile": certfile_path,
|
address = ('', self.listen_port)
|
||||||
"keyfile": keyfile_path}
|
sock = WSGIServer.get_listener(address, family=socket.AF_INET)
|
||||||
else:
|
|
||||||
web.app.logger.info('The specified paths for the ssl certificate file and/or key file seem to be broken. Ignoring ssl. Cert path: %s | Key path: %s' % (certfile_path, keyfile_path))
|
|
||||||
if os.name == 'nt':
|
|
||||||
self.wsgiserver= WSGIServer(('0.0.0.0', web.ub.config.config_port), web.app, spawn=Pool(), **ssl_args)
|
|
||||||
else:
|
|
||||||
self.wsgiserver = WSGIServer(('', web.ub.config.config_port), web.app, spawn=Pool(), **ssl_args)
|
|
||||||
web.py3_gevent_link = self.wsgiserver
|
|
||||||
self.wsgiserver.serve_forever()
|
|
||||||
|
|
||||||
except SocketError:
|
return sock, _readable_listen_address(*address)
|
||||||
try:
|
|
||||||
web.app.logger.info('Unable to listen on \'\', trying on IPv4 only...')
|
|
||||||
self.wsgiserver = WSGIServer(('0.0.0.0', web.ub.config.config_port), web.app, spawn=Pool(), **ssl_args)
|
|
||||||
web.py3_gevent_link = self.wsgiserver
|
|
||||||
self.wsgiserver.serve_forever()
|
|
||||||
except (OSError, SocketError) as e:
|
|
||||||
web.app.logger.info("Error starting server: %s" % e.strerror)
|
|
||||||
print("Error starting server: %s" % e.strerror)
|
|
||||||
web.helper.global_WorkerThread.stop()
|
|
||||||
sys.exit(1)
|
|
||||||
except Exception:
|
|
||||||
web.app.logger.info("Unknown error while starting gevent")
|
|
||||||
|
|
||||||
def startServer(self):
|
|
||||||
if gevent_present:
|
|
||||||
web.app.logger.info('Starting Gevent server')
|
|
||||||
# leave subprocess out to allow forking for fetchers and processors
|
|
||||||
self.start_gevent()
|
|
||||||
else:
|
|
||||||
try:
|
|
||||||
ssl = None
|
|
||||||
web.app.logger.info('Starting Tornado server')
|
|
||||||
certfile_path = web.ub.config.get_config_certfile()
|
|
||||||
keyfile_path = web.ub.config.get_config_keyfile()
|
|
||||||
if certfile_path and keyfile_path:
|
|
||||||
if os.path.isfile(certfile_path) and os.path.isfile(keyfile_path):
|
|
||||||
ssl = {"certfile": certfile_path,
|
|
||||||
"keyfile": keyfile_path}
|
|
||||||
else:
|
|
||||||
web.app.logger.info('The specified paths for the ssl certificate file and/or key file seem to be broken. Ignoring ssl. Cert path: %s | Key path: %s' % (certfile_path, keyfile_path))
|
|
||||||
|
|
||||||
# Max Buffersize set to 200MB
|
|
||||||
http_server = HTTPServer(WSGIContainer(web.app),
|
|
||||||
max_buffer_size = 209700000,
|
|
||||||
ssl_options=ssl)
|
|
||||||
http_server.listen(web.ub.config.config_port)
|
|
||||||
self.wsgiserver=IOLoop.instance()
|
|
||||||
self.wsgiserver.start()
|
|
||||||
# wait for stop signal
|
|
||||||
self.wsgiserver.close(True)
|
|
||||||
except SocketError as e:
|
|
||||||
web.app.logger.info("Error starting server: %s" % e.strerror)
|
|
||||||
print("Error starting server: %s" % e.strerror)
|
|
||||||
web.helper.global_WorkerThread.stop()
|
|
||||||
sys.exit(1)
|
|
||||||
|
|
||||||
# ToDo: Somehow caused by circular import under python3 refactor
|
|
||||||
if sys.version_info > (3, 0):
|
|
||||||
self.restart = web.py3_restart_Typ
|
|
||||||
if self.restart == True:
|
|
||||||
web.app.logger.info("Performing restart of Calibre-Web")
|
|
||||||
web.helper.global_WorkerThread.stop()
|
|
||||||
if os.name == 'nt':
|
|
||||||
arguments = ["\"" + sys.executable + "\""]
|
|
||||||
for e in sys.argv:
|
|
||||||
arguments.append("\"" + e + "\"")
|
|
||||||
os.execv(sys.executable, arguments)
|
|
||||||
else:
|
|
||||||
os.execl(sys.executable, sys.executable, *sys.argv)
|
|
||||||
else:
|
|
||||||
web.app.logger.info("Performing shutdown of Calibre-Web")
|
|
||||||
web.helper.global_WorkerThread.stop()
|
|
||||||
sys.exit(0)
|
|
||||||
|
|
||||||
def setRestartTyp(self,starttyp):
|
|
||||||
self.restart = starttyp
|
|
||||||
# ToDo: Somehow caused by circular import under python3 refactor
|
|
||||||
web.py3_restart_Typ = starttyp
|
|
||||||
|
|
||||||
def killServer(self, signum, frame):
|
|
||||||
self.stopServer()
|
|
||||||
|
|
||||||
def stopServer(self):
|
|
||||||
# ToDo: Somehow caused by circular import under python3 refactor
|
|
||||||
if sys.version_info > (3, 0):
|
|
||||||
if not self.wsgiserver:
|
|
||||||
if gevent_present:
|
|
||||||
self.wsgiserver = web.py3_gevent_link
|
|
||||||
else:
|
|
||||||
self.wsgiserver = IOLoop.instance()
|
|
||||||
if self.wsgiserver:
|
|
||||||
if gevent_present:
|
|
||||||
self.wsgiserver.close()
|
|
||||||
else:
|
|
||||||
self.wsgiserver.add_callback(self.wsgiserver.stop)
|
|
||||||
|
|
||||||
@staticmethod
|
@staticmethod
|
||||||
def getNameVersion():
|
def _get_args_for_reloading():
|
||||||
if gevent_present:
|
"""Determine how the script was executed, and return the args needed
|
||||||
return {'Gevent':'v'+geventVersion}
|
to execute it again in a new process.
|
||||||
|
Code from https://github.com/pyload/pyload. Author GammaC0de, voulter
|
||||||
|
"""
|
||||||
|
rv = [sys.executable]
|
||||||
|
py_script = sys.argv[0]
|
||||||
|
args = sys.argv[1:]
|
||||||
|
# Need to look at main module to determine how it was executed.
|
||||||
|
__main__ = sys.modules["__main__"]
|
||||||
|
|
||||||
|
# The value of __package__ indicates how Python was called. It may
|
||||||
|
# not exist if a setuptools script is installed as an egg. It may be
|
||||||
|
# set incorrectly for entry points created with pip on Windows.
|
||||||
|
if getattr(__main__, "__package__", "") in ["", None] or (
|
||||||
|
os.name == "nt"
|
||||||
|
and __main__.__package__ == ""
|
||||||
|
and not os.path.exists(py_script)
|
||||||
|
and os.path.exists("{}.exe".format(py_script))
|
||||||
|
):
|
||||||
|
# Executed a file, like "python app.py".
|
||||||
|
py_script = os.path.abspath(py_script)
|
||||||
|
|
||||||
|
if os.name == "nt":
|
||||||
|
# Windows entry points have ".exe" extension and should be
|
||||||
|
# called directly.
|
||||||
|
if not os.path.exists(py_script) and os.path.exists("{}.exe".format(py_script)):
|
||||||
|
py_script += ".exe"
|
||||||
|
|
||||||
|
if (
|
||||||
|
os.path.splitext(sys.executable)[1] == ".exe"
|
||||||
|
and os.path.splitext(py_script)[1] == ".exe"
|
||||||
|
):
|
||||||
|
rv.pop(0)
|
||||||
|
|
||||||
|
rv.append(py_script)
|
||||||
else:
|
else:
|
||||||
return {'Tornado':'v'+tornadoVersion}
|
# Executed a module, like "python -m module".
|
||||||
|
if sys.argv[0] == "-m":
|
||||||
|
args = sys.argv
|
||||||
|
else:
|
||||||
|
if os.path.isfile(py_script):
|
||||||
|
# Rewritten by Python from "-m script" to "/path/to/script.py".
|
||||||
|
py_module = __main__.__package__
|
||||||
|
name = os.path.splitext(os.path.basename(py_script))[0]
|
||||||
|
|
||||||
|
if name != "__main__":
|
||||||
|
py_module += ".{}".format(name)
|
||||||
|
else:
|
||||||
|
# Incorrectly rewritten by pydevd debugger from "-m script" to "script".
|
||||||
|
py_module = py_script
|
||||||
|
|
||||||
|
rv.extend(("-m", py_module.lstrip(".")))
|
||||||
|
|
||||||
|
rv.extend(args)
|
||||||
|
if os.name == 'nt':
|
||||||
|
rv = ['"{}"'.format(a) for a in rv]
|
||||||
|
return rv
|
||||||
|

def _start_gevent(self):
    ssl_args = self.ssl_args or {}

    try:
        sock, output = self._make_gevent_listener()
        log.info('Starting Gevent server on %s', output)
        self.wsgiserver = WSGIServer(sock, self.app, log=self.access_logger, handler_class=MyWSGIHandler,
                                     error_log=log,
                                     spawn=Pool(), **ssl_args)
        if ssl_args:
            wrap_socket = self.wsgiserver.wrap_socket

            def my_wrap_socket(*args, **kwargs):
                try:
                    return wrap_socket(*args, **kwargs)
                except (ssl.SSLError, OSError) as ex:
                    log.warning('Gevent SSL Error: %s', ex)
                    raise GreenletExit

            self.wsgiserver.wrap_socket = my_wrap_socket
        self.wsgiserver.serve_forever()
    finally:
        if self.unix_socket_file:
            os.remove(self.unix_socket_file)
            self.unix_socket_file = None

def _start_tornado(self):
    if os.name == 'nt' and sys.version_info > (3, 7):
        import asyncio
        asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
    try:
        # Max buffer size set to 200MB
        http_server = HTTPServer(MyWSGIContainer(self.app),
                                 max_buffer_size=209700000,
                                 ssl_options=self.ssl_args)

        unix_socket_file = os.environ.get("CALIBRE_UNIX_SOCKET")
        if os.environ.get("LISTEN_FDS") and os.name != 'nt':
            SD_LISTEN_FDS_START = 3
            sock = socket.socket(fileno=SD_LISTEN_FDS_START)
            http_server.add_socket(sock)
            sock.setblocking(0)
            socket_name = sock.getsockname()
            output = "systemd-socket:" + _readable_listen_address(socket_name[0], socket_name[1])
        elif unix_socket_file and os.name != 'nt':
            self._prepare_unix_socket(unix_socket_file)
            output = "unix:" + unix_socket_file
            unix_socket = netutil.bind_unix_socket(self.unix_socket_file)
            http_server.add_socket(unix_socket)
            # ensure current user and group have r/w permissions, no permissions for other users
            # this way the socket can be shared in a semi-secure manner
            # between the user running calibre-web and the user running the fronting webserver
            os.chmod(self.unix_socket_file, 0o660)
        else:
            output = _readable_listen_address(self.listen_address, self.listen_port)
            http_server.listen(self.listen_port, self.listen_address)
        log.info('Starting Tornado server on %s', output)

        self.wsgiserver = IOLoop.current()
        self.wsgiserver.start()
        # wait for stop signal
        self.wsgiserver.close(True)
    finally:
        if self.unix_socket_file:
            os.remove(self.unix_socket_file)
            self.unix_socket_file = None

def start(self):
    try:
        if _GEVENT:
            # leave subprocess out to allow forking for fetchers and processors
            self._start_gevent()
        else:
            self._start_tornado()
    except Exception as ex:
        log.error("Error starting server: %s", ex)
        print("Error starting server: %s" % ex)
        self.stop()
        return False
    finally:
        self.wsgiserver = None

    # prevent irritating log of pending tasks message from asyncio
    logger.get('asyncio').setLevel(logger.logging.CRITICAL)

    if not self.restart:
        log.info("Performing shutdown of Calibre-Web")
        return True

    log.info("Performing restart of Calibre-Web")
    args = self._get_args_for_reloading()
    os.execv(args[0].lstrip('"').rstrip('"'), args)
    return True

@staticmethod
def shutdown_scheduler():
    from .services.background_scheduler import BackgroundScheduler
    scheduler = BackgroundScheduler()
    if scheduler:
        scheduler.scheduler.shutdown()

def _killServer(self, __, ___):
    self.stop()

def stop(self, restart=False):
    from . import updater_thread
    updater_thread.stop()

    log.info("webserver stop (restart=%s)", restart)
    self.shutdown_scheduler()
    self.restart = restart
    if self.wsgiserver:
        if _GEVENT:
            self.wsgiserver.close()
        else:
            if restart:
                self.wsgiserver.call_later(1.0, self.wsgiserver.stop)
            else:
                self.wsgiserver.asyncio_loop.call_soon_threadsafe(self.wsgiserver.stop)


# Start Instance of Server
Server = server()

@@ -0,0 +1,111 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2021 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import abc
import dataclasses
import os
import re
from typing import Dict, Generator, List, Optional, Union

from cps import constants


@dataclasses.dataclass
class MetaSourceInfo:
    id: str
    description: str
    link: str


@dataclasses.dataclass
class MetaRecord:
    id: Union[str, int]
    title: str
    authors: List[str]
    url: str
    source: MetaSourceInfo
    cover: str = os.path.join(constants.STATIC_DIR, 'generic_cover.jpg')
    description: Optional[str] = ""
    series: Optional[str] = None
    series_index: Optional[Union[int, float]] = 0
    identifiers: Dict[str, Union[str, int]] = dataclasses.field(default_factory=dict)
    publisher: Optional[str] = None
    publishedDate: Optional[str] = None
    rating: Optional[int] = 0
    languages: Optional[List[str]] = dataclasses.field(default_factory=list)
    tags: Optional[List[str]] = dataclasses.field(default_factory=list)


class Metadata:
    __name__ = "Generic"
    __id__ = "generic"

    def __init__(self):
        self.active = True

    def set_status(self, state):
        self.active = state

    @abc.abstractmethod
    def search(
        self, query: str, generic_cover: str = "", locale: str = "en"
    ) -> Optional[List[MetaRecord]]:
        pass

    @staticmethod
    def get_title_tokens(
        title: str, strip_joiners: bool = True
    ) -> Generator[str, None, None]:
        """
        Taken from calibre source code
        It's a simplified (cut out what is unnecessary) version of
        https://github.com/kovidgoyal/calibre/blob/99d85b97918625d172227c8ffb7e0c71794966c0/
        src/calibre/ebooks/metadata/sources/base.py#L363-L367
        (src/calibre/ebooks/metadata/sources/base.py - lines 363-398)
        """
        title_patterns = [
            (re.compile(pat, re.IGNORECASE), repl)
            for pat, repl in [
                # Remove things like: (2010) (Omnibus) etc.
                (
                    r"(?i)[({\[](\d{4}|omnibus|anthology|hardcover|"
                    r"audiobook|audio\scd|paperback|turtleback|"
                    r"mass\s*market|edition|ed\.)[\])}]",
                    "",
                ),
                # Remove any strings that contain the substring edition inside
                # parentheses
                (r"(?i)[({\[].*?(edition|ed.).*?[\]})]", ""),
                # Remove commas used as separators in numbers
                (r"(\d+),(\d+)", r"\1\2"),
                # Remove hyphens only if they have whitespace before them
                (r"(\s-)", " "),
                # Replace other special chars with a space
                (r"""[:,;!@$%^&*(){}.`~"\s\[\]/]《》「」“”""", " "),
            ]
        ]

        for pat, repl in title_patterns:
            title = pat.sub(repl, title)

        tokens = title.split()
        for token in tokens:
            token = token.strip().strip('"').strip("'")
            if token and (
                not strip_joiners or token.lower() not in ("a", "and", "the", "&")
            ):
                yield token
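The regex table in get_title_tokens() drives a simple normalize-then-split tokenizer. A minimal standalone sketch of two of those rules (collapsing thousands-separator commas and stripping joiner words), independent of the class above:

```python
import re

def title_tokens(title, strip_joiners=True):
    # Re-implements just the two simplest rules: join "1,000" into "1000",
    # drop whitespace-preceded hyphens, then split and discard joiner words.
    title = re.sub(r"(\d+),(\d+)", r"\1\2", title)
    title = re.sub(r"(\s-)", " ", title)
    for token in title.split():
        token = token.strip().strip('"').strip("'")
        if token and (not strip_joiners or token.lower() not in ("a", "and", "the", "&")):
            yield token

print(list(title_tokens('The Lord of the Rings - 1,000 Pages')))
# → ['Lord', 'of', 'Rings', '1000', 'Pages']
```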
@@ -0,0 +1,180 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 shavitmichael, OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import sys
from base64 import b64decode, b64encode
from jsonschema import validate, exceptions
from datetime import datetime

from flask import json
from .. import logger


log = logger.create()


def b64encode_json(json_data):
    return b64encode(json.dumps(json_data).encode()).decode("utf-8")


# Python3 has a timestamp() method we could be calling, however it's not available in python2.
def to_epoch_timestamp(datetime_object):
    return (datetime_object - datetime(1970, 1, 1)).total_seconds()


def get_datetime_from_json(json_object, field_name):
    try:
        return datetime.utcfromtimestamp(json_object[field_name])
    except (KeyError, OSError, OverflowError):
        # OSError is thrown on Windows if timestamp is <1970 or >2038
        return datetime.min


class SyncToken:
    """ The SyncToken is used to persist state across requests.
    When serialized over the response headers, the Kobo device will propagate the token onto following
    requests to the service. As an example use-case, the SyncToken is used to detect books that have been added
    to the library since the last time the device synced to the server.

    Attributes:
        books_last_created: Datetime representing the newest book that the device knows about.
        books_last_modified: Datetime representing the last modified book that the device knows about.
    """

    SYNC_TOKEN_HEADER = "x-kobo-synctoken"  # nosec
    VERSION = "1-1-0"
    LAST_MODIFIED_ADDED_VERSION = "1-1-0"
    MIN_VERSION = "1-0-0"

    token_schema = {
        "type": "object",
        "properties": {"version": {"type": "string"}, "data": {"type": "object"}, },
    }
    # This Schema doesn't contain enough information to detect and propagate book deletions from Calibre to the device.
    # A potential solution might be to keep a list of all known book uuids in the token, and look for any missing
    # from the db.
    data_schema_v1 = {
        "type": "object",
        "properties": {
            "raw_kobo_store_token": {"type": "string"},
            "books_last_modified": {"type": "string"},
            "books_last_created": {"type": "string"},
            "archive_last_modified": {"type": "string"},
            "reading_state_last_modified": {"type": "string"},
            "tags_last_modified": {"type": "string"}
            # "books_last_id": {"type": "integer", "optional": True}
        },
    }

    def __init__(
        self,
        raw_kobo_store_token="",
        books_last_created=datetime.min,
        books_last_modified=datetime.min,
        archive_last_modified=datetime.min,
        reading_state_last_modified=datetime.min,
        tags_last_modified=datetime.min
        # books_last_id=-1
    ):  # nosec
        self.raw_kobo_store_token = raw_kobo_store_token
        self.books_last_created = books_last_created
        self.books_last_modified = books_last_modified
        self.archive_last_modified = archive_last_modified
        self.reading_state_last_modified = reading_state_last_modified
        self.tags_last_modified = tags_last_modified
        # self.books_last_id = books_last_id

    @staticmethod
    def from_headers(headers):
        sync_token_header = headers.get(SyncToken.SYNC_TOKEN_HEADER, "")
        if sync_token_header == "":  # nosec
            return SyncToken()

        # On the first sync from a Kobo device, we may receive the SyncToken
        # from the official Kobo store. Without digging too deep into it, that
        # token is of the form [b64encoded blob].[b64encoded blob 2]
        if "." in sync_token_header:
            return SyncToken(raw_kobo_store_token=sync_token_header)

        try:
            sync_token_json = json.loads(
                b64decode(sync_token_header + "=" * (-len(sync_token_header) % 4))
            )
            validate(sync_token_json, SyncToken.token_schema)
            if sync_token_json["version"] < SyncToken.MIN_VERSION:
                raise ValueError

            data_json = sync_token_json["data"]
            validate(sync_token_json, SyncToken.data_schema_v1)
        except (exceptions.ValidationError, ValueError):
            log.error("Sync token contents do not follow the expected json schema.")
            return SyncToken()

        raw_kobo_store_token = data_json["raw_kobo_store_token"]
        try:
            books_last_modified = get_datetime_from_json(data_json, "books_last_modified")
            books_last_created = get_datetime_from_json(data_json, "books_last_created")
            archive_last_modified = get_datetime_from_json(data_json, "archive_last_modified")
            reading_state_last_modified = get_datetime_from_json(data_json, "reading_state_last_modified")
            tags_last_modified = get_datetime_from_json(data_json, "tags_last_modified")
        except TypeError:
            log.error("SyncToken timestamps don't parse to a datetime.")
            return SyncToken(raw_kobo_store_token=raw_kobo_store_token)

        return SyncToken(
            raw_kobo_store_token=raw_kobo_store_token,
            books_last_created=books_last_created,
            books_last_modified=books_last_modified,
            archive_last_modified=archive_last_modified,
            reading_state_last_modified=reading_state_last_modified,
            tags_last_modified=tags_last_modified,
        )

    def set_kobo_store_header(self, store_headers):
        store_headers.set(SyncToken.SYNC_TOKEN_HEADER, self.raw_kobo_store_token)

    def merge_from_store_response(self, store_response):
        self.raw_kobo_store_token = store_response.headers.get(
            SyncToken.SYNC_TOKEN_HEADER, ""
        )

    def to_headers(self, headers):
        headers[SyncToken.SYNC_TOKEN_HEADER] = self.build_sync_token()

    def build_sync_token(self):
        token = {
            "version": SyncToken.VERSION,
            "data": {
                "raw_kobo_store_token": self.raw_kobo_store_token,
                "books_last_modified": to_epoch_timestamp(self.books_last_modified),
                "books_last_created": to_epoch_timestamp(self.books_last_created),
                "archive_last_modified": to_epoch_timestamp(self.archive_last_modified),
                "reading_state_last_modified": to_epoch_timestamp(self.reading_state_last_modified),
                "tags_last_modified": to_epoch_timestamp(self.tags_last_modified),
            },
        }
        return b64encode_json(token)

    def __str__(self):
        return "{},{},{},{},{},{}".format(self.books_last_created,
                                          self.books_last_modified,
                                          self.archive_last_modified,
                                          self.reading_state_last_modified,
                                          self.tags_last_modified,
                                          self.raw_kobo_store_token)
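The token round trip can be sketched without Flask or jsonschema: build_sync_token() base64-encodes a JSON blob, and from_headers() re-pads and decodes it. A minimal stdlib-only sketch of that encode/decode cycle:

```python
import json
from base64 import b64encode, b64decode

def b64encode_json(data):
    # Same shape as b64encode_json() above: JSON -> bytes -> base64 -> str.
    return b64encode(json.dumps(data).encode()).decode("utf-8")

token = b64encode_json({"version": "1-1-0", "data": {"books_last_created": 0}})
# Decoding pads the string back to a multiple of 4, as from_headers() does
# for tokens whose padding was stripped in transit.
decoded = json.loads(b64decode(token + "=" * (-len(token) % 4)))
print(decoded["version"])  # → 1-1-0
```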
@@ -0,0 +1,50 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2019 pwr
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

from .. import logger

log = logger.create()

try:
    from . import goodreads_support
except ImportError as err:
    log.debug("Cannot import goodreads, showing authors-metadata will not work: %s", err)
    goodreads_support = None


try:
    from . import simpleldap as ldap
    from .simpleldap import ldapVersion
except ImportError as err:
    log.debug("Cannot import simpleldap, logging in with ldap will not work: %s", err)
    ldap = None
    ldapVersion = None

try:
    from . import SyncToken as SyncToken
    kobo = True
except ImportError as err:
    log.debug("Cannot import SyncToken, syncing books with Kobo Devices will not work: %s", err)
    kobo = None
    SyncToken = None

try:
    from . import gmail
except ImportError as err:
    log.debug("Cannot import gmail, sending books via Gmail Oauth2 Verification will not work: %s", err)
    gmail = None
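Each block above follows the same optional-dependency pattern: attempt the import, log the failure, and bind the name to None so callers can feature-test it. A minimal sketch using a deliberately nonexistent module name:

```python
import importlib

# Optional-dependency sketch: the module name is intentionally bogus, so
# the import fails and the feature flag falls back to None.
try:
    gmail = importlib.import_module("nonexistent_gmail_module")
except ImportError:
    gmail = None

print(gmail is None)  # → True
```

Callers then guard feature code with `if gmail is not None:` instead of importing at the call site.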
@@ -0,0 +1,84 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2020 mmonkey
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import atexit

from .. import logger
from .worker import WorkerThread

try:
    from apscheduler.schedulers.background import BackgroundScheduler as BScheduler
    from apscheduler.triggers.cron import CronTrigger
    from apscheduler.triggers.date import DateTrigger
    use_APScheduler = True
except (ImportError, RuntimeError) as e:
    use_APScheduler = False
    log = logger.create()
    log.info('APScheduler not found. Unable to schedule tasks.')


class BackgroundScheduler:
    _instance = None

    def __new__(cls):
        if not use_APScheduler:
            return False

        if cls._instance is None:
            cls._instance = super(BackgroundScheduler, cls).__new__(cls)
            cls.log = logger.create()
            cls.scheduler = BScheduler()
            cls.scheduler.start()

        return cls._instance

    def schedule(self, func, trigger, name=None):
        if use_APScheduler:
            return self.scheduler.add_job(func=func, trigger=trigger, name=name)

    # Expects a lambda expression for the task
    def schedule_task(self, task, user=None, name=None, hidden=False, trigger=None):
        if use_APScheduler:
            def scheduled_task():
                worker_task = task()
                worker_task.scheduled = True
                WorkerThread.add(user, worker_task, hidden=hidden)
            return self.schedule(func=scheduled_task, trigger=trigger, name=name)

    # Expects a list of lambda expressions for the tasks
    def schedule_tasks(self, tasks, user=None, trigger=None):
        if use_APScheduler:
            for task in tasks:
                self.schedule_task(task[0], user=user, trigger=trigger, name=task[1], hidden=task[2])

    # Expects a lambda expression for the task
    def schedule_task_immediately(self, task, user=None, name=None, hidden=False):
        if use_APScheduler:
            def immediate_task():
                WorkerThread.add(user, task(), hidden)
            return self.schedule(func=immediate_task, trigger=DateTrigger(), name=name)

    # Expects a list of lambda expressions for the tasks
    def schedule_tasks_immediately(self, tasks, user=None):
        if use_APScheduler:
            for task in tasks:
                self.schedule_task_immediately(task[0], user, name="immediately " + task[1], hidden=task[2])

    # Remove all jobs
    def remove_all_jobs(self):
        self.scheduler.remove_all_jobs()
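BackgroundScheduler relies on a __new__-based singleton so every caller shares one APScheduler instance. A minimal sketch of just that pattern, with the scheduler calls omitted:

```python
class Singleton:
    # __new__-based singleton, as in BackgroundScheduler above: the first
    # construction creates and initializes the instance, later constructions
    # return the same object.
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.jobs = []  # stand-in for starting the scheduler
        return cls._instance

a = Singleton()
b = Singleton()
print(a is b)  # → True
```

Initialization lives in __new__ rather than __init__ because __init__ would run again on every construction and reset shared state.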
@@ -0,0 +1,100 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2021 OzzieIsaacs
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import os.path
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from googleapiclient.discovery import build
from google.oauth2.credentials import Credentials

from datetime import datetime
import base64
from flask_babel import gettext as _
from ..constants import CONFIG_DIR
from .. import logger


log = logger.create()

SCOPES = ['openid', 'https://www.googleapis.com/auth/gmail.send', 'https://www.googleapis.com/auth/userinfo.email']

def setup_gmail(token):
    # If there are no (valid) credentials available, let the user log in.
    creds = None
    if "token" in token:
        creds = Credentials(
            token=token['token'],
            refresh_token=token['refresh_token'],
            token_uri=token['token_uri'],
            client_id=token['client_id'],
            client_secret=token['client_secret'],
            scopes=token['scopes'],
        )
        creds.expiry = datetime.fromisoformat(token['expiry'])

    if not creds or not creds.valid:
        # don't forget to dump one more time after the refresh
        # also, some file-locking routines wouldn't be needless
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            cred_file = os.path.join(CONFIG_DIR, 'gmail.json')
            if not os.path.exists(cred_file):
                raise Exception(_("Found no valid gmail.json file with OAuth information"))
            flow = InstalledAppFlow.from_client_secrets_file(
                os.path.join(CONFIG_DIR, 'gmail.json'), SCOPES)
            creds = flow.run_local_server(port=0)
        user_info = get_user_info(creds)
        return {
            'token': creds.token,
            'refresh_token': creds.refresh_token,
            'token_uri': creds.token_uri,
            'client_id': creds.client_id,
            'client_secret': creds.client_secret,
            'scopes': creds.scopes,
            'expiry': creds.expiry.isoformat(),
            'email': user_info
        }
    return {}

def get_user_info(credentials):
    user_info_service = build(serviceName='oauth2', version='v2', credentials=credentials)
    user_info = user_info_service.userinfo().get().execute()
    return user_info.get('email', "")

def send_messsage(token, msg):
    log.debug("Start sending e-mail via Gmail")
    creds = Credentials(
        token=token['token'],
        refresh_token=token['refresh_token'],
        token_uri=token['token_uri'],
        client_id=token['client_id'],
        client_secret=token['client_secret'],
        scopes=token['scopes'],
    )
    creds.expiry = datetime.fromisoformat(token['expiry'])
    if creds and creds.expired and creds.refresh_token:
        creds.refresh(Request())
    service = build('gmail', 'v1', credentials=creds)
    message_as_bytes = msg.as_bytes()  # the message should be converted from string to bytes
    message_as_base64 = base64.urlsafe_b64encode(message_as_bytes)  # encode in base64 (printable letters coding)
    raw = message_as_base64.decode()  # convert to something JSON serializable
    body = {'raw': raw}

    (service.users().messages().send(userId='me', body=body).execute())
    log.debug("E-mail sent successfully via Gmail")
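send_messsage() prepares the Gmail API request body by converting the MIME message to bytes and then to urlsafe base64 text. A stdlib-only sketch of that conversion, with hypothetical addresses:

```python
import base64
from email.message import EmailMessage

# Hypothetical addresses; shows how the Gmail API body is prepared above:
# message -> bytes -> urlsafe base64 -> JSON-serializable string.
msg = EmailMessage()
msg["To"] = "reader@example.com"
msg["From"] = "me@example.com"
msg["Subject"] = "Your book"
msg.set_content("Attached is your book.")

raw = base64.urlsafe_b64encode(msg.as_bytes()).decode()
body = {'raw': raw}
print(isinstance(body['raw'], str))  # → True
```

The urlsafe alphabet matters: Gmail's `raw` field expects `-` and `_` instead of `+` and `/`.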
@@ -0,0 +1,146 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 OzzieIsaacs, pwr
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import time
from functools import reduce
import requests

from goodreads.client import GoodreadsClient
from goodreads.request import GoodreadsRequest
import xmltodict

try:
    import Levenshtein
except ImportError:
    Levenshtein = False

from .. import logger
from ..clean_html import clean_string


class my_GoodreadsClient(GoodreadsClient):

    def request(self, *args, **kwargs):
        """Create a GoodreadsRequest object and make that request"""
        req = my_GoodreadsRequest(self, *args, **kwargs)
        return req.request()


class GoodreadsRequestException(Exception):
    def __init__(self, error_msg, url):
        self.error_msg = error_msg
        self.url = url

    def __str__(self):
        return "{}: {}".format(self.url, self.error_msg)


class my_GoodreadsRequest(GoodreadsRequest):

    def request(self):
        resp = requests.get(self.host+self.path, params=self.params,
                            headers={"User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:125.0) "
                                                   "Gecko/20100101 Firefox/125.0"})
        if resp.status_code != 200:
            raise GoodreadsRequestException(resp.reason, self.path)
        if self.req_format == 'xml':
            data_dict = xmltodict.parse(resp.content)
            return data_dict['GoodreadsResponse']
        else:
            raise Exception("Invalid format")


log = logger.create()
_client = None  # type: GoodreadsClient

# GoodReads TOS allows for 24h caching of data
_CACHE_TIMEOUT = 23 * 60 * 60  # 23 hours (in seconds)
_AUTHORS_CACHE = {}


def connect(key=None, enabled=True):
    global _client

    if not enabled or not key:
        _client = None
        return

    if _client:
        # make sure the configuration has not changed since last we used the client
        if _client.client_key != key:
            _client = None

    if not _client:
        _client = my_GoodreadsClient(key, None)


def get_author_info(author_name):
    now = time.time()
    author_info = _AUTHORS_CACHE.get(author_name, None)
    if author_info:
        if now < author_info._timestamp + _CACHE_TIMEOUT:
            return author_info
        # clear expired entries
        del _AUTHORS_CACHE[author_name]

    if not _client:
        log.warning("failed to get a Goodreads client")
        return

    try:
        author_info = _client.find_author(author_name=author_name)
    except Exception as ex:
        # Skip goodreads, if site is down/inaccessible
        log.warning('Goodreads website is down/inaccessible? %s', ex.__str__())
        return

    if author_info:
        author_info._timestamp = now
        author_info.safe_about = clean_string(author_info.about)
        _AUTHORS_CACHE[author_name] = author_info
    return author_info


def get_other_books(author_info, library_books=None):
    # Get all identifiers (ISBN, Goodreads, etc) and filter author's books by that list so we show fewer duplicates
    # Note: Not all images will be shown, even though they're available on Goodreads.com.
    # See https://www.goodreads.com/topic/show/18213769-goodreads-book-images

    if not author_info:
        return

    identifiers = []
    library_titles = []
    if library_books:
        identifiers = list(reduce(lambda acc, book: acc + [i.val for i in book.identifiers if i.val], library_books, []))
        library_titles = [book.title for book in library_books]

    for book in author_info.books:
        if book.isbn in identifiers:
            continue
        if isinstance(book.gid, int):
            if book.gid in identifiers:
                continue
        else:
            if book.gid["#text"] in identifiers:
                continue

        if Levenshtein and library_titles:
            goodreads_title = book._book_dict['title_without_series']
            if any(Levenshtein.ratio(goodreads_title, title) > 0.7 for title in library_titles):
                continue

        yield book
@@ -0,0 +1,167 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2018-2019 OzzieIsaacs, pwr
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import base64

from flask_simpleldap import LDAP, LDAPException
from flask_simpleldap import ldap as pyLDAP
from flask import current_app
from .. import constants, logger

try:
    from ldap.pkginfo import __version__ as ldapVersion
except ImportError:
    pass

log = logger.create()


class LDAPLogger(object):

    def write(self, message):
        try:
            log.debug(message.strip("\n").replace("\n", ""))
        except Exception:
            log.debug("Logging Error")


class mySimpleLDap(LDAP):

    @staticmethod
    def init_app(app):
        super(mySimpleLDap, mySimpleLDap).init_app(app)
        app.config.setdefault('LDAP_LOGLEVEL', 0)

    @property
    def initialize(self):
        """Initialize a connection to the LDAP server.

        :return: LDAP connection object.
        """
        try:
            log_level = 2 if current_app.config['LDAP_LOGLEVEL'] == logger.logging.DEBUG else 0
            conn = pyLDAP.initialize('{0}://{1}:{2}'.format(
                current_app.config['LDAP_SCHEMA'],
                current_app.config['LDAP_HOST'],
                current_app.config['LDAP_PORT']), trace_level=log_level, trace_file=LDAPLogger())
            conn.set_option(pyLDAP.OPT_NETWORK_TIMEOUT,
                            current_app.config['LDAP_TIMEOUT'])
            conn = self._set_custom_options(conn)
            conn.protocol_version = pyLDAP.VERSION3
            if current_app.config['LDAP_USE_TLS']:
                conn.start_tls_s()
            return conn
        except pyLDAP.LDAPError as e:
            raise LDAPException(self.error(e.args))


_ldap = mySimpleLDap()


def init_app(app, config):
    if config.config_login_type != constants.LOGIN_LDAP:
        return

    app.config['LDAP_HOST'] = config.config_ldap_provider_url
    app.config['LDAP_PORT'] = config.config_ldap_port
    app.config['LDAP_CUSTOM_OPTIONS'] = {pyLDAP.OPT_REFERRALS: 0}
    if config.config_ldap_encryption == 2:
        app.config['LDAP_SCHEMA'] = 'ldaps'
    else:
        app.config['LDAP_SCHEMA'] = 'ldap'
    if config.config_ldap_authentication > constants.LDAP_AUTH_ANONYMOUS:
        if config.config_ldap_authentication > constants.LDAP_AUTH_UNAUTHENTICATE:
            if config.config_ldap_serv_password_e is None:
                config.config_ldap_serv_password_e = ''
            app.config['LDAP_PASSWORD'] = config.config_ldap_serv_password_e
        else:
            app.config['LDAP_PASSWORD'] = ""
        app.config['LDAP_USERNAME'] = config.config_ldap_serv_username
    else:
        app.config['LDAP_USERNAME'] = ""
        app.config['LDAP_PASSWORD'] = ""
    if bool(config.config_ldap_cert_path):
        app.config['LDAP_CUSTOM_OPTIONS'].update({
            pyLDAP.OPT_X_TLS_REQUIRE_CERT: pyLDAP.OPT_X_TLS_DEMAND,
            pyLDAP.OPT_X_TLS_CACERTFILE: config.config_ldap_cacert_path,
            pyLDAP.OPT_X_TLS_CERTFILE: config.config_ldap_cert_path,
            pyLDAP.OPT_X_TLS_KEYFILE: config.config_ldap_key_path,
            pyLDAP.OPT_X_TLS_NEWCTX: 0
        })

    app.config['LDAP_BASE_DN'] = config.config_ldap_dn
    app.config['LDAP_USER_OBJECT_FILTER'] = config.config_ldap_user_object

    app.config['LDAP_USE_TLS'] = bool(config.config_ldap_encryption == 1)
    app.config['LDAP_USE_SSL'] = bool(config.config_ldap_encryption == 2)
    app.config['LDAP_OPENLDAP'] = bool(config.config_ldap_openldap)
    app.config['LDAP_GROUP_OBJECT_FILTER'] = config.config_ldap_group_object_filter
    app.config['LDAP_GROUP_MEMBERS_FIELD'] = config.config_ldap_group_members_field
    app.config['LDAP_LOGLEVEL'] = config.config_log_level
    try:
        _ldap.init_app(app)
    except ValueError:
        if bool(config.config_ldap_cert_path):
            app.config['LDAP_CUSTOM_OPTIONS'].pop(pyLDAP.OPT_X_TLS_NEWCTX)
        try:
            _ldap.init_app(app)
        except RuntimeError as e:
            log.error(e)
    except RuntimeError as e:
        log.error(e)


def get_object_details(user=None, query_filter=None):
    return _ldap.get_object_details(user, query_filter=query_filter)


def bind():
    return _ldap.bind()


def get_group_members(group):
    return _ldap.get_group_members(group)


def basic_auth_required(func):
    return _ldap.basic_auth_required(func)


def bind_user(username, password):
    '''Attempts an LDAP login.

    :returns: True if login succeeded, False if login failed, None if server unavailable.
    '''
    try:
        if _ldap.get_object_details(username):
            result = _ldap.bind_user(username, password)
            log.debug("LDAP login '%s': %r", username, result)
            return result is not None, None
        return None, None  # User not found
    except (TypeError, AttributeError, KeyError) as ex:
        error = ("LDAP bind_user: %s" % ex)
        return None, error
    except LDAPException as ex:
        if ex.message == 'Invalid credentials':
            error = "LDAP admin login failed"
            return None, error
        if ex.message == "Can't contact LDAP server":
            # log.warning('LDAP Server down: %s', ex)
            error = ('LDAP Server down: %s' % ex)
            return None, error
        else:
            error = ('LDAP Server error: %s' % ex.message)
            return None, error

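In `init_app` above, the single `config_ldap_encryption` value (0 = none, 1 = STARTTLS, 2 = LDAPS) fans out into three Flask config keys. A small standalone sketch of that mapping (the function name is illustrative, not part of the module):

```python
# Sketch of how init_app derives connection settings from the encryption mode:
# 0 = no encryption, 1 = STARTTLS on the plain port, 2 = LDAPS (TLS from the
# first byte, 'ldaps' URI scheme).
def ldap_connection_settings(encryption):
    return {
        'LDAP_SCHEMA': 'ldaps' if encryption == 2 else 'ldap',
        'LDAP_USE_TLS': encryption == 1,  # upgrade plain connection via start_tls_s()
        'LDAP_USE_SSL': encryption == 2,  # implicit TLS on the ldaps port
    }
```

Note that `LDAP_USE_TLS` and `LDAP_USE_SSL` are mutually exclusive: mode 1 triggers the `start_tls_s()` call in `initialize`, while mode 2 only changes the URI scheme.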
@@ -0,0 +1,271 @@
# -*- coding: utf-8 -*-

# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
# Copyright (C) 2020 pwr
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

import threading
import abc
import uuid
import time

try:
    import queue
except ImportError:
    import Queue as queue
from datetime import datetime
from collections import namedtuple

from cps import logger

log = logger.create()

# task 'status' consts
STAT_WAITING = 0
STAT_FAIL = 1
STAT_STARTED = 2
STAT_FINISH_SUCCESS = 3
STAT_ENDED = 4
STAT_CANCELLED = 5

# Only retain this many tasks in the dequeued list
TASK_CLEANUP_TRIGGER = 20

QueuedTask = namedtuple('QueuedTask', 'num, user, added, task, hidden')


def _get_main_thread():
    for t in threading.enumerate():
        if t.__class__.__name__ == '_MainThread':
            return t
    raise Exception("main thread not found?!")


class ImprovedQueue(queue.Queue):
    def to_list(self):
        """
        Returns a copy of all items in the queue without removing them.
        """
        with self.mutex:
            return list(self.queue)


# Class for all worker tasks in the background
class WorkerThread(threading.Thread):
    _instance = None

    @classmethod
    def get_instance(cls):
        if cls._instance is None:
            cls._instance = WorkerThread()
        return cls._instance

    def __init__(self):
        threading.Thread.__init__(self)

        self.dequeued = list()

        self.doLock = threading.Lock()
        self.queue = ImprovedQueue()
        self.num = 0
        self.start()

    @classmethod
    def add(cls, user, task, hidden=False):
        ins = cls.get_instance()
        ins.num += 1
        username = user if user is not None else 'System'
        log.debug("Add Task for user: {} - {}".format(username, task))
        ins.queue.put(QueuedTask(
            num=ins.num,
            user=username,
            added=datetime.now(),
            task=task,
            hidden=hidden
        ))

    @property
    def tasks(self):
        with self.doLock:
            tasks = self.queue.to_list() + self.dequeued
            return sorted(tasks, key=lambda x: x.num)

    def cleanup_tasks(self):
        with self.doLock:
            dead = []
            alive = []
            for x in self.dequeued:
                (dead if x.task.dead else alive).append(x)

            # if the ones that we need to keep are within the trigger, do nothing else
            delta = len(self.dequeued) - len(dead)
            if delta > TASK_CLEANUP_TRIGGER:
                ret = alive
            else:
                # otherwise, lop off the oldest dead tasks until we hit the target trigger
                ret = sorted(dead, key=lambda y: y.task.end_time)[-TASK_CLEANUP_TRIGGER:] + alive

            self.dequeued = sorted(ret, key=lambda y: y.num)

    # Main thread loop starting the different tasks
    def run(self):
        main_thread = _get_main_thread()
        while main_thread.is_alive():
            try:
                # this blocks until something is available. This can cause issues when the main thread dies - this
                # thread will remain alive. We implement a timeout to unblock every second which allows us to check if
                # the main thread is still alive.
                # We don't use a daemon here because we don't want the tasks to just be abruptly halted, leading to
                # possible file / database corruption
                item = self.queue.get(timeout=1)
            except queue.Empty:
                time.sleep(1)
                continue

            with self.doLock:
                # add to list so that in-progress tasks show up
                self.dequeued.append(item)

            # once we hit our trigger, start cleaning up dead tasks
            if len(self.dequeued) > TASK_CLEANUP_TRIGGER:
                self.cleanup_tasks()

            # sometimes tasks (like Upload) don't actually have work to do and are created as already finished
            if item.task.stat is STAT_WAITING:
                # CalibreTask.start() should wrap all exceptions in its own error handling
                item.task.start(self)

            # remove self_cleanup tasks and hidden "System Tasks" from list
            if item.task.self_cleanup or item.hidden:
                self.dequeued.remove(item)

            self.queue.task_done()

    def end_task(self, task_id):
        ins = self.get_instance()
        for __, __, __, task, __ in ins.tasks:
            if str(task.id) == str(task_id) and task.is_cancellable:
                task.stat = STAT_CANCELLED if task.stat == STAT_WAITING else STAT_ENDED


class CalibreTask:
    __metaclass__ = abc.ABCMeta

    def __init__(self, message):
        self._progress = 0
        self.stat = STAT_WAITING
        self.error = None
        self.start_time = None
        self.end_time = None
        self.message = message
        self.id = uuid.uuid4()
        self.self_cleanup = False
        self._scheduled = False

    @abc.abstractmethod
    def run(self, worker_thread):
        """The main entry-point for this task"""
        raise NotImplementedError

    @abc.abstractmethod
    def name(self):
        """Provides the caller some human-readable name for this class"""
        raise NotImplementedError

    @abc.abstractmethod
    def is_cancellable(self):
        """Does this task gracefully handle being cancelled (STAT_ENDED, STAT_CANCELLED)?"""
        raise NotImplementedError

    def start(self, *args):
        self.start_time = datetime.now()
        self.stat = STAT_STARTED

        # catch any unhandled exceptions in a task and automatically fail it
        try:
            self.run(*args)
        except Exception as ex:
            self._handleError(str(ex))
            log.error_or_exception(ex)

        self.end_time = datetime.now()

    @property
    def stat(self):
        return self._stat

    @stat.setter
    def stat(self, x):
        self._stat = x

    @property
    def progress(self):
        return self._progress

    @progress.setter
    def progress(self, x):
        if not 0 <= x <= 1:
            raise ValueError("Task progress should be within the [0, 1] range")
        self._progress = x

    @property
    def error(self):
        return self._error

    @error.setter
    def error(self, x):
        self._error = x

    @property
    def runtime(self):
        return (self.end_time or datetime.now()) - self.start_time

    @property
    def dead(self):
        """Determines whether or not this task can be garbage collected

        We have a separate property dictating this because there may be certain tasks that want to override this
        """
        # By default, we're good to clean a task if it's "Done"
        return self.stat in (STAT_FINISH_SUCCESS, STAT_FAIL, STAT_ENDED, STAT_CANCELLED)

    @property
    def self_cleanup(self):
        return self._self_cleanup

    @self_cleanup.setter
    def self_cleanup(self, is_self_cleanup):
        self._self_cleanup = is_self_cleanup

    @property
    def scheduled(self):
        return self._scheduled

    @scheduled.setter
    def scheduled(self, is_scheduled):
        self._scheduled = is_scheduled

    def _handleError(self, error_message):
        self.stat = STAT_FAIL
        self.progress = 1
        self.error = error_message

    def _handleSuccess(self):
        self.stat = STAT_FINISH_SUCCESS
        self.progress = 1

    def __str__(self):
        return self.name

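The retention rule in `cleanup_tasks` above can be isolated into a pure function for clarity: keep every live task, and keep only the newest `TASK_CLEANUP_TRIGGER` dead tasks unless the live tasks alone already exceed the trigger. This sketch uses plain dicts in place of `QueuedTask`/`CalibreTask` objects:

```python
TASK_CLEANUP_TRIGGER = 20


def retained_tasks(dequeued):
    """Return the subset of dequeued tasks kept by the cleanup rule above.

    Each task is a dict with 'num' (queue order), 'dead' (garbage-collectable),
    and 'end_time' (completion order for dead tasks).
    """
    dead = [t for t in dequeued if t['dead']]
    alive = [t for t in dequeued if not t['dead']]
    if len(alive) > TASK_CLEANUP_TRIGGER:
        # live tasks alone exceed the trigger: drop every dead task
        ret = alive
    else:
        # keep only the most recently finished dead tasks, plus all live ones
        ret = sorted(dead, key=lambda t: t['end_time'])[-TASK_CLEANUP_TRIGGER:] + alive
    return sorted(ret, key=lambda t: t['num'])
```

For example, with 25 finished tasks and one still running, the five oldest finished tasks are dropped and the running one is always kept.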
@ -0,0 +1,478 @@
|
||||||
|
# -*- coding: utf-8 -*-
|
||||||
|
|
||||||
|
# This file is part of the Calibre-Web (https://github.com/janeczku/calibre-web)
|
||||||
|
# Copyright (C) 2018-2019 OzzieIsaacs, cervinko, jkrehm, bodybybuddha, ok11,
|
||||||
|
# andy29485, idalin, Kyosfonica, wuqi, Kennyl, lemmsh,
|
||||||
|
# falgh1, grunjol, csitko, ytils, xybydy, trasba, vrabe,
|
||||||
|
# ruben-herold, marblepebble, JackED42, SiphonSquirrel,
|
||||||
|
# apetresc, nanu-c, mutschler
|
||||||
|
#
|
||||||
|
# This program is free software: you can redistribute it and/or modify
|
||||||
|
# it under the terms of the GNU General Public License as published by
|
||||||
|
# the Free Software Foundation, either version 3 of the License, or
|
||||||
|
# (at your option) any later version.
|
||||||
|
#
|
||||||
|
# This program is distributed in the hope that it will be useful,
|
||||||
|
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||||
|
# GNU General Public License for more details.
|
||||||
|
#
|
||||||
|
# You should have received a copy of the GNU General Public License
|
||||||
|
# along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
|
import sys
|
||||||
|
from datetime import datetime
|
||||||
|
|
||||||
|
from flask import Blueprint, flash, redirect, request, url_for, abort
|
||||||
|
from flask_babel import gettext as _
|
||||||
|
from flask_login import current_user, login_required
|
||||||
|
from sqlalchemy.exc import InvalidRequestError, OperationalError
|
||||||
|
from sqlalchemy.sql.expression import func, true
|
||||||
|
|
||||||
|
from . import calibre_db, config, db, logger, ub
|
||||||
|
from .render_template import render_title_template
|
||||||
|
from .usermanagement import login_required_if_no_ano
|
||||||
|
|
||||||
|
log = logger.create()
|
||||||
|
|
||||||
|
shelf = Blueprint('shelf', __name__)
|
||||||
|
|
||||||
|
|
||||||
|
@shelf.route("/shelf/add/<int:shelf_id>/<int:book_id>", methods=["POST"])
|
||||||
|
@login_required
|
||||||
|
def add_to_shelf(shelf_id, book_id):
|
||||||
|
xhr = request.headers.get('X-Requested-With') == 'XMLHttpRequest'
|
||||||
|
shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
|
||||||
|
if shelf is None:
|
||||||
|
log.error("Invalid shelf specified: %s", shelf_id)
|
||||||
|
if not xhr:
|
||||||
|
flash(_("Invalid shelf specified"), category="error")
|
||||||
|
return redirect(url_for('web.index'))
|
||||||
|
return "Invalid shelf specified", 400
|
||||||
|
|
||||||
|
if not check_shelf_edit_permissions(shelf):
|
||||||
|
if not xhr:
|
||||||
|
flash(_("Sorry you are not allowed to add a book to that shelf"), category="error")
|
||||||
|
return redirect(url_for('web.index'))
|
||||||
|
return "Sorry you are not allowed to add a book to the that shelf", 403
|
||||||
|
|
||||||
|
book_in_shelf = ub.session.query(ub.BookShelf).filter(ub.BookShelf.shelf == shelf_id,
|
||||||
|
ub.BookShelf.book_id == book_id).first()
|
||||||
|
if book_in_shelf:
|
||||||
|
log.error("Book %s is already part of %s", book_id, shelf)
|
||||||
|
if not xhr:
|
||||||
|
flash(_("Book is already part of the shelf: %(shelfname)s", shelfname=shelf.name), category="error")
|
||||||
|
return redirect(url_for('web.index'))
|
||||||
|
return "Book is already part of the shelf: %s" % shelf.name, 400
|
||||||
|
|
||||||
|
maxOrder = ub.session.query(func.max(ub.BookShelf.order)).filter(ub.BookShelf.shelf == shelf_id).first()
|
||||||
|
if maxOrder[0] is None:
|
||||||
|
maxOrder = 0
|
||||||
|
else:
|
||||||
|
maxOrder = maxOrder[0]
|
||||||
|
|
||||||
|
if not calibre_db.session.query(db.Books).filter(db.Books.id == book_id).one_or_none():
|
||||||
|
log.error("Invalid Book Id: %s. Could not be added to shelf %s", book_id, shelf.name)
|
||||||
|
if not xhr:
|
||||||
|
flash(_("%(book_id)s is a invalid Book Id. Could not be added to Shelf", book_id=book_id),
|
||||||
|
category="error")
|
||||||
|
return redirect(url_for('web.index'))
|
||||||
|
return "%s is a invalid Book Id. Could not be added to Shelf" % book_id, 400
|
||||||
|
|
||||||
|
shelf.books.append(ub.BookShelf(shelf=shelf.id, book_id=book_id, order=maxOrder + 1))
|
||||||
|
shelf.last_modified = datetime.utcnow()
|
||||||
|
try:
|
||||||
|
ub.session.merge(shelf)
|
||||||
|
ub.session.commit()
|
||||||
|
except (OperationalError, InvalidRequestError) as e:
|
||||||
|
ub.session.rollback()
|
||||||
|
log.error_or_exception("Settings Database error: {}".format(e))
|
||||||
|
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
|
||||||
|
if "HTTP_REFERER" in request.environ:
|
||||||
|
return redirect(request.environ["HTTP_REFERER"])
|
||||||
|
else:
|
||||||
|
return redirect(url_for('web.index'))
|
||||||
|
if not xhr:
|
||||||
|
log.debug("Book has been added to shelf: {}".format(shelf.name))
|
||||||
|
flash(_("Book has been added to shelf: %(sname)s", sname=shelf.name), category="success")
|
||||||
|
if "HTTP_REFERER" in request.environ:
|
||||||
|
return redirect(request.environ["HTTP_REFERER"])
|
||||||
|
else:
|
||||||
|
return redirect(url_for('web.index'))
|
||||||
|
return "", 204
|
||||||
|
|
||||||
|
|
||||||
|
@shelf.route("/shelf/massadd/<int:shelf_id>", methods=["POST"])
|
||||||
|
@login_required
|
||||||
|
def search_to_shelf(shelf_id):
|
||||||
|
shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
|
||||||
|
if shelf is None:
|
||||||
|
log.error("Invalid shelf specified: {}".format(shelf_id))
|
||||||
|
flash(_("Invalid shelf specified"), category="error")
|
||||||
|
return redirect(url_for('web.index'))
|
||||||
|
|
||||||
|
if not check_shelf_edit_permissions(shelf):
|
||||||
|
log.warning("You are not allowed to add a book to the shelf".format(shelf.name))
|
||||||
|
flash(_("You are not allowed to add a book to the shelf"), category="error")
|
||||||
|
return redirect(url_for('web.index'))
|
||||||
|
|
||||||
|
if current_user.id in ub.searched_ids and ub.searched_ids[current_user.id]:
|
||||||
|
books_for_shelf = list()
|
||||||
|
books_in_shelf = ub.session.query(ub.BookShelf).filter(ub.BookShelf.shelf == shelf_id).all()
|
||||||
|
if books_in_shelf:
|
||||||
|
book_ids = list()
|
||||||
|
for book_id in books_in_shelf:
|
||||||
|
book_ids.append(book_id.book_id)
|
||||||
|
for searchid in ub.searched_ids[current_user.id]:
|
||||||
|
if searchid not in book_ids:
|
||||||
|
books_for_shelf.append(searchid)
|
||||||
|
else:
|
||||||
|
books_for_shelf = ub.searched_ids[current_user.id]
|
||||||
|
|
||||||
|
if not books_for_shelf:
|
||||||
|
log.error("Books are already part of {}".format(shelf.name))
|
||||||
|
flash(_("Books are already part of the shelf: %(name)s", name=shelf.name), category="error")
|
||||||
|
return redirect(url_for('web.index'))
|
||||||
|
|
||||||
|
maxOrder = ub.session.query(func.max(ub.BookShelf.order)).filter(ub.BookShelf.shelf == shelf_id).first()[0] or 0
|
||||||
|
|
||||||
|
for book in books_for_shelf:
|
||||||
|
maxOrder += 1
|
||||||
|
shelf.books.append(ub.BookShelf(shelf=shelf.id, book_id=book, order=maxOrder))
|
||||||
|
shelf.last_modified = datetime.utcnow()
|
||||||
|
try:
|
||||||
|
ub.session.merge(shelf)
|
||||||
|
ub.session.commit()
|
||||||
|
flash(_("Books have been added to shelf: %(sname)s", sname=shelf.name), category="success")
|
||||||
|
except (OperationalError, InvalidRequestError) as e:
|
||||||
|
ub.session.rollback()
|
||||||
|
log.error_or_exception("Settings Database error: {}".format(e))
|
||||||
|
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
|
||||||
|
else:
|
||||||
|
log.error("Could not add books to shelf: {}".format(shelf.name))
|
||||||
|
flash(_("Could not add books to shelf: %(sname)s", sname=shelf.name), category="error")
|
||||||
|
return redirect(url_for('web.index'))
|
||||||
|
|
||||||
|
|
||||||
|
@shelf.route("/shelf/remove/<int:shelf_id>/<int:book_id>", methods=["POST"])
|
||||||
|
@login_required
|
||||||
|
def remove_from_shelf(shelf_id, book_id):
|
||||||
|
xhr = request.headers.get('X-Requested-With') == 'XMLHttpRequest'
|
||||||
|
shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
|
||||||
|
if shelf is None:
|
||||||
|
log.error("Invalid shelf specified: {}".format(shelf_id))
|
||||||
|
if not xhr:
|
||||||
|
return redirect(url_for('web.index'))
|
||||||
|
return "Invalid shelf specified", 400
|
||||||
|
|
||||||
|
# if shelf is public and use is allowed to edit shelfs, or if shelf is private and user is owner
|
||||||
|
# allow editing shelfs
|
||||||
|
# result shelf public user allowed user owner
|
||||||
|
# false 1 0 x
|
||||||
|
# true 1 1 x
|
||||||
|
# true 0 x 1
|
||||||
|
# false 0 x 0
|
||||||
|
|
||||||
|
if check_shelf_edit_permissions(shelf):
|
||||||
|
book_shelf = ub.session.query(ub.BookShelf).filter(ub.BookShelf.shelf == shelf_id,
|
||||||
|
ub.BookShelf.book_id == book_id).first()
|
||||||
|
|
||||||
|
if book_shelf is None:
|
||||||
|
log.error("Book %s already removed from %s", book_id, shelf)
|
||||||
|
if not xhr:
|
||||||
|
return redirect(url_for('web.index'))
|
||||||
|
return "Book already removed from shelf", 410
|
||||||
|
|
||||||
|
try:
|
||||||
|
ub.session.delete(book_shelf)
|
||||||
|
shelf.last_modified = datetime.utcnow()
|
||||||
|
ub.session.commit()
|
||||||
|
except (OperationalError, InvalidRequestError) as e:
|
||||||
|
ub.session.rollback()
|
||||||
|
log.error_or_exception("Settings Database error: {}".format(e))
|
||||||
|
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
|
||||||
|
if "HTTP_REFERER" in request.environ:
|
||||||
|
return redirect(request.environ["HTTP_REFERER"])
|
||||||
|
else:
|
||||||
|
return redirect(url_for('web.index'))
|
||||||
|
if not xhr:
|
||||||
|
flash(_("Book has been removed from shelf: %(sname)s", sname=shelf.name), category="success")
|
||||||
|
if "HTTP_REFERER" in request.environ:
|
||||||
|
return redirect(request.environ["HTTP_REFERER"])
|
||||||
|
else:
|
||||||
|
return redirect(url_for('web.index'))
|
||||||
|
return "", 204
|
||||||
|
else:
|
||||||
|
if not xhr:
|
||||||
|
log.warning("You are not allowed to remove a book from shelf: {}".format(shelf.name))
|
||||||
|
flash(_("Sorry you are not allowed to remove a book from this shelf"),
|
||||||
|
category="error")
|
||||||
|
return redirect(url_for('web.index'))
|
||||||
|
return "Sorry you are not allowed to remove a book from this shelf", 403
|
||||||
|
|
||||||
|
|
||||||
|
@shelf.route("/shelf/create", methods=["GET", "POST"])
|
||||||
|
@login_required
|
||||||
|
def create_shelf():
|
||||||
|
shelf = ub.Shelf()
|
||||||
|
return create_edit_shelf(shelf, page_title=_("Create a Shelf"), page="shelfcreate")
|
||||||
|
|
||||||
|
|
||||||
|
@shelf.route("/shelf/edit/<int:shelf_id>", methods=["GET", "POST"])
|
||||||
|
@login_required
|
||||||
|
def edit_shelf(shelf_id):
|
||||||
|
shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
|
||||||
|
if not check_shelf_edit_permissions(shelf):
|
||||||
|
flash(_("Sorry you are not allowed to edit this shelf"), category="error")
|
||||||
|
return redirect(url_for('web.index'))
|
||||||
|
return create_edit_shelf(shelf, page_title=_("Edit a shelf"), page="shelfedit", shelf_id=shelf_id)
|
||||||
|
|
||||||
|
|
||||||
|
@shelf.route("/shelf/delete/<int:shelf_id>", methods=["POST"])
|
||||||
|
@login_required
|
||||||
|
def delete_shelf(shelf_id):
|
||||||
|
cur_shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
|
||||||
|
try:
|
||||||
|
if not delete_shelf_helper(cur_shelf):
|
||||||
|
flash(_("Error deleting Shelf"), category="error")
|
||||||
|
else:
|
||||||
|
flash(_("Shelf successfully deleted"), category="success")
|
||||||
|
except InvalidRequestError as e:
|
||||||
|
ub.session.rollback()
|
||||||
|
log.error_or_exception("Settings Database error: {}".format(e))
|
||||||
|
flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")
|
||||||
|
return redirect(url_for('web.index'))
|
||||||
|
|
||||||
|
|
||||||
|
@shelf.route("/simpleshelf/<int:shelf_id>")
|
||||||
|
@login_required_if_no_ano
|
||||||
|
def show_simpleshelf(shelf_id):
|
||||||
|
return render_show_shelf(2, shelf_id, 1, None)
|
||||||
|
|
||||||
|
|
||||||
|
@shelf.route("/shelf/<int:shelf_id>", defaults={"sort_param": "order", 'page': 1})
|
||||||
|
@shelf.route("/shelf/<int:shelf_id>/<sort_param>", defaults={'page': 1})
|
||||||
|
@shelf.route("/shelf/<int:shelf_id>/<sort_param>/<int:page>")
|
||||||
|
@login_required_if_no_ano
|
||||||
|
def show_shelf(shelf_id, sort_param, page):
|
||||||
|
return render_show_shelf(1, shelf_id, page, sort_param)
|
||||||
|
|
||||||
|
|
||||||
|
@shelf.route("/shelf/order/<int:shelf_id>", methods=["GET", "POST"])
@login_required
def order_shelf(shelf_id):
    shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()
    if shelf and check_shelf_view_permissions(shelf):
        if request.method == "POST":
            to_save = request.form.to_dict()
            books_in_shelf = ub.session.query(ub.BookShelf).filter(ub.BookShelf.shelf == shelf_id).order_by(
                ub.BookShelf.order.asc()).all()
            counter = 0
            for book in books_in_shelf:
                setattr(book, 'order', to_save[str(book.book_id)])
                counter += 1
            # if order different from before -> shelf.last_modified = datetime.utcnow()
            try:
                ub.session.commit()
            except (OperationalError, InvalidRequestError) as e:
                ub.session.rollback()
                log.error_or_exception("Settings Database error: {}".format(e))
                flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")

        result = list()
        if shelf:
            result = calibre_db.session.query(db.Books) \
                .join(ub.BookShelf, ub.BookShelf.book_id == db.Books.id, isouter=True) \
                .add_columns(calibre_db.common_filters().label("visible")) \
                .filter(ub.BookShelf.shelf == shelf_id).order_by(ub.BookShelf.order.asc()).all()
        return render_title_template('shelf_order.html', entries=result,
                                     title=_("Change order of Shelf: '%(name)s'", name=shelf.name),
                                     shelf=shelf, page="shelforder")
    else:
        abort(404)


def check_shelf_edit_permissions(cur_shelf):
    if not cur_shelf.is_public and cur_shelf.user_id != int(current_user.id):
        log.error("User {} not allowed to edit shelf: {}".format(current_user.id, cur_shelf.name))
        return False
    if cur_shelf.is_public and not current_user.role_edit_shelfs():
        log.info("User {} not allowed to edit public shelves".format(current_user.id))
        return False
    return True


def check_shelf_view_permissions(cur_shelf):
    try:
        if cur_shelf.is_public:
            return True
        if current_user.is_anonymous or cur_shelf.user_id != current_user.id:
            log.error("User is unauthorized to view non-public shelf: {}".format(cur_shelf.name))
            return False
    except Exception as e:
        log.error(e)
    return True


# if shelf ID is set, we are editing a shelf
def create_edit_shelf(shelf, page_title, page, shelf_id=False):
    sync_only_selected_shelves = current_user.kobo_only_shelves_sync
    # calibre_db.session.query(ub.Shelf).filter(ub.Shelf.user_id == current_user.id).filter(ub.Shelf.kobo_sync).count()
    if request.method == "POST":
        to_save = request.form.to_dict()
        if not current_user.role_edit_shelfs() and to_save.get("is_public") == "on":
            flash(_("Sorry you are not allowed to create a public shelf"), category="error")
            return redirect(url_for('web.index'))
        is_public = 1 if to_save.get("is_public") == "on" else 0
        if config.config_kobo_sync:
            shelf.kobo_sync = True if to_save.get("kobo_sync") else False
            if shelf.kobo_sync:
                ub.session.query(ub.ShelfArchive).filter(ub.ShelfArchive.user_id == current_user.id).filter(
                    ub.ShelfArchive.uuid == shelf.uuid).delete()
                ub.session_commit()
        shelf_title = to_save.get("title", "")
        if check_shelf_is_unique(shelf_title, is_public, shelf_id):
            shelf.name = shelf_title
            shelf.is_public = is_public
            if not shelf_id:
                shelf.user_id = int(current_user.id)
                ub.session.add(shelf)
                shelf_action = "created"
                flash_text = _("Shelf %(title)s created", title=shelf_title)
            else:
                shelf_action = "changed"
                flash_text = _("Shelf %(title)s changed", title=shelf_title)
            try:
                ub.session.commit()
                log.info("Shelf {} {}".format(shelf_title, shelf_action))
                flash(flash_text, category="success")
                return redirect(url_for('shelf.show_shelf', shelf_id=shelf.id))
            except (OperationalError, InvalidRequestError) as ex:
                ub.session.rollback()
                log.error_or_exception("Settings Database error: {}".format(ex))
                flash(_("Oops! Database Error: %(error)s.", error=ex.orig), category="error")
            except Exception as ex:
                ub.session.rollback()
                log.error_or_exception(ex)
                flash(_("There was an error"), category="error")
    return render_title_template('shelf_edit.html',
                                 shelf=shelf,
                                 title=page_title,
                                 page=page,
                                 kobo_sync_enabled=config.config_kobo_sync,
                                 sync_only_selected_shelves=sync_only_selected_shelves)


def check_shelf_is_unique(title, is_public, shelf_id=False):
    if shelf_id:
        ident = ub.Shelf.id != shelf_id
    else:
        ident = true()
    if is_public == 1:
        is_shelf_name_unique = ub.session.query(ub.Shelf) \
            .filter((ub.Shelf.name == title) & (ub.Shelf.is_public == 1)) \
            .filter(ident) \
            .first() is None

        if not is_shelf_name_unique:
            log.error("A public shelf with the name '{}' already exists.".format(title))
            flash(_("A public shelf with the name '%(title)s' already exists.", title=title),
                  category="error")
    else:
        is_shelf_name_unique = ub.session.query(ub.Shelf) \
            .filter((ub.Shelf.name == title) & (ub.Shelf.is_public == 0) &
                    (ub.Shelf.user_id == int(current_user.id))) \
            .filter(ident) \
            .first() is None

        if not is_shelf_name_unique:
            log.error("A private shelf with the name '{}' already exists.".format(title))
            flash(_("A private shelf with the name '%(title)s' already exists.", title=title),
                  category="error")
    return is_shelf_name_unique


def delete_shelf_helper(cur_shelf):
    if not cur_shelf or not check_shelf_edit_permissions(cur_shelf):
        return False
    shelf_id = cur_shelf.id
    ub.session.delete(cur_shelf)
    ub.session.query(ub.BookShelf).filter(ub.BookShelf.shelf == shelf_id).delete()
    ub.session.add(ub.ShelfArchive(uuid=cur_shelf.uuid, user_id=cur_shelf.user_id))
    ub.session_commit("successfully deleted Shelf {}".format(cur_shelf.name))
    return True


def change_shelf_order(shelf_id, order):
    result = calibre_db.session.query(db.Books).outerjoin(db.books_series_link,
                                                          db.Books.id == db.books_series_link.c.book) \
        .outerjoin(db.Series).join(ub.BookShelf, ub.BookShelf.book_id == db.Books.id) \
        .filter(ub.BookShelf.shelf == shelf_id).order_by(*order).all()
    for index, entry in enumerate(result):
        book = ub.session.query(ub.BookShelf).filter(ub.BookShelf.shelf == shelf_id) \
            .filter(ub.BookShelf.book_id == entry.id).first()
        book.order = index
    ub.session_commit("Shelf-id:{} - Order changed".format(shelf_id))


def render_show_shelf(shelf_type, shelf_id, page_no, sort_param):
    shelf = ub.session.query(ub.Shelf).filter(ub.Shelf.id == shelf_id).first()

    # check that the user is allowed to access the shelf
    if shelf and check_shelf_view_permissions(shelf):
        if shelf_type == 1:
            # order = [ub.BookShelf.order.asc()]
            if sort_param == 'pubnew':
                change_shelf_order(shelf_id, [db.Books.pubdate.desc()])
            if sort_param == 'pubold':
                change_shelf_order(shelf_id, [db.Books.pubdate])
            if sort_param == 'abc':
                change_shelf_order(shelf_id, [db.Books.sort])
            if sort_param == 'zyx':
                change_shelf_order(shelf_id, [db.Books.sort.desc()])
            if sort_param == 'new':
                change_shelf_order(shelf_id, [db.Books.timestamp.desc()])
            if sort_param == 'old':
                change_shelf_order(shelf_id, [db.Books.timestamp])
            if sort_param == 'authaz':
                change_shelf_order(shelf_id, [db.Books.author_sort.asc(), db.Series.name, db.Books.series_index])
            if sort_param == 'authza':
                change_shelf_order(shelf_id, [db.Books.author_sort.desc(),
                                              db.Series.name.desc(),
                                              db.Books.series_index.desc()])
            page = "shelf.html"
            pagesize = 0
        else:
            pagesize = sys.maxsize
            page = 'shelfdown.html'

        result, __, pagination = calibre_db.fill_indexpage(page_no, pagesize,
                                                           db.Books,
                                                           ub.BookShelf.shelf == shelf_id,
                                                           [ub.BookShelf.order.asc()],
                                                           True, config.config_read_column,
                                                           ub.BookShelf, ub.BookShelf.book_id == db.Books.id)
        # delete shelf entries whose book no longer exists; can happen if a book is deleted outside Calibre-Web
        wrong_entries = calibre_db.session.query(ub.BookShelf) \
            .join(db.Books, ub.BookShelf.book_id == db.Books.id, isouter=True) \
            .filter(db.Books.id == None).all()
        for entry in wrong_entries:
            log.info('Not existing book {} in {} deleted'.format(entry.book_id, shelf))
            try:
                ub.session.query(ub.BookShelf).filter(ub.BookShelf.book_id == entry.book_id).delete()
                ub.session.commit()
            except (OperationalError, InvalidRequestError) as e:
                ub.session.rollback()
                log.error_or_exception("Settings Database error: {}".format(e))
                flash(_("Oops! Database Error: %(error)s.", error=e.orig), category="error")

        return render_title_template(page,
                                     entries=result,
                                     pagination=pagination,
                                     title=_("Shelf: '%(name)s'", name=shelf.name),
                                     shelf=shelf,
                                     page="shelf")
    else:
        flash(_("Error opening shelf. Shelf does not exist or is not accessible"), category="error")
        return redirect(url_for("web.index"))
|
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.