#!/usr/bin/env python3
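"""Command-line database utilities for ARTEMiS.

Thin wrapper around core.data.Data: create, upgrade, downgrade, or
migrate the database, create an owner account, and generate schema
revisions.

Example invocations (email, access code, and message are placeholder values):
    python dbutils.py create
    python dbutils.py upgrade
    python dbutils.py -e admin@example.com create-owner
    python dbutils.py -m "describe the change" create-autorevision
"""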
import argparse
import asyncio
import logging
from os import W_OK, access, environ, mkdir, path

import yaml

from core.config import CoreConfig
from core.data import Data


async def main():
    parser = argparse.ArgumentParser(description="Database utilities")
    parser.add_argument(
        "--config", "-c", type=str, help="Config folder to use", default="config"
    )
    parser.add_argument(
        "--version",
        "-v",
        type=str,
        help="Version of the database to upgrade/rollback to",
    )
    parser.add_argument("--email", "-e", type=str, help="Email for the new user")
    parser.add_argument(
        "--access_code",
        "-a",
        type=str,
        help="Access code for new/transfer user",
        default="00000000000000000000",
    )
    parser.add_argument("--message", "-m", type=str, help="Revision message")
    parser.add_argument(
        "action",
        type=str,
        help="create, upgrade, downgrade, create-owner, migrate, create-revision, create-autorevision",
    )
    args = parser.parse_args()
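    # Record the config directory in the environment for downstream code, then
    # overlay core.yaml on the defaults if it exists.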
    environ["ARTEMIS_CFG_DIR"] = args.config
    cfg = CoreConfig()
    if path.exists(f"{args.config}/core.yaml"):
        with open(f"{args.config}/core.yaml") as f:
            cfg_dict = yaml.safe_load(f)
        cfg.update(cfg_dict)
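    # Create the log directory if it is missing and make sure it is writable.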
    if not path.exists(cfg.server.log_dir):
        mkdir(cfg.server.log_dir)

    if not access(cfg.server.log_dir, W_OK):
        print(
            f"Log directory {cfg.server.log_dir} NOT writable, please check permissions"
        )
        exit(1)

    data = Data(cfg)
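    # Dispatch the requested action. Schema upgrade/downgrade run synchronously;
    # the remaining actions go through the async Data API.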
    if args.action == "create":
        await data.create_database()

    elif args.action == "upgrade":
        data.schema_upgrade(args.version)

    elif args.action == "downgrade":
        # An explicit target version is required when rolling back.
        if not args.version:
            logging.getLogger("database").error("Version argument required for downgrade")
            exit(1)

        data.schema_downgrade(args.version)

    elif args.action == "create-owner":
        await data.create_owner(args.email, args.access_code)

    elif args.action == "migrate":
        await data.migrate()

    elif args.action == "create-revision":
        await data.create_revision(args.message)

    elif args.action == "create-autorevision":
        await data.create_revision_auto(args.message)

    else:
        logging.getLogger("database").info(f"Unknown action {args.action}")


if __name__ == "__main__":
    asyncio.run(main())