
storage_status/1 should return :up for in-memory database #34

Open
sntran opened this issue May 9, 2021 · 8 comments
Labels
enhancement New feature or request

Comments

@sntran

sntran commented May 9, 2021

Currently, the function checks whether the database file exists, which is false for an in-memory database.

That in turn fails Phoenix.Ecto.CheckRepoStatus during development.

I would like to see if we can make the function return :up instead.
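
Roughly what I have in mind (a sketch only; it assumes storage_status/1 currently decides :up or :down from a File.exists?/1 check on the :database option):

# Sketch of the requested behaviour, not the adapter's actual code.
def storage_status(options) do
  database = Keyword.fetch!(options, :database)

  cond do
    # An in-memory database has no file on disk, so consider it up.
    database == ":memory:" -> :up
    File.exists?(database) -> :up
    true -> :down
  end
end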

@warmwaffles
Member

@sntran unfortunately, with how the connections are pooled, your in-memory database will always be empty because each new connection opens its own database. This is mostly an issue with how db_connection handles pooled connections and long-lived connections.

Although you are right, we should return :up whenever an in-memory database is used.

@japplegame

japplegame commented Jun 14, 2021

unfortunately with how the connections are pooled, your in memory database will always be empty because each new connection opens its own database.

If you use the cache=shared option, all connections will share the same in-memory database.

rc = sqlite3_open("file:memdb1?mode=memory&cache=shared", &db);

This allows separate database connections to share the same in-memory database. Of course, all database connections sharing the in-memory database need to be in the same process. The database is automatically deleted and memory is reclaimed when the last connection to the database closes.

SQLite docs: In-memory Databases And Shared Cache
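
In an Ecto config that would presumably look something like this (a sketch; it assumes the :database option is passed through to sqlite3_open as a URI filename, and PhoenixApp.Repo is just a placeholder name):

# config/dev.exs -- sketch, assuming URI filenames reach SQLite unchanged
config :phoenix_app, PhoenixApp.Repo,
  database: "file:memdb1?mode=memory&cache=shared",
  pool_size: 5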

But I ran into a very strange error when I was trying to use an in-memory database. Everything works fine until you try to perform two parallel transactions, one of which performs a write operation.
The error is:
database table is locked
Please note: not "database is locked", not "database is busy", but "database table is locked".
I have no idea what that means.
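
For reference, a minimal case that triggers it looks roughly like this (Repo and the Post schema are hypothetical; the point is only that two transactions run concurrently and one of them writes):

# Sketch of the concurrent-transaction case that raises
# "database table is locked"; Repo and Post are hypothetical.
writer =
  Task.async(fn ->
    Repo.transaction(fn ->
      Repo.insert!(%Post{title: "hello"})
    end)
  end)

reader =
  Task.async(fn ->
    Repo.transaction(fn ->
      Repo.all(Post)
    end)
  end)

Task.await(writer)
Task.await(reader)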

warmwaffles added the bug label Jun 14, 2021
@warmwaffles
Member

I haven't had time to really dive into this lately, but I don't know how the shared cache is going to work when the DB is on a different machine than the one trying to access it. Could be an unfounded fear. Unsure yet.

warmwaffles added the enhancement label and removed the bug label May 15, 2022
@warmwaffles
Member

I'm still not sure if this is something that we should support. DBConnection is pretty entangled with Ecto, and I don't think we can guarantee connections being in the same process because of it.

@AlexR2D2

AlexR2D2 commented May 15, 2023

I ran into the same problem with Phoenix.Ecto.CheckRepoStatus

I have a number of simple Phoenix admin apps. Each time an application starts, it opens an in-memory SQLite DB, runs a migration to create the user/token tables, and registers a single user with the admin username and password taken from ENV variables. This way I get login functionality for a simple admin tool with completely autogenerated authentication code (mix phx.gen.auth). All I had to write was the code that runs the migration and registers the user at app startup. Very handy and secure. :)
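
The startup code is roughly this (a simplified sketch; MyAdmin.Repo and MyAdmin.Accounts.register_user/1 stand in for the modules phx.gen.auth generates, so names will differ per app):

# Sketch of the app-startup step: migrate the in-memory DB, then seed the
# single admin user from ENV variables. Module names are placeholders.
defmodule MyAdmin.Release do
  def migrate_and_seed do
    # Run all pending migrations against the freshly opened in-memory DB.
    Ecto.Migrator.run(MyAdmin.Repo, :up, all: true)

    # Register the only user, with credentials taken from the environment.
    MyAdmin.Accounts.register_user(%{
      email: System.fetch_env!("ADMIN_EMAIL"),
      password: System.fetch_env!("ADMIN_PASSWORD")
    })
  end
end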

@rhcarvalho

tldr: storage_status/1 incorrectly returns :down for database paths using SQLite's URI format file:/path/to/db.


I was trying to use an in-memory database using the memdb VFS:

# config/dev.exs

config :phoenix_app, PhoenixApp.Repo,
  database: "file:/db?vfs=memdb",
  pool_size: 5,
  stacktrace: true,
  show_sensitive_data_on_connection_error: true

https://www.sqlite.org/forum/forumpost/c1004073ca76117f
https://sqlite.org/inmemorydb.html
https://sqlite.org/releaselog/3_36_0.html

I realized the error from Phoenix.Ecto.CheckRepoStatus in development is due to how storage_status/1 is implemented, which is what this issue tracks.

Further, I decided to drop the ?vfs=memdb and point to a writable path -- like so:

config :phoenix_app, PhoenixApp.Repo,
  database: "file:/tmp/data.db",
  pool_size: 5,
  stacktrace: true,
  show_sensitive_data_on_connection_error: true

That was also problematic, though possibly a different issue?! Even though the "file:" prefix is a valid part of the SQLite URI format, it makes ecto_sqlite3 think the database doesn't exist:

[info] Running PhoenixAppWeb.Endpoint with cowboy 2.10.0 at 127.0.0.1:4000 (http)
[info] Access PhoenixAppWeb.Endpoint at http://localhost:4000
[error] #PID<0.497.0> running Phoenix.Endpoint.SyncCodeReloadPlug (connection #PID<0.496.0>, stream id 1) terminated
Server: localhost:4000 (http)
Request: GET /
** (exit) an exception was raised:
    ** (Phoenix.Ecto.StorageNotCreatedError) the storage is not created for repo: PhoenixApp.Repo. Try running `mix ecto.create` in the command line to create it
        (phoenix_ecto 4.4.2) lib/phoenix_ecto/check_repo_status.ex:49: Phoenix.Ecto.CheckRepoStatus.check_storage_up!/1
        (phoenix_ecto 4.4.2) lib/phoenix_ecto/check_repo_status.ex:32: anonymous fn/3 in Phoenix.Ecto.CheckRepoStatus.call/2
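
One way storage_status/1 could cover both the in-memory and the "file:" URI cases (a sketch only, not a patch; it assumes the current implementation calls File.exists?/1 on the raw :database string):

# Sketch: treat memory-backed databases as :up and strip the URI scheme
# and query string before checking for the file on disk.
def storage_status(options) do
  database = Keyword.fetch!(options, :database)

  cond do
    in_memory?(database) -> :up
    File.exists?(plain_path(database)) -> :up
    true -> :down
  end
end

# ":memory:" and memory-backed URIs never exist on disk.
defp in_memory?(":memory:"), do: true
defp in_memory?("file:" <> rest),
  do: String.contains?(rest, "mode=memory") or String.contains?(rest, "vfs=memdb")
defp in_memory?(_), do: false

# "file:/tmp/data.db?foo=bar" -> "/tmp/data.db"
defp plain_path("file:" <> rest), do: rest |> String.split("?") |> hd()
defp plain_path(path), do: path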

@warmwaffles
Member

warmwaffles commented Jul 27, 2023

@rhcarvalho I'll take a look at this soon, we should handle this.

We are also in an experimental phase changing how the connection is pooled.

https://www.sqlite.org/forum/forumpost/c1004073ca76117f

This is an incredibly interesting conversation.

@rhcarvalho

We are also in an experimental phase changing how the connection is pooled.

I didn't know about it, thanks for the pointers, exciting developments 🚀
