Most applications don't actually need concurrent access. SQLite handles concurrent reads without any issues, with writes requiring exclusive locks. As long as your queries are fast and your write load is minimal, you won't really have any problems.
What you describe is the way SQLite worked originally. With the newer WAL mode, things are slightly better -- you can still only have one active write transaction at a time, but writers no longer block readers.
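Switching a database to WAL mode is a one-liner. A minimal sketch in Python (the file name is made up for illustration):

```python
import sqlite3

# "app.db" is a made-up file name for illustration.
conn = sqlite3.connect("app.db")

# journal_mode is persistent for the database file: once switched to
# WAL, every later connection opens it in WAL mode automatically.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # "wal" on success
conn.close()
```

Note that the pragma returns the mode actually in effect, so checking the result tells you whether the switch worked (it can fail on some filesystems).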
So I read about WAL mode down here and cheered, because it sounded like it would solve the occasional "database is locked" error that the app I'm working on keeps bumping into.
The database is only opened by 2 users on the same machine. One is a normal user, the other one is root for a daemon process. That by itself might be an uncommon scenario.
So I tried it out this morning and found that any writes are invisible to the other connection unless I restart the app to close the database. For my use case that isn't an improvement, as writes made by either user should be visible to the other. I even tried setting read_uncommitted to true, but that did not help either.
Of course it is possible I am still doing something wrong, but at this moment it doesn't look like the WAL journal mode is an option for my app.
A pity, as even without WAL I expected to be able to read while another process is writing -- a delayed read would have been fine -- but instead a "database is locked" error pops up to the user.
Yeah, they both can write, although almost all of the writes are done by the daemon process and the normal user (the GUI process) reads and processes the results. The 'database is locked' problem seemed to happen most while the daemon user is writing and the normal user is reading.
For the moment I added a patch to my apps whereby they handle the locking themselves at a slightly higher level, as I got a bit tired of the problem.
This is done via a separate lock file that is opened exclusively before any write action and closed after the write. By doing that I can simply delay the reads for a bit when the GUI process tests to open the lock file and that appears to have cured most problems.
It's a tiny bit more involved than the above, but that's basically it.
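The lock-file scheme described above can be sketched roughly like this (a Python approximation with a made-up lock-file name; the original app uses Delphi/FireDAC, but the idea is the same):

```python
import os
import time

LOCKFILE = "db.writelock"  # hypothetical lock-file name

def acquire_write_lock(timeout=5.0):
    # Spin until we can create the lock file exclusively,
    # i.e. until no other writer currently holds it.
    deadline = time.monotonic() + timeout
    while True:
        try:
            return os.open(LOCKFILE, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        except FileExistsError:
            if time.monotonic() > deadline:
                raise TimeoutError("writer lock busy")
            time.sleep(0.05)

def release_write_lock(fd):
    os.close(fd)
    os.remove(LOCKFILE)

# Writer side: hold the lock only for the duration of the write.
fd = acquire_write_lock()
try:
    pass  # perform the SQLite write here
finally:
    release_write_lock(fd)

# Reader side: delay the read briefly while a writer holds the lock.
while os.path.exists(LOCKFILE):
    time.sleep(0.05)
```

One caveat of this design: if a writer crashes between create and remove, the lock file is left behind and readers stall until it is cleaned up.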
edit: might have misread your question, was it about the WAL journal mode? Yes the processes do commit the transactions they write. I need the results immediately, not after sqlite decides to process the WAL journal.
When staying in normal journal mode the app sees the data just fine, and the data is committed directly in that case. Updates/deletes are all pretty much instant and any query results are correct.
There might be an issue with the database driver I depend on (FireDAC); in that layer I even go as far as closing the tables on each query/update after a commit.
The problem with normal journal mode is the lock error popping up.
When I switch to WAL journal mode the data no longer appears to be written directly even when turning autocommit back on.
So while WAL mode appears to fix the lock issue, the data only gets committed on closing the database connection.
As a result the GUI process can't interact with the daemon process anymore as it only sees old data.
Opening and closing the database on each insert/update/delete to force the data to be written simply isn't an option.
The reason this was happening was not SQLite, but how the FireDAC driver handles locking.
That driver has a setting, "BusyTimeout", which supposedly controls the lock wait time.
According to the documentation it defaults to 10 seconds. That clearly did not work; I even set it manually, still to no effect.
The other day I figured to try and set this via an SQLite pragma setting... (busy_timeout)
I've not seen a "database is locked" issue since then and I've completely removed my manual locking layer, so "case solved" and it certainly wasn't SQLite to blame.
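For drivers that don't apply their own timeout setting reliably, going through the pragma looks like this (a Python sketch with a made-up file name; the pragma itself is the same from any driver):

```python
import sqlite3

conn = sqlite3.connect("app.db")  # made-up file name

# Tell SQLite itself to retry for up to 10 seconds when the database
# is locked, instead of returning "database is locked" immediately.
conn.execute("PRAGMA busy_timeout = 10000")  # value is in milliseconds

# Reading the pragma back confirms the setting took effect.
timeout_ms = conn.execute("PRAGMA busy_timeout").fetchone()[0]
print(timeout_ms)  # 10000
conn.close()
```

Unlike journal_mode, busy_timeout is per-connection and is not persisted in the database file, so every connection has to set it.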
> Most applications don't actually need concurrent access
That's an interesting claim. I would rephrase it as: "Are there more HTTP calls that use concurrent connections to a DB, or standalone applications that do not?" I would wager the former.
Even on most websites, I suspect the need for concurrent, long-lived write transactions is much rarer than people assume. If your write transactions are short-lived, then sequential execution is a reasonable approximation of (slow) concurrency, at which point it's a question of load whether that's good enough. But the window in which it's not good enough is very slim - hardware simply isn't all that concurrent in the first place, and as you scale, some sharding strategy is required anyhow.
So the more plausible limitation is long-lived write transactions; e.g. where a write cannot be committed until after some other confirmation occurs, possibly over the network. That simply won't work well at all in sqlite - not that it's a great strategy to use on other DBs...
> The SQLite website (https://www.sqlite.org/) uses SQLite itself, of course, and as of this writing (2015) it handles about 400K to 500K HTTP requests per day, about 15-20% of which are dynamic pages touching the database. Each dynamic page does roughly 200 SQL statements. This setup runs on a single VM that shares a physical server with 23 others and yet still keeps the load average below 0.1 most of the time.
I think it's fair to assume that the SQLite site could be redesigned as a largely static site, though that would come at a loss of functionality. And obviously it's a form of dogfooding, but that's not objectionable, right?
SQLite is not such a great choice for the server side. It typically gets used on the client side, either as a "caching DB" for offline work, or for client-side-only programs without a server backend.
It describes how things work inside the engine. There are exact steps that need to be followed, the way the document lays them out, to make it work.
There is also a big section on how to corrupt the database -- a heads-up that if you take shortcuts there will not be a happy ending.
Concurrent use is a little more complicated than with something like MySQL, but there is much more engine on the MySQL side. If you have multiple concurrent users with high transaction volumes, SQLite may not be your best first choice.
SQLite has always supported multiple readers, which is the most common concurrent access pattern. Before the WAL (write-ahead log) was introduced, a writer would block readers (and vice versa), but with WAL, a single writer and multiple readers do not block each other.
SQLite does not have multi-writer concurrency (usually MVCC, as in Oracle/MySQL/PostgreSQL, or optimistic transactions like Backplane). If you need that, SQLite is not for you.
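The single-writer/many-readers behaviour is easy to see directly. A minimal Python sketch (file path is made up; it creates a throwaway database in a temp directory):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Writer connection; isolation_level=None means we issue BEGIN/COMMIT ourselves.
w = sqlite3.connect(path, isolation_level=None)
w.execute("PRAGMA journal_mode=WAL")
w.execute("CREATE TABLE t(x)")
w.execute("INSERT INTO t VALUES (1)")

# Start a write transaction and leave it open (uncommitted).
w.execute("BEGIN IMMEDIATE")
w.execute("INSERT INTO t VALUES (2)")

# A second connection can still read -- it is not blocked, and it sees
# the last committed snapshot, not the in-flight write.
r = sqlite3.connect(path)
rows = [x for (x,) in r.execute("SELECT x FROM t")]
print(rows)  # [1]

w.execute("COMMIT")
r.close()
w.close()
```

In rollback-journal mode the SELECT on the second connection would instead fail with "database is locked" (or wait, with a busy_timeout set) while the write transaction is open.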