[CLOSED] Hybrid GM (Dev-Log)

Started by ., Feb 27, 2015, 07:07 PM

#30
SQLITE: THE NAIVE WRITE

For this benchmark I'll create a table [Accounts] and insert 10,000 dummy accounts in it:

StartBenchmark(" SQLITE WRITE ");

local db = SQLite.Connection("test.db");

StepBenchmark("Connection");

db.Exec(@"CREATE TABLE IF NOT EXISTS [Accounts] (
    [ID] INTEGER  PRIMARY KEY NOT NULL,
    [Name] VARCHAR(32)  UNIQUE NOT NULL,
    [Pass] VARCHAR(128)  UNIQUE NOT NULL,
    [Active] INTEGER DEFAULT 0 NULL
);");

StepBenchmark("Creation");

for (local i = 0; i <= 10000; ++i)
{
    db.ExecF("INSERT INTO [Accounts] (ID, Name, Pass, Active) VALUES (%d, 'user_%d', 'password_%d', %d);"
                i, i, i, i % 2);
}

StepBenchmark("Insertion");

StopBenchmark(" SQLITE WRITE ");

Benchmark results:
[USR] ---------------------------- SQLITE WRITE ----------------------------
[USR]
[INF] Connection      :         407 microseconds
[INF] Creation        :      116224 microseconds
[INF] Insertion       :   742909447 microseconds
[INF] Total           :   743026078 microseconds
[USR]
[USR] ---------------------------- SQLITE WRITE ----------------------------

At this point you're probably thinking: WTF SLC? 12 minutes?! How can SQLite be thousands of times slower than INI files? If that was your reaction, then you didn't pay attention to the benchmark title. Otherwise you'd have noticed that uppercase/bold/red word: NAIVE. As the FAQ entry below explains, each INSERT gets its own transaction here, and at a few dozen transactions per second, 10,000 of them inevitably add up to minutes.

Quote from SQLite FAQ (19): INSERT is really slow - I can only do few dozen INSERTs per second

Actually, SQLite will easily do 50,000 or more INSERT statements per second on an average desktop computer. But it will only do a few dozen transactions per second. Transaction speed is limited by the rotational speed of your disk drive. A transaction normally requires two complete rotations of the disk platter, which on a 7200RPM disk drive limits you to about 60 transactions per second.
Transaction speed is limited by disk drive speed because (by default) SQLite actually waits until the data really is safely stored on the disk surface before the transaction is complete. That way, if you suddenly lose power or if your OS crashes, your data is still safe. For details, read about atomic commit in SQLite.

By default, each INSERT statement is its own transaction. But if you surround multiple INSERT statements with BEGIN...COMMIT then all the inserts are grouped into a single transaction. The time needed to commit the transaction is amortized over all the enclosed insert statements and so the time per insert statement is greatly reduced.

Another option is to run PRAGMA synchronous=OFF. This command will cause SQLite to not wait on data to reach the disk surface, which will make write operations appear to be much faster. But if you lose power in the middle of a transaction, your database file might go corrupt.

Source.
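
The BEGIN...COMMIT advice from the quote maps directly onto the Exec method used above. A minimal sketch (the plugin's Transaction helper, used in a later post, presumably does the same thing behind the scenes):

db.Exec("BEGIN TRANSACTION;");

for (local i = 0; i <= 10000; ++i)
{
    db.ExecF("INSERT INTO [Accounts] (ID, Name, Pass, Active) VALUES (%d, 'user_%d', 'password_%d', %d);",
                i, i, i, i % 2);
}

// A single COMMIT amortizes the disk sync over all the enclosed inserts
db.Exec("COMMIT;");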

Optimizations behind the scenes:
  • N/A

Bottlenecks behind the scenes:
  • Each query runs in its own transaction, and transactions are very slow, which inherently makes each individual query take a significant amount of time.
  • Each query has to be compiled individually. It's the same as compiling your script 10,000 times for no reason at all.
  • Others which will be discussed later.


SQLITE: READ

For this benchmark I'll be reading and processing the file I previously created, and also doing some minor validation on the data:

StartBenchmark(" SQLITE READ ");

local db = SQLite.Connection("test.db");

StepBenchmark("Connection");

local acc_id, acc_name, acc_pass, acc_active;

for (local i = 0; i <= 10000; ++i)
{
    local stmt = db.QueryF("SELECT * FROM [Accounts] WHERE ID=%d;", i);

    if (!stmt.Step())
    {
        throw "this account does not exist";
    }

    acc_id = stmt.Get("ID").Integer;
    acc_name = stmt.Get("Name").String;
    acc_pass = stmt.Get("Pass").String;
    acc_active = stmt.Get("Active").Boolean;
}

StepBenchmark("Processing");

StopBenchmark(" SQLITE READ ");

Benchmark results:
[USR] ----------------------------- SQLITE READ ----------------------------
[USR]
[INF] Connection      :         304 microseconds
[INF] Processing      :      740621 microseconds
[INF] Total           :      740925 microseconds
[USR]
[USR] ----------------------------- SQLITE READ ----------------------------

Much slower than INI: 0.74 seconds compared to 0.17 seconds.

Optimizations behind the scenes:
  • N/A

Bottlenecks behind the scenes:
  • Each query has to be compiled individually. It's the same as compiling your script 10,000 times for no reason at all.
  • Others which will be discussed later.

#31
SQLITE: WRITE OPTIMIZATION #1

In this benchmark I will use the same code as in the naive write approach, except I will tell SQLite to simply hand the data off to the OS for writing and then continue.

All I did was add this line of code after opening the database:
db.Exec("PRAGMA synchronous = OFF");
NOTE: Read about the dangers of PRAGMA synchronous OFF.
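
If full OFF feels too risky for your data, SQLite also accepts a middle-ground setting that syncs less often than the default FULL but still syncs at the most critical moments (a suggestion I haven't benchmarked here):

db.Exec("PRAGMA synchronous = NORMAL");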

The resulting code was:
StartBenchmark(" SQLITE WRITE ");

local db = SQLite.Connection("test.db");

db.Exec("PRAGMA synchronous = OFF");

StepBenchmark("Connection");

db.Exec(@"CREATE TABLE IF NOT EXISTS [Accounts] (
    [ID] INTEGER  PRIMARY KEY NOT NULL,
    [Name] VARCHAR(32)  UNIQUE NOT NULL,
    [Pass] VARCHAR(128)  UNIQUE NOT NULL,
    [Active] INTEGER DEFAULT 0 NULL
);");

StepBenchmark("Creation");

for (local i = 0; i <= 10000; ++i)
{
    db.ExecF("INSERT INTO [Accounts] (ID, Name, Pass, Active) VALUES (%d, 'user_%d', 'password_%d', %d);"
                i, i, i, i % 2);
}

StepBenchmark("Insertion");

StopBenchmark(" SQLITE WRITE ");

Benchmark results:
[USR] ---------------------------- SQLITE WRITE ----------------------------
[USR]
[INF] Connection      :         865 microseconds
[INF] Creation        :        1486 microseconds
[INF] Insertion       :     4442478 microseconds
[INF] Total           :     4444829 microseconds
[USR]
[USR] ---------------------------- SQLITE WRITE ----------------------------

Much better, right? But still behind INI: 4.44 seconds is way more than 0.40 seconds.



SQLITE: WRITE OPTIMIZATION #2

In this benchmark I will take advantage of transactions to write large amounts of data in bulk, while keeping the previous optimization. This optimization does not apply to regular code where you barely run 1-2 insert queries every second, but let's try to outperform the INI bulk write speeds.

StartBenchmark(" SQLITE WRITE ");

local db = SQLite.Connection("test.db");

db.Exec("PRAGMA synchronous = OFF");

StepBenchmark("Connection");

db.Exec(@"CREATE TABLE IF NOT EXISTS [Accounts] (
    [ID] INTEGER  PRIMARY KEY NOT NULL,
    [Name] VARCHAR(32)  UNIQUE NOT NULL,
    [Pass] VARCHAR(128)  UNIQUE NOT NULL,
    [Active] INTEGER DEFAULT 0 NULL
);");

StepBenchmark("Creation");

{
    local sqtrans = SQLite.Transaction(db);

    for (local i = 0; i <= 10000; ++i)
    {
        db.ExecF("INSERT INTO [Accounts] (ID, Name, Pass, Active) VALUES (%d, 'user_%d', 'password_%d', %d);"
                    i, i, i, i % 2);
    }

    sqtrans.Commit();
}

StepBenchmark("Insertion");

StopBenchmark(" SQLITE WRITE ");

As you can see, all I did was wrap the loop in a single transaction.

Benchmark results:
[USR] ---------------------------- SQLITE WRITE ----------------------------
[USR]
[INF] Connection      :         748 microseconds
[INF] Creation        :        1574 microseconds
[INF] Insertion       :      210182 microseconds
[INF] Total           :      212504 microseconds
[USR]
[USR] ---------------------------- SQLITE WRITE ----------------------------

Who's the winner now? 0.21 seconds compared to 0.40 seconds for INI.



SQLITE: WRITE OPTIMIZATION #3

In this benchmark I will extend the previous benchmark by using a pre-compiled statement. As I've mentioned before, SQLite has to compile the query each time you execute it. Wouldn't it be nice if we could compile it once and just update the values we want to insert?

All I did this time was to create the statement once:
    local stmt = SQLite.Statement(db, "INSERT INTO [Accounts] (ID, Name, Pass, Active) VALUES (?, ?, ?, ?);");
And then simply update the values that I want to insert:
        stmt.Reset();
        stmt.IBindV(1, i);
        stmt.IBindS(2, "user_"+i);
        stmt.IBindS(3, "password_"+i);
        stmt.IBindV(4, i % 2);
        stmt.Step();

NOTE: This is also the most efficient method of avoiding SQL injection on your server.
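
To see why, compare the two approaches with a hypothetical player-supplied value (the name variable is just for illustration):

// UNSAFE: a name like ' OR '1'='1 becomes part of the SQL itself
db.QueryF("SELECT * FROM [Accounts] WHERE Name='%s';", name);

// SAFE: bound values are handed to the compiled statement as plain data, never parsed as SQL
local stmt = SQLite.Statement(db, "SELECT * FROM [Accounts] WHERE Name=?;");
stmt.IBindS(1, name);
stmt.Step();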

The resulting code was:
StartBenchmark(" SQLITE WRITE ");

local db = SQLite.Connection("test.db");

db.Exec("PRAGMA synchronous = OFF");

StepBenchmark("Connection");

db.Exec(@"CREATE TABLE IF NOT EXISTS [Accounts] (
    [ID] INTEGER  PRIMARY KEY NOT NULL,
    [Name] VARCHAR(32)  UNIQUE NOT NULL,
    [Pass] VARCHAR(128)  UNIQUE NOT NULL,
    [Active] INTEGER DEFAULT 0 NULL
);");

StepBenchmark("Creation");

{
    local stmt = SQLite.Statement(db, "INSERT INTO [Accounts] (ID, Name, Pass, Active) VALUES (?, ?, ?, ?);");
    local sqtrans = SQLite.Transaction(db);

    for (local i = 0; i <= 10000; ++i)
    {
        stmt.Reset();
        stmt.IBindV(1, i);
        stmt.IBindS(2, "user_"+i);
        stmt.IBindS(3, "password_"+i);
        stmt.IBindV(4, i % 2);
        stmt.Step();
    }

    sqtrans.Commit();
}

StepBenchmark("Insertion");

StopBenchmark(" SQLITE WRITE ");

Benchmark results:
[USR] ---------------------------- SQLITE WRITE ----------------------------
[USR]
[INF] Connection      :         760 microseconds
[INF] Creation        :        1563 microseconds
[INF] Insertion       :      139581 microseconds
[INF] Total           :      141904 microseconds
[USR]
[USR] ---------------------------- SQLITE WRITE ----------------------------

Seems we could go even faster than INI. This time the whole operation took around 0.13 seconds compared to the 0.40 seconds from INI.



SQLITE: WRITE OPTIMIZATION #4

In this benchmark I will try to go even further by taking a small risk. I will attempt to force the rollback journal to be stored in memory.

All I did this time was to add this line after opening the database:
db.Exec("PRAGMA journal_mode = MEMORY");
NOTE: This has the disadvantage that if you lose power or your program crashes during a transaction, your database could be left in a corrupt state with a partially completed transaction. Normally, the plugin tries to avoid crashes as much as possible, and in most cases the transaction is rolled back automatically if something fails. Still, this approach has a red flag on it.
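
For what it's worth, newer versions of SQLite also offer a write-ahead log mode, which is generally fast while being safer than a MEMORY journal (assuming the SQLite bundled with the plugin is recent enough to support it):

db.Exec("PRAGMA journal_mode = WAL");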

The resulting code was:
StartBenchmark(" SQLITE WRITE ");

local db = SQLite.Connection("test.db");

db.Exec("PRAGMA synchronous = OFF");
db.Exec("PRAGMA journal_mode = MEMORY");

StepBenchmark("Connection");

db.Exec(@"CREATE TABLE IF NOT EXISTS [Accounts] (
    [ID] INTEGER  PRIMARY KEY NOT NULL,
    [Name] VARCHAR(32)  UNIQUE NOT NULL,
    [Pass] VARCHAR(128)  UNIQUE NOT NULL,
    [Active] INTEGER DEFAULT 0 NULL
);");

StepBenchmark("Creation");

{
    local stmt = SQLite.Statement(db, "INSERT INTO [Accounts] (ID, Name, Pass, Active) VALUES (?, ?, ?, ?);");
    local sqtrans = SQLite.Transaction(db);

    for (local i = 0; i <= 10000; ++i)
    {
        stmt.Reset();
        stmt.IBindV(1, i);
        stmt.IBindS(2, "user_"+i);
        stmt.IBindS(3, "password_"+i);
        stmt.IBindV(4, i % 2);
        stmt.Step();
    }

    sqtrans.Commit();
}

StepBenchmark("Insertion");

StopBenchmark(" SQLITE WRITE ");

Benchmark results:
[USR] ---------------------------- SQLITE WRITE ----------------------------
[USR]
[INF] Connection      :         753 microseconds
[INF] Creation        :        1199 microseconds
[INF] Insertion       :      139712 microseconds
[INF] Total           :      141664 microseconds
[USR]
[USR] ---------------------------- SQLITE WRITE ----------------------------

Unfortunately, this was as far as it could be optimized. On my system it didn't show any benefit at all, simply because everything was executed in a single transaction; you need a few more transactions to see the benefits. On a system with slow IO speeds, however, this will definitely start to show up no matter the number of transactions.



So there you go folks. If you truly want to optimize something, you should know that there are always ways to do it. You just might not know about them, or you simply ignore all forms of optimization, as I've seen some of you do very frequently. In the next post I'll go into detail about what SQLite allows you to do and INI doesn't.

#32
SQLITE: READ OPTIMIZATION #1

In this benchmark I will try to optimize the read operation so that we can match the INI benchmark or even obtain better results. The first thing that comes to mind is to use pre-compiled statements like we did in the write benchmarks.

All I did was create the statement once:
local stmt = SQLite.Statement(db, "SELECT * FROM [Accounts] WHERE ID=?;");
And bind the values that I want to work with:
    stmt.Reset();
    stmt.IBindV(1, i);

NOTE: This is also the most efficient method of avoiding SQL injection on your server.

The resulting code was:
StartBenchmark(" SQLITE READ ");

local db = SQLite.Connection("test.db");

StepBenchmark("Connection");

local acc_id, acc_name, acc_pass, acc_active;

local stmt = SQLite.Statement(db, "SELECT * FROM [Accounts] WHERE ID=?;");

for (local i = 0; i <= 10000; ++i)
{
    stmt.Reset();
    stmt.IBindV(1, i);

    if (!stmt.Step())
    {
        throw "this account does not exist";
    }

    acc_id = stmt.Get("ID").Integer;
    acc_name = stmt.Get("Name").String;
    acc_pass = stmt.Get("Pass").String;
    acc_active = stmt.Get("Active").Boolean;
}

StepBenchmark("Processing");

StopBenchmark(" SQLITE READ ");

Benchmark result:
[USR] ----------------------------- SQLITE READ ----------------------------
[USR]
[INF] Connection      :         226 microseconds
[INF] Processing      :      521106 microseconds
[INF] Total           :      521332 microseconds
[USR]
[USR] ----------------------------- SQLITE READ ----------------------------

Still slower than INI. 0.52 seconds is much slower than 0.17 seconds.



SQLITE: READ OPTIMIZATION #2

In this benchmark I'll be extending the previous benchmark by removing the need to search for column names on each iteration. We know that in our loop we are going to retrieve the same columns over and over. Wouldn't it be nice if we could retrieve them once and keep using them afterwards, without having to run a search every time?

All I did was to retrieve the columns once outside the loop:
local col_id = stmt.Get("ID")
    , col_name = stmt.Get("Name")
    , col_pass = stmt.Get("Pass")
    , col_active = stmt.Get("Active");

And then use them whenever I want to query information about the current row in the statement:
    acc_id = col_id.Integer;
    acc_name = col_name.String;
    acc_pass = col_pass.String;
    acc_active = col_active.Boolean;

The resulting code was:
StartBenchmark(" SQLITE READ ");

local db = SQLite.Connection("test.db");

StepBenchmark("Connection");

local acc_id, acc_name, acc_pass, acc_active;

local stmt = SQLite.Statement(db, "SELECT * FROM [Accounts] WHERE ID=?;");

local col_id = stmt.Get("ID")
    , col_name = stmt.Get("Name")
    , col_pass = stmt.Get("Pass")
    , col_active = stmt.Get("Active");

for (local i = 0; i <= 10000; ++i)
{
    stmt.Reset();
    stmt.IBindV(1, i);

    if (!stmt.Step())
    {
        throw "this account does not exist";
    }

    acc_id = col_id.Integer;
    acc_name = col_name.String;
    acc_pass = col_pass.String;
    acc_active = col_active.Boolean;
}

StepBenchmark("Processing");

StopBenchmark(" SQLITE READ ");

Benchmark result:
[USR] ----------------------------- SQLITE READ ----------------------------
[USR]
[INF] Connection      :         238 microseconds
[INF] Processing      :      429721 microseconds
[INF] Total           :      429959 microseconds
[USR]
[USR] ----------------------------- SQLITE READ ----------------------------

Still behind INI: 0.42 seconds is still slower than the 0.17 seconds we had with INI.



SQLITE: READ OPTIMIZATION #3

In this benchmark I'll be extending the previous code by removing the need to query the database for each individual account. This also shows something you can do with SQLite but not with INI: retrieve data in bulk. Of course, this is not how most programs access their data, but it shows what SQLite can do in these kinds of situations.

All I did was to remove the where clause from the query:
local stmt = SQLite.Statement(db, "SELECT * FROM [Accounts];");
And also use a different kind of loop to iterate the selected data:
while (stmt.Step())
{
    acc_id = col_id.Integer;
    acc_name = col_name.String;
    acc_pass = col_pass.String;
    acc_active = col_active.Boolean;
}

The resulting code was:
StartBenchmark(" SQLITE READ ");

local db = SQLite.Connection("test.db");

StepBenchmark("Connection");

local acc_id, acc_name, acc_pass, acc_active;

local stmt = SQLite.Statement(db, "SELECT * FROM [Accounts];");

local col_id = stmt.Get("ID")
    , col_name = stmt.Get("Name")
    , col_pass = stmt.Get("Pass")
    , col_active = stmt.Get("Active");

while (stmt.Step())
{
    acc_id = col_id.Integer;
    acc_name = col_name.String;
    acc_pass = col_pass.String;
    acc_active = col_active.Boolean;
}

StepBenchmark("Processing");

if (acc_id != 10000)
{
    throw "there was an issue with the query";
}

StopBenchmark(" SQLITE READ ");

Benchmark results:
[USR] ----------------------------- SQLITE READ ----------------------------
[USR]
[INF] Connection      :         245 microseconds
[INF] Processing      :       54239 microseconds
[INF] Total           :       54484 microseconds
[USR]
[USR] ----------------------------- SQLITE READ ----------------------------

There we go. Now we're starting to see the true power of SQLite. 0.054 seconds is a much better result than 0.17 seconds from INI.



So there you go folks. Again demonstrating that if you truly want to optimize something, you can definitely achieve better results. It all depends on the situation that you're in. In the next post I'll take the current SQLite benchmarks to the extreme by using in-memory databases to show how much disk IO impacts performance.

#33
SQLITE: WRITE/READ IN-MEMORY DATABASE

The reason I've created this benchmark is that the INI file is stored in memory and I wanted a true one-on-one comparison. In this benchmark I'll be combining the code from write optimization #3 and read optimization #2, which means that I'll still be searching the database for each individual account.

The resulting code will be:
StartBenchmark(" SQLITE MEMORY READ/WRITE ");

local db = SQLite.Connection(":memory:");

db.Exec("PRAGMA synchronous = OFF");

StepBenchmark("Connection");

db.Exec(@"CREATE TABLE IF NOT EXISTS [Accounts] (
    [ID] INTEGER  PRIMARY KEY NOT NULL,
    [Name] VARCHAR(32)  UNIQUE NOT NULL,
    [Pass] VARCHAR(128)  UNIQUE NOT NULL,
    [Active] INTEGER DEFAULT 0 NULL
);");

StepBenchmark("Creation");

{
    local stmt = SQLite.Statement(db, "INSERT INTO [Accounts] (ID, Name, Pass, Active) VALUES (?, ?, ?, ?);");
    local sqtrans = SQLite.Transaction(db);

    for (local i = 0; i <= 10000; ++i)
    {
        stmt.Reset();
        stmt.IBindV(1, i);
        stmt.IBindS(2, "user_"+i);
        stmt.IBindS(3, "password_"+i);
        stmt.IBindV(4, i % 2);
        stmt.Step();
    }

    sqtrans.Commit();
}

StepBenchmark("Insertion");

local acc_id, acc_name, acc_pass, acc_active;

local stmt = SQLite.Statement(db, "SELECT * FROM [Accounts] WHERE ID=?;");

local col_id = stmt.Get("ID")
    , col_name = stmt.Get("Name")
    , col_pass = stmt.Get("Pass")
    , col_active = stmt.Get("Active");

for (local i = 0; i <= 10000; ++i)
{
    stmt.Reset();
    stmt.IBindV(1, i);

    if (!stmt.Step())
    {
        throw "this account does not exist";
    }

    acc_id = col_id.Integer;
    acc_name = col_name.String;
    acc_pass = col_pass.String;
    acc_active = col_active.Boolean;
}

StepBenchmark("Processing");

StopBenchmark(" SQLITE MEMORY READ/WRITE ");

Benchmark results:
[USR] ---------------------- SQLITE MEMORY READ/WRITE ----------------------
[USR]
[INF] Connection      :         319 microseconds
[INF] Creation        :        1578 microseconds
[INF] Insertion       :      137226 microseconds
[INF] Processing      :       83858 microseconds
[INF] Total           :      222981 microseconds
[USR]
[USR] ---------------------- SQLITE MEMORY READ/WRITE ----------------------

You may notice something funny: the write speed still hasn't changed from 0.13 seconds, which means that the real bottleneck there was the actual Squirrel code. The read time, on the other hand, decreased from 0.42 seconds to a mere 0.083 seconds. And if you remember, this read code was not even the fastest one.



SQLITE: WRITE/READ BULK IN-MEMORY DATABASE

In this benchmark I'll be expanding on the previous code, except I'll be using the code from read optimization #3, which reads the entire data set at once.

The resulting code was:
StartBenchmark(" SQLITE MEMORY READ/WRITE ");

local db = SQLite.Connection(":memory:");

db.Exec("PRAGMA synchronous = OFF");

StepBenchmark("Connection");

db.Exec(@"CREATE TABLE IF NOT EXISTS [Accounts] (
    [ID] INTEGER  PRIMARY KEY NOT NULL,
    [Name] VARCHAR(32)  UNIQUE NOT NULL,
    [Pass] VARCHAR(128)  UNIQUE NOT NULL,
    [Active] INTEGER DEFAULT 0 NULL
);");

StepBenchmark("Creation");

{
    local stmt = SQLite.Statement(db, "INSERT INTO [Accounts] (ID, Name, Pass, Active) VALUES (?, ?, ?, ?);");
    local sqtrans = SQLite.Transaction(db);

    for (local i = 0; i <= 10000; ++i)
    {
        stmt.Reset();
        stmt.IBindV(1, i);
        stmt.IBindS(2, "user_"+i);
        stmt.IBindS(3, "password_"+i);
        stmt.IBindV(4, i % 2);
        stmt.Step();
    }

    sqtrans.Commit();
}

StepBenchmark("Insertion");

local acc_id, acc_name, acc_pass, acc_active;

local stmt = SQLite.Statement(db, "SELECT * FROM [Accounts];");

local col_id = stmt.Get("ID")
    , col_name = stmt.Get("Name")
    , col_pass = stmt.Get("Pass")
    , col_active = stmt.Get("Active");

while (stmt.Step())
{
    acc_id = col_id.Integer;
    acc_name = col_name.String;
    acc_pass = col_pass.String;
    acc_active = col_active.Boolean;
}

StepBenchmark("Processing");

StopBenchmark(" SQLITE MEMORY READ/WRITE ");

Benchmark results:
[USR] ---------------------- SQLITE MEMORY READ/WRITE ----------------------
[USR]
[INF] Connection      :         324 microseconds
[INF] Creation        :        1113 microseconds
[INF] Insertion       :      137058 microseconds
[INF] Processing      :       49567 microseconds
[INF] Total           :      188062 microseconds
[USR]
[USR] ---------------------- SQLITE MEMORY READ/WRITE ----------------------

Again, the same write speed, and a small bump in read speed from 0.054 seconds to 0.049 seconds. This means we've reached the point where the actual Squirrel code is the bottleneck.

#34
SQLITE: WRITE OPTIMIZATION USING QUEUES

In this benchmark I'll be extending the code from write optimization #2, which took about 0.21 seconds to execute and required explicit use of transactions. The reason I've created this benchmark is to demonstrate the effectiveness of the queue implementation that the plugin/module comes with. Internally this uses a transaction to send all, or a portion of, the queued queries. Queued queries are never executed when created; instead, they're executed when you decide, and in a single transaction.

The resulting code was:
StartBenchmark(" SQLITE WRITE ");

local db = SQLite.Connection("test.db");

db.Exec("PRAGMA synchronous = OFF");

StepBenchmark("Connection");

db.Exec(@"CREATE TABLE IF NOT EXISTS [Accounts] (
    [ID] INTEGER  PRIMARY KEY NOT NULL,
    [Name] VARCHAR(32)  UNIQUE NOT NULL,
    [Pass] VARCHAR(128)  UNIQUE NOT NULL,
    [Active] INTEGER DEFAULT 0 NULL
);");

StepBenchmark("Creation");

// Allocate space upfront for 10000 queries
db.ReserveQueue(10000);

StepBenchmark("Allocation");

for (local i = 0; i <= 10000; ++i)
{
    db.QueueF("INSERT INTO [Accounts] (ID, Name, Pass, Active) VALUES (%d, 'user_%d', 'password_%d', %d);"
                i, i, i, i % 2);
}

StepBenchmark("Queue");

db.Flush(this, function(status, query)
{
    // Log the incident
    SqLog.Err("Failed to flush query: %s", query);
    SqLog.Inf("=> Reason: %s", db.ErrMsg);
    // Continue flushing the remaining queries
    return true;
});

StepBenchmark("Flush");

StopBenchmark(" SQLITE WRITE ");

Benchmark result:
[USR] ---------------------------- SQLITE WRITE ----------------------------
[USR]
[INF] Connection      :         770 microseconds
[INF] Creation        :        1478 microseconds
[INF] Allocation      :         817 microseconds
[INF] Queue           :       18525 microseconds
[INF] Flush           :      176447 microseconds
[INF] Total           :      198037 microseconds
[USR]
[USR] ---------------------------- SQLITE WRITE ----------------------------

It's not a significant performance bump: just 0.17 seconds plus the time to queue them. However, if you take into account the fact that each standalone query gets its own transaction, then in real code, where you only execute a query here and there, this approach will have a significant impact.



This approach is best used with a routine that flushes a certain number of queries at certain intervals. For example, the following code flushes 10 queries every second:
SqRoutine.Create(this, function(num)
{
    db.Flush(num, this, function(status, query)
    {
        // Log the incident
        SqLog.Err("Failed to flush query: %s", query);
        SqLog.Inf("=> Reason: %s", db.ErrMsg);
        // Continue flushing the remaining queries
        return true;
    });
}, 1000, 0, 10).SetTag("DatabaseFlush");

Or the following, which flushes all queued queries every 4 seconds:
SqRoutine.Create(this, function()
{
    db.Flush(this, function(status, query)
    {
        // Log the incident
        SqLog.Err("Failed to flush query: %s", query);
        SqLog.Inf("=> Reason: %s", db.ErrMsg);
        // Continue flushing the remaining queries
        return true;
    });
}, 4000, 0).SetTag("DatabaseFlush");
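
And since the callback's return value decides whether flushing continues, presumably you could also abort on the first failure (a sketch based on the return-value convention above, not something these examples use):

db.Flush(this, function(status, query)
{
    // Log the incident
    SqLog.Err("Failed to flush query: %s", query);
    SqLog.Inf("=> Reason: %s", db.ErrMsg);
    // Returning false should stop flushing the remaining queries
    return false;
});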

These are just examples of what can be achieved.

#35

Implemented a TCC module tonight to generate new Squirrel API at runtime, or to move performance-critical code to native machine code if Squirrel is lagging.

#36

I am looking for a decent host for the wiki; the one I'm currently using might not be stable for long. The host only needs PHP (no database), because the documentation is stored in plain text, and FTP access for making bulk changes.

If you have any suggestions for a decent stable host then feel free to let me know. Preferably by PM.

#37
I am closing this topic because the plug-in is stable enough for use in game modes, even though it is still in beta. There isn't much documentation because I can't maintain the plug-in and the documentation at the same time. The plug-in expects someone who can read their way through the source code (quite simple once you get the hang of it). Future documentation will be available here. It's a temporary host for now.

The plug-in itself offers at least as much as the official plug-in. Actually, a lot more than that.

The SQLite module is probably the most complete module of all.
The IRC module follows, also being mostly complete. Currently waiting to implement a way of adding colors to messages.
The XML module is quite functional but is missing a way of iterating over document nodes.
The JSON module is still a work in progress.
The MySQL module is also a work in progress. It works but offers only primitive ways of retrieving data.
The MaxmindDB module is also a work in progress. It is not yet functional and should not be used or even compiled.
The Mongoose module is also a work in progress. It is not yet functional and should not be used or even compiled.

Please note that this is quite an advanced scripting plug-in. If you've never worked with higher-level programming concepts, then this plug-in is not for you. This plug-in is meant for people who want more than what the official plug-in provides, and it tries to address all the issues and limitations currently found in the official plug-in.

The plug-in is provided in such a way that it's easy for anyone to compile. On Windows, all you need is mingw-w64 and Code::Blocks, and on Linux you pretty much use GCC instead of MinGW.

If you cannot build the binaries yourself, then you can probably ask me or @Drake on #SLC @LUNet. A topic with binaries will be created when an official release is made, after all modules are complete.

Many thanks to @Drake for helping out with testing the plug-in and for being an early adopter.

Plug-in repository: https://github.com/iSLC/VCMP-SqMod