It seems to me that your full database can easily be held "in memory".
Assuming that the table with 12 million records (and 3 fields per
record) is mostly numeric or other "compact" data, it appears that the
total storage required is on the order of 100 Megabytes, well within
the capacity of a server-class desktop computer (or even a current
consumer-grade PC).
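To make the estimate concrete, here is a back-of-envelope sketch in C++. The field types are hypothetical (your actual schema may differ), but three 4-byte fields per record is a plausible "compact" row:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical compact record layout -- the actual field types depend on
// your schema, but three 4-byte fields is a plausible "compact" row.
struct Record {
    int32_t id;       // numeric key
    int32_t field2;   // second numeric field
    float   field3;   // third field
};

// Back-of-envelope storage estimate for the whole table:
// 12,000,000 records x 12 bytes = 144,000,000 bytes, i.e. ~144 MB.
constexpr std::size_t kRecords = 12'000'000;
constexpr std::size_t kBytes   = kRecords * sizeof(Record);
```

Even with some overhead for alignment and container bookkeeping, the total stays comfortably inside the RAM of an ordinary machine.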
It probably makes sense to manage the data with a database, esp. since
you already have a clear notion of "tables" that comprise it. But all
the read-only data could be loaded, as the application starts, into
arrays of specific C++ structures that are optimized for the
queries/algorithms to be executed.
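A sketch of that load-at-startup step follows. In the real application the rows would come from a query against your database; here they are fabricated so the example is self-contained, and the struct and function names are illustrative only:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical record structure; the real fields depend on your table schema.
struct Route {
    int32_t origin;
    int32_t destination;
    float   cost;
};

// Load the read-only table into a contiguous array once, at startup.
// Fabricated rows stand in for the result of a database query.
std::vector<Route> load_routes(std::size_t n) {
    std::vector<Route> routes;
    routes.reserve(n);                 // one allocation for the whole table
    for (std::size_t i = 0; i < n; ++i)
        routes.push_back({static_cast<int32_t>(i),
                          static_cast<int32_t>(i + 1),
                          1.0f});
    return routes;
}
```

Keeping the records in one contiguous `std::vector` gives the algorithms cache-friendly sequential access, which matters far more for performance than which database the rows were read from.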
Holding the "static" data in a database allows for backups and
maintenance to be carried out with standard tools.
Since your application sounds like it is very computational, C++ is a
natural implementation language. Of course if the "calculation" is in
fact highly combinatorial in nature, there might be some argument for
using another language, but generally C++ will give the best
performance (at the cost of code that is somewhat harder to maintain
than in other "high level" languages).
Probably you will want to design the application for
"multi-threading". I think this is what you are getting at with your
notion of multi-agent design. A single computer with multiple CPUs is
no longer an exotic machine, and the read-only data can be shared
easily among multiple "worker" threads, each grabbing a "job" from the
jobs-table and running with it.
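That worker-thread pattern can be sketched in a few lines of standard C++. The names here (`run_workers`, the summing "job") are illustrative: the read-only dataset is shared by const reference, and an atomic counter plays the role of the jobs-table:

```cpp
#include <atomic>
#include <numeric>
#include <thread>
#include <vector>

// Minimal sketch of the worker-thread pattern.  The read-only data is
// shared among all threads without locking; an atomic counter hands out
// job ids, standing in for rows of a jobs-table.
std::vector<long> run_workers(const std::vector<int>& data,
                              int num_jobs, int num_threads) {
    std::atomic<int> next_job{0};
    std::vector<long> results(num_jobs, 0);

    auto worker = [&]() {
        for (;;) {
            int job = next_job.fetch_add(1);   // grab the next job
            if (job >= num_jobs) break;
            // Each "job" here just sums the shared read-only data;
            // a real job would run one of your algorithms.
            results[job] = std::accumulate(data.begin(), data.end(), 0L);
        }
    };

    std::vector<std::thread> pool;
    for (int t = 0; t < num_threads; ++t) pool.emplace_back(worker);
    for (auto& th : pool) th.join();
    return results;
}
```

Because the dataset is never written after startup, no locking is needed around it; only the job counter has to be thread-safe, and an atomic handles that cheaply.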
Since the data is being retrieved from memory for the immediate
application, it probably matters little what database software is used
to "retain" the data while the application is not running. I'd feel
comfortable using MySQL or perhaps even trying GNU SQL Server, both
open-source database management systems that would be free of
licensing costs.
A database management engine tries to "cache" the most useful
information in memory, in order to optimize its query performance. In
your case the entire dataset should fit into memory, so the critical
design aspects will be things like creating indexes that make it quick
to locate the information needed.
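In memory, "creating an index" can be as simple as building a hash map from key to array position once at startup, so lookups are O(1) instead of a linear scan over 12 million rows. The struct and function names below are illustrative:

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct KeyedRecord {       // hypothetical record layout
    int32_t key;
    float   value;
};

// Build a hash index once at startup.  The map stores positions into the
// record array rather than copies of the records, so it stays small.
std::unordered_map<int32_t, std::size_t>
build_index(const std::vector<KeyedRecord>& records) {
    std::unordered_map<int32_t, std::size_t> index;
    index.reserve(records.size());     // avoid rehashing during the build
    for (std::size_t i = 0; i < records.size(); ++i)
        index.emplace(records[i].key, i);
    return index;
}
```

For range queries, sorting the array and using binary search (`std::lower_bound`) would be the analogous trick; which index you build depends on which lookups your algorithms perform most often.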
I feel comfortable with the "hardware" design of putting the
application on a single box; it has the flavor of a mostly "batch"
operation. If you want some specific ideas about sourcing the GIS
data needed for the application, I'll need to hear something further
about the kind of data required.
The operating system could logically be either a server version of
Windows, or a server version of a GNU licensed operating system, e.g.
Linux. Here's the "mother" site of many open source projects:
Consensus seems to be that Linux is much "lighter" in its operating
system overhead and demands on computer memory than is Windows, so
perhaps there's a performance edge there for your highly computational
workload.
There are a variety of ways to go about locating programmers or
software development firms on the Internet. I cannot recommend any
from first-hand experience but I'd be happy to share a few sites that
I've come across:
For a small business this site allows you to post task/project
descriptions and receive bids on proposed work. My impression is that
they place funds "in escrow" which are released on satisfactory
completion of milestones, providing a certain amount of protection for
both parties.  Ratings and feedback on "providers" are kept, based on
evaluations completed by "consumers".
A somewhat similar if less well known outfit.
Seems to have a slightly different "granularity" in that a project
team may be assembled using a coterie of providers, rather than as a
result of accepting a single provider's bid.
Another place to shop your projects, probably distinguished by its
emphasis on business to business commitments rather than on pick-up
players working a solo angle.
Please let me know if some part of my answer could benefit from
clarification or further detail.