B1Framework's Data Access Layer and
BFC's Database Library both supply facilities that are not provided by the current generation of commercial database management systems, but are logical extensions that are often necessary for large-scale development.
These database libraries are designed to make it simple to write efficient,
dependable database applications capable of handling large numbers of simultaneous users and processes.
The components have been carefully optimized for each supported DBMS, to achieve reliable transaction processing with a minimum amount of database locking. There are low-level optimizations to prevent database deadlock and improve performance and concurrency for database-intensive applications.
The streamlined interfaces simplify multi-user database programming, relieving developers of much of the complexity of building scalable applications that perform well for large numbers of users and processes, or for large amounts of data. These component libraries provide efficient solutions to problems such as:
- Application crashes that leave half-baked (inconsistent) data in the database (making rapid recovery impossible)
- Multiple users simultaneously (incorrectly) modifying the same records
- Users getting inconsistent views (snapshots) of data when other users are modifying that data
- Large-scale record sorts and transmissions of large numbers of records
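The first of these problems, a crash leaving inconsistent data behind, is the classic case for atomic transactions. Base One does not publish its libraries' internals, so the following is only a minimal sketch of the general technique using Python's standard sqlite3 module; the `accounts` table and `transfer` function are illustrative assumptions, not Base One APIs:

```python
import sqlite3

# In-memory database for illustration; a real system would use a server DBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move funds atomically: either both updates apply, or neither does."""
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        # Simulate a failure between the two updates:
        if amount > 100:
            raise RuntimeError("simulated failure mid-transaction")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.commit()
    except Exception:
        conn.rollback()  # no half-finished transfer survives
        raise
```

If the failure fires after the debit but before the credit, the rollback discards the partial work, so the database never holds "half-baked" data and recovery is immediate.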
Since the early 1990s, Microsoft has provided a wide variety of
database interfaces (ODBC, MFC database classes, DAO, RDO, OLE DB,
ADO, ADO.NET, LINQ, and the Entity Framework). None has offered a framework for the rapid
development of applications that can handle the thorny
multi-user and scalability issues.
LINQ and the Entity Framework offer an improved programming
interface but do little to improve performance. The latest Microsoft database interfaces require substantial custom design and programming to
implement efficient data access for commercial, multi-user systems.
For example, the ADO.NET "disconnected data set" model has proven to be fraught with
performance problems as the database table sizes increase. Returning an entire
result set or a collection of result sets is often impractical for large-scale
systems. An underlying objective of both of Base One's database
libraries is to prevent the
all-too-common disaster of database applications that work perfectly
in prototype, but perform dismally when put into production.
One frequent cause of severe performance problems is the failure to plan for the true costs of the transaction processing required to maintain data integrity. Base One's components
minimize, or eliminate altogether, the need for extra code to prevent database locking protocols from causing
serious interference between multiple users and processes.
Building on Base One's middleware results in solutions that keep the demands on the database to a minimum. For example, automated client-side caching of both data and metadata
outside the database helps prevent database connection
"overload", and "optimistic" concurrency control
automatically minimizes the length of time locks need to be held. This
assures efficient, rigorous handling of data access collisions, such as multiple users trying to modify the same records or index pages.
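Optimistic concurrency control of this kind is conventionally implemented with a version (or timestamp) column: no lock is held while a user edits, and a stale update is detected and rejected at write time. A minimal sketch in Python's sqlite3 follows; the `records` schema and `update_record` helper are illustrative assumptions, not Base One's actual interface:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, data TEXT, version INTEGER)")
conn.execute("INSERT INTO records VALUES (1, 'original', 1)")
conn.commit()

def update_record(conn, rec_id, new_data, expected_version):
    """Apply the update only if nobody changed the row since we read it.
    The WHERE clause's version check detects collisions without long-held locks."""
    cur = conn.execute(
        "UPDATE records SET data = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_data, rec_id, expected_version))
    conn.commit()
    return cur.rowcount == 1  # False means another writer got there first

# Two "users" read the same row (version 1) ...
row = conn.execute("SELECT data, version FROM records WHERE id = 1").fetchone()

# ... the first writer succeeds; the second writer's stale update is rejected.
assert update_record(conn, 1, "first edit", row[1]) is True
assert update_record(conn, 1, "second edit", row[1]) is False
```

The losing writer gets a clean failure signal and can re-read the row and retry, rather than silently overwriting the other user's change.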
The database libraries automatically handle many such arcane, but critically
important details necessary for fast transaction processing. These techniques allow a much higher degree of concurrency
than more cumbersome distributed transaction protocols, which are commonly
over-used. Most of the time, programmers can simply use faster, basic
transaction commit/rollback logic to tie together recoverable sequences of
operations. The efficiency of Base One's approach to distributed processing
has been proven in production financial systems that handle high volumes of
data.
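The client-side caching of data and metadata mentioned earlier can be sketched in its simplest form: keep slowly-changing reference data in process memory so that repeated lookups never touch the database connection again. This toy example uses Python's sqlite3; the `CachedLookup` class and `countries` table are hypothetical illustrations, not Base One's caching layer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE countries (code TEXT PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO countries VALUES (?, ?)",
                 [("US", "United States"), ("FR", "France")])
conn.commit()

class CachedLookup:
    """Cache slowly-changing reference data on the client, so repeated
    lookups are served from memory instead of loading the database."""
    def __init__(self, conn):
        self.conn = conn
        self.cache = {}
        self.db_hits = 0  # instrumentation: counts queries actually sent

    def country_name(self, code):
        if code not in self.cache:
            self.db_hits += 1
            row = self.conn.execute(
                "SELECT name FROM countries WHERE code = ?", (code,)).fetchone()
            self.cache[code] = row[0] if row else None
        return self.cache[code]

lookup = CachedLookup(conn)
for _ in range(1000):
    lookup.country_name("US")  # only the first call reaches the database
```

A thousand lookups cost one round trip; across many clients, this is what keeps database connections from being "overloaded" by repetitive reads.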