Posts

Optimistic Locking vs Pessimistic Locking

This post takes the DB transaction discussion one step further. Two ways application developers handle DB concurrency are optimistic locking and pessimistic locking. While pessimistic locking builds on transaction isolation levels, optimistic locking is implemented using version or timestamp columns. More details on these concurrency measures are provided below. Optimistic Locking : As the name suggests, it takes an optimistic approach. It allows concurrency issues to happen and then takes action to handle them; there is no preventive measure. It is suitable for a database with a relatively large number of records and fewer users, which makes conflicts unlikely. It doesn't lock the rows, but uses version or timestamp columns to check for updates. It's fairly easy to implement. Pessimistic Locking : This is the total opposite of optimistic locking; it prevents concurrency issues beforehand by locking the DB rows so that these...
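The version-column check at the heart of optimistic locking can be sketched in a few lines. This is a hedged, in-memory illustration (the class and field names are made up); in a real database the same idea is a single `UPDATE ... WHERE id = ? AND version = ?` whose affected-row count tells the caller whether it lost the race.

```java
import java.util.concurrent.atomic.AtomicReference;

// In-memory sketch of optimistic locking. In SQL the equivalent is:
//   UPDATE account SET balance = ?, version = version + 1
//   WHERE id = ? AND version = ?
// and an affected-row count of 0 means another transaction got there first.
public class OptimisticDemo {
    record Row(int balance, int version) {}

    static final AtomicReference<Row> row = new AtomicReference<>(new Row(100, 1));

    // Returns true if the update succeeded, false if the row's version moved on.
    static boolean update(int expectedVersion, int newBalance) {
        Row current = row.get();
        if (current.version() != expectedVersion) {
            return false; // stale read: caller must re-read and retry
        }
        return row.compareAndSet(current, new Row(newBalance, expectedVersion + 1));
    }

    public static void main(String[] args) {
        boolean first = update(1, 90); // succeeds, version becomes 2
        boolean stale = update(1, 80); // fails: still quoting version 1
        System.out.println(first + " " + stale); // prints "true false"
    }
}
```

A caller that gets `false` back simply re-reads the row (picking up the new version) and reapplies its change, which is the "handle it after the fact" approach the post describes.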
Recent posts

Transaction ISOLATION levels

This post is dedicated to another important topic related to DBs, 'Transaction Isolation Levels'. There are four different transaction isolation levels. While most databases support all four, some, like PostgreSQL, do not implement every one distinctly. Isolation levels control how much of the uncommitted data from a particular transaction is visible to other transactions. Before delving into the individual isolation levels, let's try to understand the kinds of inconsistencies that occur when two or more transactions operate on the same data at the same time. Dirty Read : This is the case when one transaction reads data that contains uncommitted changes from another transaction. If the other transaction rolls back the operation, the first transaction still holds that invalid data, which leads to inconsistencies. Non-repeatable Read : This occurs when the same query inside the same transaction returns different results when executed repeatedly. In such situations, one tran...
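In Java, the isolation level is chosen per connection through the JDBC API. A minimal sketch, assuming a placeholder JDBC URL and credentials (all names here are illustrative):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Sketch: open a connection and pick its isolation level. The URL,
// user and password are placeholders, not a real database.
public class IsolationDemo {
    static Connection open(String url, String user, String pass) throws SQLException {
        Connection conn = DriverManager.getConnection(url, user, pass);
        // REPEATABLE_READ rules out dirty and non-repeatable reads;
        // phantom reads may still occur at this level.
        conn.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
        return conn;
    }
}
```

The four levels map to the `Connection` constants `TRANSACTION_READ_UNCOMMITTED`, `TRANSACTION_READ_COMMITTED`, `TRANSACTION_REPEATABLE_READ` and `TRANSACTION_SERIALIZABLE`.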

DB transaction ACID properties

A DB transaction is a combination of different operations. If not performed properly, different transactions working on the same data at the same time may leave the data in a corrupted state, affecting the application. In this article, I am going to illustrate the ACID properties of DB transactions through the example of a money transfer application moving funds between two different accounts A and B. To begin with, let's suppose that accounts A and B both have an initial balance of $100. ACID stands for Atomicity , Consistency , Isolation and Durability . Let's try to understand these one by one. Atomicity : This is the property that mandates that once a transaction is started, either all the operations which are part of the transaction complete as a single unit of work, or none of them do. It is maintained by the transaction management component. If a debit of $10 is made from account A, then the corresponding credit of $10 al...
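The atomicity of the A-to-B transfer can be sketched without a real database. The snapshot-and-restore below is a stand-in for what `conn.setAutoCommit(false)` / `conn.commit()` / `conn.rollback()` do in JDBC; all class and method names here are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of atomicity: either both the debit and the credit apply, or
// neither does. With JDBC the same shape is setAutoCommit(false),
// two UPDATEs, then commit() on success or rollback() on failure.
public class TransferDemo {
    static final Map<String, Integer> balances = new HashMap<>();
    static { balances.put("A", 100); balances.put("B", 100); }

    static void transfer(String from, String to, int amount) {
        Map<String, Integer> snapshot = new HashMap<>(balances); // "begin"
        try {
            balances.put(from, balances.get(from) - amount);     // debit
            if (balances.get(from) < 0) {
                throw new IllegalStateException("insufficient funds");
            }
            balances.put(to, balances.get(to) + amount);         // credit
            // "commit": changes are already in place
        } catch (RuntimeException e) {
            balances.clear();
            balances.putAll(snapshot);                           // "rollback"
            throw e;
        }
    }
}
```

A successful `transfer("A", "B", 10)` leaves A at $90 and B at $110; a failing transfer restores both balances, so the half-applied debit-without-credit state is never visible afterwards.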

JDBC connection pooling

In this post, I am going to talk briefly about what JDBC connection pooling is and why it is important to use this technique in DB-driven applications. A typical JDBC connection life cycle involves the following steps - open a connection to the database by using the corresponding JDBC driver, open a TCP socket, read and write data over the socket, close the socket, and close the connection. As can be deduced from these steps, creating a JDBC connection is a fairly expensive operation. Generally speaking, the application shouldn't spend much time on JDBC connection creation while performing an operation involving DB interaction, as it adds to the latency. Unless it becomes absolutely necessary, creating a JDBC connection for each and every DB operation should be avoided for the sake of having a faster application. The alternative is to have a pool of readily available JDBC connections in the application runtime. Whenever the application needs a DB conn...
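The borrow/return cycle of a pool can be sketched with a blocking queue. This is a toy illustration, not a production pool (real applications would reach for a library such as HikariCP); the class and method names are made up:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// Toy pool illustrating the idea: pay the creation cost once, up front,
// then hand the same connections out over and over.
public class SimplePool<C> {
    private final BlockingQueue<C> idle;

    public SimplePool(int size, Supplier<C> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get()); // expensive creation happens here only
        }
    }

    public C borrow() throws InterruptedException {
        return idle.take(); // blocks if every connection is in use
    }

    public void giveBack(C conn) {
        idle.offer(conn);   // returned to the pool, not closed
    }
}
```

With JDBC the `factory` would wrap `DriverManager.getConnection(url)`, and "closing" a connection from the application's point of view becomes a `giveBack` rather than a real socket teardown.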

SSL/TLS certificate handling in Java

Java has very good support for SSL/TLS-enabled network connections. But before going into that, let's understand the hierarchy of certificate chaining. There are two kinds of certificates involved in the whole process of SSL connection establishment. Root Certificate - the certificate establishing the credibility of a certificate-issuing authority (CA) like Symantec, GeoTrust, Thawte, DigiCert, GlobalSign etc  Intermediate Certificate  - these certificates are issued to different service providers by the certificate-issuing authorities. When a client tries to connect to a server using an SSL connection, the server responds with the  Intermediate Certificate  and the response body. The client then checks the validity of the  Intermediate Certificate  by verifying that it has been issued by a trusted CA, looking it up against the  Root Certificate . Once this validation succeeds, the connection is established. It is important to note here that the clients...
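On the Java side, the trusted Root Certificates live in a trust store. A minimal sketch of building an `SSLContext` backed by the JDK's default trust store (the class name is illustrative; passing `null` to `TrustManagerFactory.init` falls back to the bundled `cacerts` file, which contains the well-known CA roots):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;
import java.security.KeyStore;

// Sketch: build an SSLContext whose trust decisions come from the
// JDK's default trust store of CA root certificates.
public class TlsContextDemo {
    static SSLContext defaultContext() throws Exception {
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init((KeyStore) null); // null = use the JDK default trust store
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);
        return ctx;
    }
}
```

To trust a custom CA instead, one would load a `KeyStore` from a file and pass that to `tmf.init(...)` in place of `null`.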

SSL TLS tidbits

SSL (Secure Socket Layer) was developed by Netscape, and version 2.0 had a public release in 1995; the first version was never released. That was followed by SSL 3.0 in 1996. TLS (Transport Layer Security) was released in 1999 as a newer version of SSL, based on SSL 3.0. Later, TLS 1.1 (in 2006) and TLS 1.2 (in 2008) were released with new improvements. The present TLS version is 1.3 (released in 2018). SSL 1.0 (not released) -> SSL 2.0 -> SSL 3.0 -> TLS 1.0 -> TLS 1.1 -> TLS 1.2 -> TLS 1.3. As TLS is the latest incarnation of the SSL standard, it's advised to use TLS over SSL. SSL had vulnerabilities like POODLE and DROWN. The certificates don't determine the protocol (SSL/TLS); it's the application server configuration that determines the protocol. Vendors issue certificates to use with both SSL and TLS, and hence certificates are not dependent on protocols. SSL and TLS differ cryptographically in the same way as the different versions of S...
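Since the protocol version is picked by configuration rather than by the certificate, a Java client or server can simply restrict the enabled versions. A hedged sketch (the class and method names are made up):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

// Sketch: restrict a context's parameters to modern TLS versions only,
// ruling out the vulnerable SSL protocols entirely.
public class ProtocolDemo {
    static SSLParameters modernOnly() throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, null, null); // default key and trust managers
        SSLParameters params = ctx.getDefaultSSLParameters();
        params.setProtocols(new String[] {"TLSv1.3", "TLSv1.2"});
        return params;
    }
}
```

The resulting `SSLParameters` can be applied to an `SSLSocket` or `SSLEngine` via `setSSLParameters(...)`, so handshakes using anything older than TLS 1.2 are refused.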

Java collection series - miscellaneous

Java Vector is a legacy class. And it is significantly faster in comparison to a list obtained through Collections.synchronizedList(). Vector has loads of legacy operations, and hence manipulations of a Vector need to be done through the List interface; otherwise you won't be able to replace the implementation at a later time. Arrays.asList() is a better choice if the list is of fixed size; any kind of size mutation of the collection results in UnsupportedOperationException. The underlying array is updated whenever the list is updated (and vice versa), but the array reference isn't retained. Collections.nCopies() is another convenient mini-implementation which can be useful in two ways - initialize a newly created list with n null values (need not be only null values) -  new ArrayList(Collections.nCopies(1000, (Type) null))  - or grow an existing list -  lovablePets.addAll(Collections.nCopies(69, "fruit bat")). Collections.singleton()/Collectio...
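The `Arrays.asList()` and `Collections.nCopies()` behaviors described above can be demonstrated directly (the class name is illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class CollectionsDemo {
    public static void main(String[] args) {
        // Fixed-size view over an array: set() works, add()/remove() throw.
        List<String> fixed = Arrays.asList("a", "b", "c");
        fixed.set(0, "z");            // fine, writes through to the array
        try {
            fixed.add("d");           // size mutation is not allowed
        } catch (UnsupportedOperationException expected) {
            // expected for a fixed-size list
        }

        // nCopies: pre-fill a new list, then grow an existing one.
        List<String> prepped = new ArrayList<>(Collections.nCopies(3, (String) null));
        prepped.addAll(Collections.nCopies(2, "fruit bat"));
        System.out.println(prepped.size()); // prints 5
    }
}
```

Note that `Collections.nCopies()` returns an immutable view, which is why it is copied into a real `ArrayList` before any further mutation.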