Why Memory-Centric Architecture Is The Future Of In-Memory Computing

Published 14 May by Steve Jones, author for Datafloq

Loading data into memory before processing it is how programs have always used RAM. One of the lessons of the past few decades is that keeping a database in memory delivers the fastest performance possible. According to a 2006 study by Aleahmad et al., in-memory databases tended to perform more efficiently with large data sets than on-disk systems.

As the corporate world moves toward Big Data and IoT as crucial parts of its IT strategy, the need for database systems that can handle vast amounts of streaming data efficiently is becoming critical. Many of these corporations have turned to in-memory computing (IMC) to meet their database processing needs.

The Development of In-Memory Computing

IMC was created in response to the need for up-to-date information from data sources to facilitate corporate decision-making. In the earliest days of corporate database architecture, the standard design paired an analytical database (OLAP) with a transactional database (OLTP): data was periodically extracted from the transactional side and run through ETL operations to make it palatable to the analytical side. IMC was designed to combine these systems into a hybrid transactional/analytical processing (HTAP) system, allowing for near real-time processing of data sets.
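To make that hand-off concrete, here is a minimal sketch in Python, using only the standard-library sqlite3 module. The orders table, the sales_by_region rollup, and the idea of an hourly run are illustrative assumptions, not details from the article:

```python
import sqlite3

# Two separate stores, as in the classic architecture:
# an OLTP database taking writes, and an OLAP database serving analysts.
oltp = sqlite3.connect(":memory:")
olap = sqlite3.connect(":memory:")

oltp.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
olap.execute("CREATE TABLE sales_by_region (region TEXT, total REAL)")

# Transactions land in the OLTP store as they happen.
oltp.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "EU", 120.0), (2, "US", 75.5), (3, "EU", 42.0)],
)

def run_etl():
    """Periodic Extract-Transform-Load: until this runs, the OLAP side is stale."""
    rows = oltp.execute(
        "SELECT region, SUM(amount) FROM orders GROUP BY region"
    ).fetchall()                      # Extract + Transform (aggregate)
    olap.execute("DELETE FROM sales_by_region")
    olap.executemany("INSERT INTO sales_by_region VALUES (?, ?)", rows)  # Load
    olap.commit()

run_etl()  # in the classic design this might run hourly or nightly
print(olap.execute("SELECT * FROM sales_by_region").fetchall())
```

An HTAP system collapses the two stores, so the aggregate query runs against the live transactional data instead of waiting for the next ETL cycle.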

In recent years, falling RAM prices combined with innovative solutions for network processing have driven a strong emergence of IMC as an architecture. As more systems are developed and open-source solutions appear, HTAP is en route to becoming a cost-effective way for companies to establish in-memory computing solutions of their own.

The Architecture of In-Memory Computing

As the ISACA Journal notes, classic in-memory computing systems require all the data they work on to be stored in RAM. New interface systems, combined with pipelining and compression, now make it viable to store most of the data in RAM while the rest remains on-disk, ready to be loaded when needed. When constructing an in-memory computing system, the architecture has to be built around the premise that memory comes first.

This “memory-centric” design encourages us to keep the most recent (or most important) data available both in-memory and on-disk to increase processing efficiency. Semiconductor Engineering mentions that research is underway to improve the processing efficiency of DRAM by processing data directly in memory instead of moving it from memory to processor and back. For now, however, systems aim to keep as much relevant data in RAM as possible while having quick access to the rest of the database stored on-disk. An additional benefit of these systems is that processing does not have to wait for data to be reloaded into RAM after a reboot.

Loading an extensive data set into memory after a reboot can take a long time, depending on how large that data set is; being able to compute using on-disk resources while the system reloads the data set makes for rapid recovery of system state.
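Here is a minimal sketch of that write-through idea, assuming a plain Python dict as the RAM tier and a SQLite file as the disk tier. Real memory-centric platforms are far more sophisticated; TieredStore and store.db are hypothetical names used for illustration:

```python
import sqlite3

class TieredStore:
    """Toy memory-centric store: a dict is the RAM tier, SQLite the disk tier.
    Writes go through to disk, so after a restart the store can answer
    queries immediately from disk while hot keys are re-promoted to RAM."""

    def __init__(self, path="store.db"):
        self.ram = {}                                   # hot tier
        self.disk = sqlite3.connect(path)               # durable tier
        self.disk.execute(
            "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")

    def put(self, key, value):
        self.ram[key] = value                           # serve future reads from RAM
        self.disk.execute(
            "INSERT OR REPLACE INTO kv VALUES (?, ?)", (key, value))
        self.disk.commit()                              # write-through: disk is never stale

    def get(self, key):
        if key in self.ram:                             # fast path: RAM hit
            return self.ram[key]
        row = self.disk.execute(
            "SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        if row is not None:
            self.ram[key] = row[0]                      # promote a cold key back into RAM
            return row[0]
        return None
```

Because every write already lives on disk, a restarted process opened on the same path can serve get() calls at once; the RAM tier simply warms back up as keys are touched, which is the rapid recovery described above.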

The Difference between Memory-Centric and Disk Caching

Based on the descriptions so far, it’s easy to mistake memory-centric architecture for just another disk-caching system. A memory-first architecture, however, provides a more scalable and flexible system that allows data to “overflow” into on-disk storage. Additionally, the system keeps important and regularly referenced data both on-disk and in-memory to speed up the system overall. Memory-centric architectures allow for more optimized system performance while letting a company limit its infrastructure costs.
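To see how overflow differs from plain cache eviction, here is a sketch extending the hypothetical TieredStore above with a least-recently-used RAM budget. The 1,000-entry cap is an arbitrary figure for illustration:

```python
from collections import OrderedDict

RAM_BUDGET = 1_000  # illustrative cap on hot-tier entries, not a real tuning value

class OverflowingStore(TieredStore):
    """Extends the TieredStore sketch: least-recently-used keys overflow
    to the disk tier when RAM is full, instead of being lost."""

    def __init__(self, path="store.db"):
        super().__init__(path)
        self.ram = OrderedDict()            # insertion order doubles as recency order

    def _touch_and_trim(self, key):
        self.ram.move_to_end(key)           # mark key as most recently used
        while len(self.ram) > RAM_BUDGET:
            self.ram.popitem(last=False)    # overflow: drop the LRU key from RAM only;
                                            # its write-through copy stays on disk

    def put(self, key, value):
        super().put(key, value)             # write-through to both tiers
        self._touch_and_trim(key)

    def get(self, key):
        value = super().get(key)            # may promote a cold key into RAM
        if key in self.ram:
            self._touch_and_trim(key)
        return value
```

Unlike a plain disk cache, nothing here is ever invalidated out of the system: the hot set lives in both tiers, and everything else remains one indexed disk read away.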

What’s in Store for In-Memory Computing?

At present, real-time processing of data to develop insights is a necessity across many facets of business, ranging from exploiting omnichannel marketing to real-time regulatory compliance. Soon, however, companies will need to address further strains on their existing database infrastructure if they intend to implement new technologies such as IoT. Even web and mobile applications add to this demand. Memory-centric architecture addresses all of these potential bottlenecks, giving companies a scalable alternative for their database management.

Using a combination of RAM and SSD tiers, a company can tune the cost of its memory-centric architecture to fit its needs and budget. The cost-effectiveness of this approach makes it an attractive option for businesses of all sizes.
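As a back-of-the-envelope illustration, that trade-off can be put into rough numbers; the per-gigabyte monthly prices below are placeholder assumptions, not market quotes:

```python
def monthly_tier_cost(hot_gb, cold_gb, ram_price=3.00, ssd_price=0.10):
    """Rough monthly cost of a two-tier layout.
    ram_price and ssd_price are illustrative $/GB/month figures;
    substitute your own vendor numbers."""
    return hot_gb * ram_price + cold_gb * ssd_price

# Keeping an entire 2 TB data set in RAM vs. a ~10% hot set in RAM:
all_in_ram = monthly_tier_cost(hot_gb=2048, cold_gb=0)
tiered     = monthly_tier_cost(hot_gb=205, cold_gb=2048)  # hot set also kept on disk
print(f"all-in-RAM: ${all_in_ram:,.2f}/mo, tiered: ${tiered:,.2f}/mo")
```

Shifting the balance between the RAM and SSD tiers is exactly the knob described above: more RAM buys speed, more SSD buys capacity per dollar.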

Article Author: Steve Jones
Digital Transformation, Big Data & Analytics, Business Architecture and SaaS industry leader focused on driving business value by helping companies transform their IT to align with their future business ambitions and leverage new approaches to information so businesses can make better decisions. A published author on Digital Transformation, Business Architecture, Big Data, AI and the future of information in business, and a regular industry presenter.

Collected at: https://datafloq.com/read/memory-centric-architecture-future-computing/6374
