In-Memory Computing Planet® Blogs and Events

IMC Planet presents in-memory computing blogs and events from around the world. Read the latest in-memory computing news here. Submit your RSS feed or post your upcoming events and help the in-memory computing community stay up to date on the latest developments.

Nov 03, 2011
Posted by In-Memory Computing Blog
Parallel execution is key to achieving sub-second response times for queries that process large sets of data. The independence of tuples within columns enables easy partitioning and therefore supports parallel processing. We leverage this fact by partitioning database tasks on large data sets into as many jobs as there are threads available on a given node. This way, maximal utilization of any supported hardware can be achieved.
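A rough sketch of this partitioning idea (not from the original post; the column data and the sum_chunk task are illustrative assumptions, and ProcessPoolExecutor merely stands in for the native threads a real engine would use): the column is split into one chunk per available hardware thread and the partial results are merged.

```python
# Minimal sketch (illustrative only): partition a column into as many chunks
# as there are hardware threads and aggregate each chunk in parallel.
# A real in-memory engine would use native threads on shared memory; here
# ProcessPoolExecutor stands in because CPython threads cannot run
# CPU-bound work in parallel.
import os
from concurrent.futures import ProcessPoolExecutor

def sum_chunk(chunk):
    """The per-job task: aggregate one partition of the column."""
    return sum(chunk)

def parallel_column_sum(column):
    jobs = os.cpu_count() or 1                   # one job per available thread
    size = (len(column) + jobs - 1) // jobs      # ceiling division -> chunk length
    chunks = [column[i:i + size] for i in range(0, len(column), size)]
    with ProcessPoolExecutor(max_workers=jobs) as pool:
        return sum(pool.map(sum_chunk, chunks))  # merge the partial results

if __name__ == "__main__":
    print(parallel_column_sum(list(range(1_000_000))))  # 499999500000
```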

Please also see our podcast on this technology concept.
 
Oct 31, 2011
Posted by In-Memory Computing Blog
To achieve the highest level of operational efficiency, the data of multiple customers can be consolidated onto a single HANA server. Such consolidation is key when HANA is provisioned in an on-demand setting, a service which SAP plans to provide in the future. As a benefit of this consolidation, multi-tenancy makes HANA accessible to smaller customers at lower cost.
Already today…
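A very small sketch of the consolidation idea (purely illustrative: the tenant-keyed in-memory store below is an assumption for this example, not HANA's actual multi-tenancy mechanism): the data of several customers lives in one store, and every access is scoped by a tenant key.

```python
# Illustrative sketch only: tenants share one in-memory store, and every
# query is scoped by a tenant key so customers never see each other's rows.
# This is not how SAP HANA implements multi-tenancy internally.
from collections import defaultdict

class ConsolidatedStore:
    def __init__(self):
        self._rows = defaultdict(list)      # tenant_id -> list of rows

    def insert(self, tenant_id, row):
        self._rows[tenant_id].append(row)

    def scan(self, tenant_id, predicate=lambda r: True):
        # A tenant's query only ever touches its own partition.
        return [r for r in self._rows[tenant_id] if predicate(r)]

store = ConsolidatedStore()
store.insert("tenant_a", {"order": 1, "amount": 100})
store.insert("tenant_b", {"order": 7, "amount": 55})
print(store.scan("tenant_a"))   # only tenant_a's data
```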
 
Oct 27, 2011
Posted by In-Memory Computing Blog
Compression is the process of reducing the amount of storage needed to represent a given set of information. Typically, a compression algorithm exploits redundancy in the available information to reduce the amount of required storage. The biggest difference between compression algorithms is the amount of time that is required to compress and decompress a certain piece, or all, of the information. More complex compression algorithms sort and perform complex analyses of the input data to achieve the highest compression ratio. For in-memory databases, compression is applied to reduce the amount of data that is transferred along the memory channels…
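To make the trade-off concrete, here is a minimal dictionary-encoding sketch, a light-weight scheme commonly used in columnar in-memory databases (the sample column and function names are illustrative): the column is stored as a small dictionary of distinct values plus integer codes, which shrinks the data that has to travel over the memory channels.

```python
# Minimal dictionary-encoding sketch (illustrative, not any product's implementation):
# store each distinct value once and replace the column by small integer codes.
def dictionary_encode(column):
    dictionary = sorted(set(column))                 # distinct values, sorted
    code_of = {value: i for i, value in enumerate(dictionary)}
    codes = [code_of[value] for value in column]     # compressed representation
    return dictionary, codes

def dictionary_decode(dictionary, codes):
    return [dictionary[c] for c in codes]            # decompression is a lookup

column = ["DE", "US", "DE", "FR", "DE", "US"]
dictionary, codes = dictionary_encode(column)
print(dictionary, codes)                 # ['DE', 'FR', 'US'] [0, 2, 0, 1, 0, 2]
assert dictionary_decode(dictionary, codes) == column
```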
 
Oct 24, 2011
Posted by In-Memory Computing Blog
In contrast to the hardware development until the early 2000s, today's processing power no longer scales in terms of processing speed but in the degree of parallelism. Today, modern system architectures provide server boards with up to eight separate CPUs, where each CPU has up to twelve separate cores. This tremendous amount of processing power should be exploited as much as possible to achieve the highest possible throughput for transactional and analytical applications. For modern enterprise applications it becomes imperative to reduce the amount of sequential work and develop the application in a way…
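The limit of this approach is captured by Amdahl's law: the sequential fraction of an application bounds the achievable speedup no matter how many cores are added. A back-of-the-envelope sketch with illustrative numbers:

```python
# Back-of-the-envelope sketch of Amdahl's law with illustrative numbers:
# speedup(n) = 1 / (s + (1 - s) / n), where s is the sequential fraction.
def amdahl_speedup(sequential_fraction, cores):
    return 1.0 / (sequential_fraction + (1.0 - sequential_fraction) / cores)

cores = 8 * 12                                   # e.g. 8 CPUs x 12 cores each
for s in (0.01, 0.10, 0.50):
    print(f"sequential={s:.0%}: speedup on {cores} cores = "
          f"{amdahl_speedup(s, cores):.1f}x")
# Even 10% sequential work caps the speedup at roughly 9x on 96 cores,
# which is why reducing sequential sections matters so much.
```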
 
Feb 09, 2011
Posted by In-Memory Computing Blog
Cloud services in combination with high-performance in-memory computing will change how enterprises work. Currently, most data is stored in silos of slow, disk-based, row-oriented database systems. In addition, transactional data is not stored in the same database as analytical data, but is replicated in batch jobs into separate data warehouses. Consequently, instant real-time analytics are not possible, and company leaders often have to make decisions in a very short time frame based on insufficient information.

This is about to change. In the last decade, hardware architectures have evolved dramatically. Multi-core architectures and the availability of large amounts of main memory at low cost are enabling new breakthroughs in the software industry. It has become possible to store the data sets of whole companies entirely in main memory, which offers performance that is orders of magnitude faster than disk. Traditional disks are one of the remaining mechanical…
 
Feb 07, 2011
Posted by In-Memory Computing Blog
In-memory data management technology in combination with highly parallel processing has a tremendous impact on business applications, for example by having all enterprise data instantly available for analytical needs. Guided by Hasso Plattner and supervised by Alexander Zeier, we, a team of researchers at the Hasso Plattner Institute, have been analyzing and evaluating since 2006 how business applications are developed and used.

I am [...] very excited about the potential that the in-memory database technology offers to my business.
 
Feb 02, 2011
Posted by In-Memory Computing Blog
Traditionally, the database market divides into transaction processing (OLTP) and analytical processing (OLAP) workloads. OLTP workloads are characterized by a mix of reads and writes of a few rows at a time, typically through a B+ tree or other index structures. Conversely, OLAP applications are characterized by bulk updates and large sequential scans that span few columns but many rows of the database, for example to compute aggregate values. Typically, these two workloads are supported by two different types of database systems: transaction processing systems and warehousing systems.
In fact, enterprise applications today are primarily focused on the day-to-day transaction processing needed to run the business, while the analytical processing necessary to understand and manage the business is added on after the fact. In contrast to this classification, single applications such as Available-To-Promise (ATP) or Demand Planning exist which cannot be exclusively referred…
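A toy contrast of the two access patterns (the data layouts below are purely illustrative and not tied to any specific product): a row layout serves the point reads and writes of OLTP, while a column layout serves the wide scans and aggregates of OLAP.

```python
# Toy illustration of the two workload patterns (not any specific product).
# OLTP: touch a few complete rows, typically located via an index on the key.
orders_rows = {
    1001: {"customer": "ACME", "amount": 250, "status": "open"},
    1002: {"customer": "Globex", "amount": 120, "status": "paid"},
}
orders_rows[1001]["status"] = "paid"           # point read + write of one row

# OLAP: scan one or two columns across many rows to compute an aggregate.
orders_columns = {
    "customer": ["ACME", "Globex", "ACME"],
    "amount":   [250, 120, 410],
    "status":   ["paid", "paid", "open"],
}
total_revenue = sum(orders_columns["amount"])  # sequential scan of one column
print(total_revenue)                           # 780
```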
 
Dec 01, 2010
Posted by In-Memory Computing Blog
In the past few years, in-memory computing has found its application in many areas. The most notable application is still its use in enterprise software systems, both analytical and transactional. However, the performance advantages of in-memory computing can also be exploited in a number of other scenarios. Today, I will describe our experience applying an in-memory computing engine to a source code search scenario.

Without a doubt, source code search is an important software engineering activity. Empirical studies report that up to 30% of development tasks are related to search [1]. In our experience, we found that 36% of search queries were structural patterns, keywords, or identifiers. Moreover, the most frequent search target during software maintenance is a statement.
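To give a flavor of what such keyword and identifier lookups involve, here is a minimal in-memory inverted index over identifiers (the file contents and the tokenizer are illustrative assumptions, not the engine discussed in the post):

```python
# Minimal in-memory inverted index over identifiers (illustrative sketch only,
# not the search engine discussed in the post).
import re
from collections import defaultdict

TOKEN = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

def build_index(files):
    """files: dict of path -> source text; returns identifier -> set of paths."""
    index = defaultdict(set)
    for path, text in files.items():
        for identifier in TOKEN.findall(text):
            index[identifier].add(path)
    return index

files = {
    "a.py": "def parse_order(order): return order.id",
    "b.py": "class OrderParser: pass",
}
index = build_index(files)
print(index["order"])   # {'a.py'} -- a keyword/identifier lookup is one dict hit
```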

 
Nov 02, 2010
Posted by In-Memory Computing Blog
In this blog entry, I want to summarize some emerging ideas about how OLTP can be performed on distributed systems by weakening consistency.
Scaling out is a common approach to achieve higher performance or higher throughput. If systems are scaled out, more servers are used to handle the workload. Using more servers often involves the use of distributed transactions and partitioning. However, distributed transactions according to ACID, which can be achieved using two-phase commit, are expensive, as they increase the latency of a transaction and weaken availability or the resilience to network partitions.
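To see where that cost comes from, here is a bare-bones two-phase commit sketch (a hypothetical participant interface with no networking or failure handling, purely for illustration): the coordinator must hear from every participant in the prepare round before it may commit, so one slow or unreachable node stalls the whole transaction.

```python
# Bare-bones two-phase commit sketch (illustrative; hypothetical participant
# interface, no networking or failure recovery). The coordinator must collect a
# vote from every participant before deciding, which is where the extra latency
# and the sensitivity to unavailable nodes come from.
class Participant:
    def __init__(self, name, will_commit=True):
        self.name, self.will_commit = name, will_commit

    def prepare(self, txn):            # phase 1: vote yes/no
        return self.will_commit

    def commit(self, txn):             # phase 2a: apply
        print(f"{self.name}: commit {txn}")

    def abort(self, txn):              # phase 2b: roll back
        print(f"{self.name}: abort {txn}")

def two_phase_commit(txn, participants):
    if all(p.prepare(txn) for p in participants):   # blocks on the slowest node
        for p in participants:
            p.commit(txn)
        return True
    for p in participants:
        p.abort(txn)
    return False

two_phase_commit("txn-42", [Participant("node-1"), Participant("node-2")])
```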

CAP Theorem and ACID 2.0

According to the CAP theorem, consistency, availability, and partition tolerance cannot be achieved at the same time. Since availability is a…
 
May 14, 2010
Posted by In-Memory Computing Blog
In our research area "In-Memory Data Management for Enterprise Applications" we propose to reunite the two systems supporting major enterprise operations: Online Transactional Processing (OLTP) and Online Analytical Processing (OLAP) systems. The two systems basically work on the same set of business data, using differently optimized data structures for their specific use cases. The latest developments in in-memory database technology and column-oriented data organization, explored for example in the HYRISE project or the "In-Memory Data Management" bachelor's project, encourage the idea of integrating the two workloads, or at least major parts of them, into one system.
In this blog post I will make the case for a new benchmark that…
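As a flavor of what such a benchmark would have to exercise, here is a tiny mixed-workload driver sketch (the operation mix and the in-memory table are invented for illustration and are not the benchmark proposed in the post): transactional inserts and analytical aggregates interleave on the same data set.

```python
# Tiny mixed-workload driver sketch (illustrative only; the operation mix and
# the in-memory "table" are invented, not the benchmark proposed in the post).
import random

table = []                                        # one shared data set

def oltp_insert(i):                               # transactional side
    table.append({"id": i, "region": random.choice("NSEW"), "amount": i % 97})

def olap_report():                                # analytical side, same data
    return sum(row["amount"] for row in table if row["region"] == "N")

random.seed(0)
for i in range(1_000):
    oltp_insert(i)
    if i % 100 == 99:                             # interleave analytics
        print(f"after {i + 1} inserts, northern revenue = {olap_report()}")
```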
 
Feb 16, 2010
Posted by In-Memory Computing Blog

A few months back, in December 2009, Intel Labs announced a new many-core research prototype called SCC. SCC stands for Single-chip Cloud Computer and integrates 48 Intel Architecture (IA) cores on a single chip, the largest number of IA cores ever put on a single CPU. Last week, on February 12th, Intel Labs held the SCC Symposium, inviting researchers to get to know this chip in more detail. The goal of the symposium was to allow researchers to have a really close look at the chip and its capabilities. The idea is that attending researchers apply for access to such a system in order to explore the possibilities of many-core computing.