GridGain Systems product manager and Apache® Ignite™ PMC Chair Denis Magda will be the featured speaker at the SF Big Analytics Meetup on Sept. 13.
In-Memory Computing Blogs and Events
How the WaveRunner API Enables Tomorrow’s SDDC Innovation, Today
Guest blog post by Chris Busse, CTO at APIvista
In my consulting work, I encourage enterprises of many sizes to use standardized APIs across their business areas. This means I’m often called upon to explain what an application programming interface is to non-technical stakeholders. Many technologists say, "APIs are like LEGO bricks," but I prefer a more complete description: "They’re a way developers can make their software available to future developers to integrate with, but without having to know today what those future developers want to do with the software." Put yet…
Intel will be hosting a live webinar with GigaSpaces on September 20 at 11:00 am EST (8:00 am PST) to discuss Bridging the Memory-Storage Gap to Accelerate Fast Data Analytics, giving you a chance to learn about evolving trends and our…
This article will walk through the steps required to get Kubernetes and Apache Ignite deployed on Amazon Web Services (AWS).
We're raffling off a limited number of conference tickets each week to the 3rd annual In-Memory Computing Summit, Oct. 24-25 at the South San Francisco Conference Center.
Are you ready to experience data like never before? Take a journey behind the scenes and travel at the speed of in-memory in the new A Data Journey Augmented Reality experience, powered by SAP HANA – coming soon to SAP TechEd Las Vegas 2017.
Watch SAP data management solutions unfold before your eyes, and experience millions of live data points being processed in…
If you're planning to implement a highly available and distributed in-memory computing architecture, then I invite you to join our free webinar tomorrow (Sept. 12) at 6:30 p.m. Pacific time (9:30 p.m. Eastern). This webinar will provide you with everything you need to know to get started.
Ignite is the in-memory computing platform that is durable, strongly consistent, and highly available, with powerful SQL, key-value, and processing APIs. Starting with the 2.1 release, Apache Ignite has become one of the very few in-memory computing systems that provides its own distributed persistence layer. Essentially, users no longer have to integrate Ignite with any third-party database (although such integration is still supported) and can use Ignite as the primary storage of their data, on disk and in memory. So, what makes Ignite's data storage unique? Let's look at a few important features provided by Ignite. You will probably notice that some of these features can also be found in other data storage systems. However, it is the combination of these features in one cohesive platform that makes Ignite stand out.
1. Durable Memory
Ignite's durable memory component treats RAM not just as a caching layer, but…
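To make the "RAM as primary storage, not just a cache" idea concrete, here is a minimal Python sketch of a write-through store: every write lands in memory and on disk, so reads are served from RAM while the data survives a restart. This is a conceptual toy only, not Ignite's actual page-based durable memory; the class and file names are made up for illustration.

```python
import json
import os
import tempfile

class DurableStore:
    """Toy write-through store: every put lands in RAM *and* on disk,
    so the in-memory copy is a primary store, not just a cache.
    (Conceptual sketch only -- not Ignite's page-based implementation.)"""

    def __init__(self, path):
        self.path = path
        self.ram = {}
        if os.path.exists(path):        # recover state after a restart
            with open(path) as f:
                self.ram = json.load(f)

    def put(self, key, value):
        self.ram[key] = value           # reads are served from RAM
        with open(self.path, "w") as f:  # persist synchronously to disk
            json.dump(self.ram, f)

    def get(self, key):
        return self.ram.get(key)

# Data written before a "restart" is still there afterwards:
path = os.path.join(tempfile.gettempdir(), "durable_demo.json")
if os.path.exists(path):
    os.remove(path)

store = DurableStore(path)
store.put("greeting", "Hello, persistence!")
del store                               # simulate a process restart

restored = DurableStore(path)
print(restored.get("greeting"))         # -> Hello, persistence!
```

In Ignite itself this trade-off is far more sophisticated (paged memory, a write-ahead log, checkpointing), but the core contract is the same: the in-memory copy and the on-disk copy describe the same primary data set.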
I trust you’ve already heard or read about SAP Digital Boardroom. Or maybe you’ve even seen one of the many demo showcases we’ve been running at customer events across the globe (always, with fantastic feedback on this amazing solution)? In case you missed it, you can easily enjoy a glimpse of this awesome experience with an overview of how Northern Gas Networks monitors operations with SAP Digital Boardroom in this video or see how it enables the SAP board to run the business in real time (you can watch a…
GridGain organizes two excellent in-memory computing meetups – one in the San Francisco Bay Area and the other nearly 3,000 miles away in New York City. Both will have meetings this month… and the NYC meetup will hold its inaugural gathering Sept. 26 in the heart of Times Square.
Decision Trees are a type of Supervised Machine Learning (that is, you specify in the training data what the input is and what the corresponding output should be) in which the data is continuously split according to a certain parameter. The tree can be explained by two entities: decision nodes and leaves. The leaves are the decisions or the final outcomes, and the decision nodes are where the data is split.
An example of a decision tree can be explained using the binary tree above. Let's say you want to predict whether a person is fit given information like their age, eating habits, and physical activity. The decision nodes here are questions like 'What's the age?', 'Does he exercise?', 'Does he eat a lot…
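A learned decision tree of this shape is just nested if/else questions. Here is a hand-rolled Python sketch of the fitness example; the thresholds and question order are invented for illustration, not learned from data.

```python
def is_fit(person):
    """Tiny hard-coded decision tree for the 'is this person fit?' example.
    Each `if` is a decision node; each `return` is a leaf."""
    if person["age"] < 30:              # decision node: What's the age?
        if person["exercises"]:         # decision node: Does he exercise?
            return True                 # leaf: fit
        return False                    # leaf: unfit
    else:
        if person["eats_a_lot"]:        # decision node: Does he eat a lot?
            return False                # leaf: unfit
        return True                     # leaf: fit

print(is_fit({"age": 25, "exercises": True,  "eats_a_lot": False}))  # True
print(is_fit({"age": 45, "exercises": False, "eats_a_lot": True}))   # False
```

A real training algorithm (e.g. CART) chooses these splits automatically by picking, at each node, the parameter and threshold that best separate the training labels.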
Spatial Analytics, Predictive Analytics, Artificial Intelligence/Machine Learning and More.
As you have seen in our previous blog postings in this series, SAP HANA is more than a database technology; it is a platform for digital transformation. In this series, Karl-Heinz Hoffman introduced you to SAP HANA (Why ISVs should consider SAP HANA), explained the benefits of the underlying core technology, and explained how ISVs can adopt SAP HANA (SAP HANA Adoption Strategies for ISVs). In my previous post, I described the benefits of…
I'm looking forward to next month's In-Memory Computing Summit North America at the South San Francisco Conference Center. To pique your interest a bit more, below is a sneak peek of the breakout session agenda for the conference, which runs from Oct. 24-25.
There are significant innovations in SAP's offering in the SQL Data Warehouse* (SQL DW) space. These innovations have taken place over the past few years, but together they provide a solution with which you can build an SQL DW in a different way than you did before. I'm going to write a series of blogs about this. (See the end of the blog for my definition of SQL DW.)
To be clear, for Data Warehousing, SAP offers two flavors: SAP Business Warehouse and SAP HANA SQL DW. You can also perfectly combine the two. However, in this blog I’m treating only the SQL DW topic.
The “Classic” SQL DW
SAP has been supporting the SQL DW for quite a while already – mostly through products acquired in the past decade. SAP products like Data Services, Information Steward, and PowerDesigner have served thousands of SAP Data Warehouse implementations for decades, and still do.
How does such a data warehouse work? Pick any supported database to run these standalone…
This article will focus on how to create an Apache Ignite cluster that can support the reading and writing of user-defined objects in a common storage format. This is particularly useful in situations where applications need to work with objects but these objects will be accessed by different programming languages and frameworks. Apache Ignite supports a binary format that is particularly useful for this task. We will look at how to achieve the goal of interoperability using some short programming examples.
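Ignite's binary marshaller handles the common storage format for you, but the underlying idea is worth seeing in miniature: if every client agrees on one well-defined binary layout, any language can read what another language wrote. Here is a self-contained Python sketch of that idea using a fixed little-endian record layout; the field names and layout are invented for illustration and are not Ignite's BinaryObject wire format.

```python
import struct

# Agreed-upon little-endian layout, readable from any language:
#   int32 id | float64 score | int32 name length | UTF-8 name bytes
HEADER = "<idi"

def encode_person(person_id, score, name):
    """Pack one record into the shared binary layout."""
    name_bytes = name.encode("utf-8")
    return struct.pack(HEADER, person_id, score, len(name_bytes)) + name_bytes

def decode_person(blob):
    """Unpack a record written by any client that follows the layout."""
    size = struct.calcsize(HEADER)
    person_id, score, name_len = struct.unpack(HEADER, blob[:size])
    name = blob[size:size + name_len].decode("utf-8")
    return person_id, score, name

blob = encode_person(7, 98.5, "Ada")
print(decode_person(blob))   # -> (7, 98.5, 'Ada')
```

In Ignite the same principle is taken much further: objects are stored as self-describing binary records with schema metadata, so Java, .NET, and C++ clients can read each other's objects, and SQL can query individual fields without deserializing the whole object.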