In-Memory Computing Blogs and Events

IMCPlanet.org presents in-memory computing blogs and events from around the world. Read the latest in-memory computing news here. Submit your RSS feed or post your upcoming events and help the in-memory computing community stay up to date on the latest developments.

Nov
27
2017
Posted by Redis Labs on Monday 27 November 2017, 12:43

We are super excited to announce the general availability of Redis Enterprise 5.0 today. The latest software package, along with new modules, is now available on our downloads page.
With the 5.0 release, Redis Enterprise delivers higher availability, scaling, and performance, as well as unmatched cost efficiency, over open source Redis.
Here’s what’s new:
Active-Active Geo-Distributed Deployments Based on CRDTs
Developing globally distributed applications can be challenging, as developers have to think about race conditions on concurrent writes across regions and complex combinations of events under geo-failovers. New Redis CRDTs (conflict-free replicated data types) simplify this task by using built-in smarts that handle conflicting writes based on the data type in use. Instead of depending on simplistic “last-writer-wins” conflict resolution, geo-distributed Redis Enterprise combines…
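The contrast with “last-writer-wins” is easiest to see with the simplest CRDT, a grow-only counter: each replica increments only its own slot, and a merge takes the per-slot maximum, so concurrent writes from different regions combine instead of overwriting each other. A minimal Python sketch of the concept (illustrative only, not Redis Enterprise’s implementation):

```python
class GCounter:
    """Grow-only counter CRDT: one slot per replica/region."""
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.slots = {}  # replica_id -> count

    def increment(self, n=1):
        # A replica only ever writes its own slot.
        self.slots[self.replica_id] = self.slots.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.slots.values())

    def merge(self, other):
        # Per-slot max is commutative, associative, and idempotent,
        # so replicas converge regardless of sync order -- no
        # "last-writer-wins" data loss.
        for rid, count in other.slots.items():
            self.slots[rid] = max(self.slots.get(rid, 0), count)

# Two regions increment concurrently, then exchange state:
us, eu = GCounter("us-east"), GCounter("eu-west")
us.increment(3); eu.increment(2)
us.merge(eu); eu.merge(us)
assert us.value() == eu.value() == 5
```

Because merging is idempotent, replaying the same sync after a geo-failover is harmless, which is exactly the property that spares developers from reasoning about cross-region race conditions.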

Nov
27
2017
Posted by MemSQL Blog on Monday 27 November 2017, 09:00

Amazon will close out an exciting year by bringing together thousands of people to connect, collaborate, and learn at AWS re:Invent, November 27 to December 1 in Las Vegas.
Whether you are a cloud beginner or an experienced user, you will learn something new at AWS re:Invent. This event is designed to educate attendees about the AWS platform, and help develop the skills to design, deploy, and operate infrastructure and applications.
MemSQL is exhibiting in the Venetian Sands Expo Hall, so stop by our booth #1200 to view a demo and talk to our subject matter experts.
This year, AWS re:Invent will offer even more breakout sessions led by AWS subject matter experts and top customers. This informative mixture of lectures, demonstrations, and guest speakers is geared towards keeping attendees informed on technical content, customer stories, and new product announcements.
Here are our suggested top breakout sessions for you to attend at the event.

Big…

Nov
27
2017
Posted by Insights into In-Memory Computing and Real-time Analytics on Monday 27 November 2017, 08:00

As we discussed on CommPro.Biz, retailers are preparing for Black Friday and Cyber Monday not only to handle scale and uptime but also to manage real-time customer interactions with the help of AI marketing technology. It isn’t just for the…
The post How Machine Learning AI is Undeniably Reinventing the Retail Industry appeared first on Insights into In-Memory Computing and Real-time Analytics.

Nov
27
2017
Posted by SAP HANA on Monday 27 November 2017, 06:00

“Retailers are finding that they need to have the insight to act in the moment…to tap existing and future opportunities of profitable growth.”
Achim Schneider
Global Head of Retail Business Unit, SAP SE

The world of retail is changing at lightning speed – with evolving consumer expectations, tightening decision gaps and ever-growing data volumes. Keeping up with the pace of change is no easy feat, and requires the help of agile technologies to power the day-to-day and innovate for tomorrow. Today’s digitally armed consumers seek seamless personalized shopping…

Nov
27
2017
Posted by IBM Systems Blog: In the Making on Monday 27 November 2017, 03:00

I’ve talked with many banking executives about how their business is changing and what they need to do to compete in this dynamic and turbulent marketplace. Maintaining trust with clients is core. The financial services sector had 65 percent more cybercrime attacks than average, based on an analysis of security incidents from 2016. In this digital age, customer expectations are incredibly high with respect to security and personalized services. These attacks can put trust at risk. Once that trust is gone and customer confidence is lost, the company’s future is in jeopardy.
Maintaining deep trust is just one element of the equation. The survival kit for today’s market requires that financial institutions also understand and cater to the empowered customer, and find ways to innovate their way to higher profits.
Can any platform address all three of these areas? Yes. And its name is IBM Z…

Nov
26
2017
Posted by the morning paper on Sunday 26 November 2017, 22:00

SVE: Distributed video processing at Facebook scale Huang et al., SOSP’17
SVE (Streaming Video Engine) is the video processing pipeline that has been in production at Facebook for the past two years. This paper gives an overview of its design and rationale. And it certainly got me thinking: suppose I needed to build a video processing pipeline, what would I do? Using one of the major cloud platforms, simple video uploading, encoding, and transcoding is almost a tutorial-level use case. Here’s A Cloud Guru describing how they built a transcoding pipeline in AWS using Elastic Transcoder, S3, and Lambda in less than a day. Or you could use the video encoding and transcoding…

Nov
23
2017
Posted by KurzweilAI » Blog on Thursday 23 November 2017, 22:00

show: Dream Big Podcast
episode title: Inventor Ray Kurzweil gazes into the future
episode: no. 59
date: November 20, 2017

In this episode no. 59 of Dream Big Podcast:
Guest Ray Kurzweil explores what it’s like to be an inventor, how he gets inspiration for his projects & ideas — plus a little on the clever super-heroine of his novel who achieves amazing things through the power of creative thinking.

letter | about the…

Nov
23
2017
Posted by the morning paper on Thursday 23 November 2017, 22:00

On the information bottleneck theory of deep learning Anonymous et al., ICLR’18 submission
Last week we looked at the Information bottleneck theory of deep learning paper from Shwartz-Ziv & Tishby (Part I, Part II). I really enjoyed that paper and the different light it shed on what’s happening inside deep neural networks. Sathiya Keerthi got in touch with me to share today’s paper, a blind submission to ICLR’18, in which the authors conduct a critical analysis of some of the information bottleneck theory findings. It’s an important update pointing out some of the limitations of the approach. Sathiya gave a recent talk summarising…

Nov
22
2017
Posted by the morning paper on Wednesday 22 November 2017, 22:00

KV-Direct: High-performance in-memory key-value store with programmable NIC Li et al., SOSP’17
We’ve seen some pretty impressive in-memory datastores in past editions of The Morning Paper, including FaRM, RAMCloud, and DrTM. But nothing that compares with KV-Direct:

With 10 programmable NIC cards in a commodity server, we achieve 1.22 billion KV operations per second, which is almost an order-of-magnitude improvement over existing systems, setting a new milestone for a general-purpose in-memory key-value store.

Check out the bottom line in this comparison table from the evaluation:

Nov
22
2017
Posted by Xoriant Blog on Wednesday 22 November 2017, 21:44

Blockchain has recently become a buzzword. It’s known as a revolutionary technology for performing transactions directly between parties without the involvement of an intermediary.
Before diving into the nitty-gritty, let’s understand why blockchain matters with the help of an example. The traditional electronic system relies entirely on intermediaries for a transaction to be carried out. For instance, participants in a business network entrust a bank (the middleman) with transacting payments between them [Figure (a)].
Figure (a)
The problem with such a setup is that if at any point the bank is hacked, the dependent participants’ records become ambiguous and inconsistent. The participants would have to keep faith in the…
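The tamper-evidence that replaces trust in the middleman comes from hash-chaining: each block stores the hash of the previous block, so altering any past record invalidates every later link. A minimal Python sketch of that chaining idea (illustrative only; real blockchains add consensus, signatures, and proof-of-work on top):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON form so the digest is deterministic.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    # Each new block commits to the hash of the block before it.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})
    return chain

def is_valid(chain):
    # Every block must reference the hash of its predecessor.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 10}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 4}])
assert is_valid(chain)

# Tampering with an old record breaks every later link:
chain[0]["transactions"][0]["amount"] = 1000
assert not is_valid(chain)
```

Every participant can run this validation independently, which is why no single hacked intermediary can silently rewrite history.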

Nov
22
2017
Posted by SAP HANA on Wednesday 22 November 2017, 04:58

You don’t create a Data Warehouse (DW) through a wizard. And because you have so much freedom to build your DW in one way or another, you need a design. The question is: how, and where, do you make it? In this blog post I make the argument for Enterprise Architect Designer, the tool that fits agile DW designs.
Let’s take a step back, though: in the first blog post I outlined the large changes SAP went through to deliver the SAP HANA SQL Data Warehouse. Now it’s time to introduce the tools to design and build the SAP HANA SQL DW. To guide you through it, the first posts of this blog series follow the “key steps” a data warehouse practitioner takes to build a DW: Design, Develop, Deploy, and Run.

Nov
22
2017
Posted by IBM Systems Blog: In the Making on Wednesday 22 November 2017, 04:00

Your transaction data is a valuable corporate asset. Savvy leaders know that in today’s data-driven world, your organization must be able to fully utilize all meaningful data. To succeed, IT capabilities should help you unlock the inherent value of your data and thereby gain a market advantage. But first, there are important strategic and operational priorities to consider.
With the ongoing rise of digital business transformation projects, the modern organization already has massive amounts of stored data – and it’s growing rapidly. Here’s the key challenge to overcome: how to store all the most important data in a scalable, efficient way so your talented team can get to the insightful nuggets that provide a competitive edge.
Therefore, CIOs and CTOs must be ready to deploy adaptable hybrid IT infrastructure so that they can improve agility, performance, reliability and cost efficiency. As a result, their organizations can harness the benefits of IT delivered services…

Nov
21
2017
Posted by the morning paper on Tuesday 21 November 2017, 22:00

Canopy: an end-to-end performance tracing and analysis system Kaldor et al., SOSP’17
In 2014, Facebook published their work on ‘The Mystery Machine,’ describing an approach to end-to-end performance tracing and analysis when you can’t assume a perfectly instrumented homogeneous environment. Three years on, and a new system, Canopy, has risen to take its place. Whereas the Mystery Machine took 2 hours to compute a model from 1.3M traces, Canopy can scale well beyond this.

Canopy has been deployed in production at Facebook for the past 2 years, where it generates and processes 1.3 billion traces per day spanning end-user devices, web servers, and backend services, and backs 129 performance datasets ranging from high-…

Nov
21
2017
Posted by GridGain Systems Blog on Tuesday 21 November 2017, 14:44

In the previous article (part 1, an overview) I discussed why you need distributed data structures (DDS) and walked through several of the options offered by the Apache Ignite distributed cache. Today I want to cover the implementation details of specific distributed data structures, along with a short primer on distributed caches. To begin with, at least in the case of Apache Ignite, distributed data structures are not implemented from scratch; rather, they are built as a layer on top of a distributed cache.
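The “layer on top of a cache” idea can be sketched in a few lines: a distributed counter, for example, needs nothing from the underlying cache beyond get and compare-and-swap. The sketch below is a hypothetical Python illustration of that layering (the cache is a toy stand-in, not Ignite’s actual API or implementation):

```python
import threading

class Cache:
    """Toy stand-in for a distributed cache: a dict guarded by a lock,
    exposing the compare-and-swap primitive such caches provide."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            return self._data.get(key)

    def compare_and_set(self, key, expected, new):
        # Atomically update the entry only if it still holds `expected`.
        with self._lock:
            if self._data.get(key) == expected:
                self._data[key] = new
                return True
            return False

class AtomicLong:
    """A counter layered over the cache: its whole state is one cache
    entry, and every update is an optimistic CAS retry loop."""
    def __init__(self, cache, name, initial=0):
        self.cache = cache
        self.key = f"ds-atomiclong-{name}"  # hypothetical key scheme
        self.cache.compare_and_set(self.key, None, initial)

    def get(self):
        return self.cache.get(self.key)

    def increment_and_get(self):
        while True:  # retry until our CAS wins the race
            cur = self.cache.get(self.key)
            if self.cache.compare_and_set(self.key, cur, cur + 1):
                return cur + 1
```

The point of the layering is that the data structure inherits the cache’s replication, partitioning, and failover for free; only the retry loop is new.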

Nov
21
2017
Posted by GridGain Systems Blog on Tuesday 21 November 2017, 14:18