Shell Getting More Clarity from Analytics
By Steve Rosenbush: Royal Dutch Shell plc is finishing a trial of next-generation database technology and plans to begin deploying the software throughout its business this year, a sign that advanced computing is moving ever deeper into the corporate mainstream.
Shell is in the midst of a trial of SAP AG’s HANA database technology, and there will be “first deployments later this year,” Johan Krebbers, Shell group IT architect, told CIO Journal.
HANA is an “in-memory” database, meaning it stores information in a computer’s main memory instead of on disk. That cuts response times when users run a query, because the database can execute the task with fewer steps and doesn’t have to shuttle data between memory and disk. Main memory can also be read randomly at essentially uniform speed, whereas a disk drive performs best when data is read sequentially, so the scattered reads typical of analytical queries are much slower on disk.
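To make that difference concrete, here is a minimal Python sketch, not Shell’s or SAP’s actual code, that times scattered lookups against an in-memory dictionary and against a disk-backed SQLite file standing in for a conventional database. Absolute timings vary by machine, and SQLite caches pages in memory, so the measured gap understates true disk-seek costs; the relative difference is the point.

```python
# A minimal, illustrative sketch (not Shell's or SAP's actual code): scattered
# lookups against an in-memory dict vs. a disk-backed SQLite file standing in
# for a conventional database.
import os
import sqlite3
import tempfile
import time

N = 100_000

# In-memory store: a plain dict keyed by record id.
table = {i: f"record-{i}" for i in range(N)}

# Disk-backed store: the same records in a SQLite file.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
db = sqlite3.connect(path)
db.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
db.executemany("INSERT INTO t VALUES (?, ?)", table.items())
db.commit()

start = time.perf_counter()
for i in range(0, N, 97):                 # scattered, random-ish key pattern
    _ = table[i]
mem_s = time.perf_counter() - start

start = time.perf_counter()
for i in range(0, N, 97):
    db.execute("SELECT val FROM t WHERE id = ?", (i,)).fetchone()
disk_s = time.perf_counter() - start

print(f"in-memory: {mem_s:.4f}s   disk-backed: {disk_s:.4f}s")
db.close()
```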
HANA will be used in both main parts of Shell’s business: the exploration and drilling side, known as upstream, and the manufacturing and customer-facing side, known as downstream, according to Krebbers. “SAP HANA [is] to be used in Shell to replace the existing SAP data warehouses in Upstream, Downstream, etc,” Krebbers said in an email.
Given Shell’s size (it is the largest company in the world by revenue), such a broad rollout is bound to be one of the largest deployments of HANA to date. As the Wall Street Journal reported earlier this month, more and more large companies are turning to alternative databases because conventional databases weren’t designed for the era of Big Data.
Five years ago, Shell had about five petabytes of data, according to Krebbers. Now it has 40 to 60 petabytes, and that volume of information is expected to grow by “multiple factors,” he said.
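Those figures imply steep compound growth. A quick calculation, pure arithmetic on the numbers Krebbers cites rather than anything from Shell, puts the implied annual growth rate at roughly 50 to 65 percent:

```python
# Implied compound annual growth rate: ~5 PB five years ago, 40-60 PB today.
# This is back-of-envelope arithmetic on the quoted figures, not Shell data.
start_pb, years = 5, 5
for now_pb in (40, 60):
    cagr = (now_pb / start_pb) ** (1 / years) - 1
    print(f"{start_pb} PB -> {now_pb} PB in {years} years ≈ {cagr:.0%} per year")
```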
“If you talk about in-memory, we are looking at how we can get split-second response times to large data sources,” Krebbers said in a phone call. “Today, it might take hours to get results.” The company has “mainly seen an increase in the use of the technology in Shell surface and subsurface” work, he said.
The volume of data used in the exploration and drilling business has grown exponentially in recent years, according to Tom Riddle, manager of imaging and interpretation at Shell. “The data set just in a couple of years has grown 10-fold,” Riddle told CIO Journal in a phone call. He said the volume of processed data in a geologic survey has grown over the last 20 years to 100 gigabits from 10 gigabits. The raw, unprocessed data takes up 1,000 times as much space, he said.
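Taken at face value, those figures put each raw survey in the tens of terabytes. The arithmetic below is purely illustrative, using the units as quoted:

```python
# Scaling Riddle's quoted figures: a processed survey of 100 gigabits with raw
# data 1,000 times larger. Illustrative arithmetic only, not Shell data.
processed_gbit = 100
raw_gbit = processed_gbit * 1_000     # 100,000 gigabits of raw data
raw_tb = raw_gbit / 8 / 1_000         # gigabits -> gigabytes -> terabytes
print(f"raw data per survey ≈ {raw_tb:.1f} TB")   # ≈ 12.5 TB
```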
The ability to collect more data from newer, cheaper sensors that can be deployed en masse, and to analyze it, makes a huge difference in Shell’s business. “What we have found is that the more data we can get, the more clarity we can get in the image,” Riddle said.
That data has helped the company locate oil and gas reserves in areas that were once inaccessible or thought to be depleted. That is because oil explorers like Shell have become better at collecting both low-frequency seismic data, which captures the shape of large features, and high-frequency data, which captures finer detail.
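The idea is easiest to see on a synthetic example. The Python sketch below builds a toy one-dimensional “trace” from a low-frequency and a high-frequency component plus noise, then separates the two bands with a Fourier transform; the 20 Hz cutoff and 500 Hz sample rate are arbitrary choices for illustration, not seismic-processing parameters from Shell.

```python
# Toy version of the low/high frequency split Riddle describes, on a synthetic
# 1-D "trace" (not real seismic data or Shell's processing code).
import numpy as np

fs = 500.0                                  # sample rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
low = np.sin(2 * np.pi * 5 * t)             # 5 Hz: broad structure
high = 0.3 * np.sin(2 * np.pi * 60 * t)     # 60 Hz: fine detail
trace = low + high + 0.1 * np.random.randn(t.size)

spectrum = np.fft.rfft(trace)
freqs = np.fft.rfftfreq(t.size, 1 / fs)

low_band = np.fft.irfft(np.where(freqs < 20, spectrum, 0), n=t.size)
high_band = np.fft.irfft(np.where(freqs >= 20, spectrum, 0), n=t.size)

# Each band recovers its component: low_band tracks the broad structure,
# high_band the fine detail.
print(np.corrcoef(low_band, low)[0, 1], np.corrcoef(high_band, high)[0, 1])
```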
“Bigger data sets improve capabilities at both extremes of the spectrum and reduce the amount of interference from noise. They allow you to see things lying at the bottom better than before. It also allows you to see how things change horizontally a lot more accurately, which is just as important,” Riddle said.
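The noise-reduction claim follows from basic statistics: averaging (“stacking”) many recordings of the same signal cancels random noise roughly in proportion to the square root of the number of recordings. A toy illustration, not Shell’s processing code:

```python
# Why more recordings reduce noise: averaging ("stacking") N noisy traces of
# the same signal cuts random noise by ~sqrt(N). Toy illustration only.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 3 * t)

for n_traces in (1, 10, 100):
    traces = signal + rng.normal(0, 1.0, size=(n_traces, t.size))
    stacked = traces.mean(axis=0)
    noise_rms = np.sqrt(np.mean((stacked - signal) ** 2))
    print(f"{n_traces:>3} traces -> residual noise RMS ≈ {noise_rms:.3f}")
```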
The technology is opening up areas of Brazil and the Gulf of Mexico for exploration and recovery, according to Riddle. “I think we thought the Gulf of Mexico was dead several times, and then these advances in data came along. It was just amazing. It breathed new life into it, absolutely,” he said.