
Big Data Projects Struggling To Be Profitable

By Rob Starr, Big4.com Content Manager

New joint research from Capgemini and Informatica finds that fewer than one-third of big data projects are currently profitable, and that most organizations still have significant work to do to make the most of their investment.

Ownership of big data projects is also gradually shifting away from IT to other owners across the C-suite, from COOs to CFOs. Project ownership also turned out to be a key indicator of profitability: COOs are more than twice as likely as CIOs to be running profitable Big Data initiatives. Steve Jones, Global SVP of Big Data Practice at Capgemini, answered some of our questions.

Why are fewer than one-third of big data projects profitable?

There are two core reasons: the first is the timescale for return on investment, and the second is the technology-driven nature of some initiatives. The first isn't a cause for concern; it simply means the level of investment made isn't yet covered by the anticipated returns. This is often the case where the firm has made a strategic business decision to move towards Big Data and is therefore investing in the governance and operational changes that will ultimately deliver the value but that, early on, are pure cost.

The second group is more worrying: these are programs conceived by IT or by technology 'fashionistas' in the business who are more interested in the technology architecture than the business outcome. These programs are unlikely to ever demonstrate an ROI. What is noticeable in this report, and in others we have done, is that these sorts of programs often complain about a lack of business engagement or a lack of clear goals.

What are the trends with the ownership of big data projects?

Simply put: if the project shows business value, the business becomes engaged; the more the business engages and is able to realize the value of analytics within operational processes, the more it becomes a core part of business work, and the business directly takes over. What is interesting is that we are now seeing something analogous to the explosion of SaaS, where business leaders drove adoption outside of IT. Where IT departments are too slow, or too fixated on data warehouses, we are seeing 'rogue IT' projects leveraging cloud and "as a Service" approaches.

How and why is big data becoming more business critical?

There are two main reasons. The first is to do with Big AND Fast Data: the ability to react and embed change into operational processes (rather than simply having a report) is having a significant impact. Allied to this, the goal of data science and analytics isn't to reproduce the reports of yesterday but to produce models that forecast the future; the ability to guide future decisions changes the perception of information.

The second reason is simple: Excel.

Traditional Data Warehouses are classic IT projects: big, centralized, and everyone must comply. Excel, on the other hand, is small, agile, and focused on the specific problems of an individual or department. Because Big Data can bring together all of the information and then present multiple views of it, it answers the challenge posed by Excel, and Excel remains the number one BI tool for a reason.
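To make the "multiple views" point concrete, here is a minimal sketch (not from the report, with hypothetical data and column names): a single consolidated dataset served up as different views for different departments, the kind of per-team flexibility Excel users expect.

```python
# A minimal sketch (not from the report): one consolidated dataset,
# presented as different views for different departments.
import pandas as pd

# Hypothetical consolidated sales data (a stand-in for a shared pool of information).
sales = pd.DataFrame({
    "region":  ["EMEA", "EMEA", "US", "US"],
    "product": ["A", "B", "A", "B"],
    "revenue": [120.0, 80.0, 200.0, 150.0],
    "units":   [12, 10, 18, 15],
})

# Finance view: revenue by region.
finance_view = sales.groupby("region")["revenue"].sum()

# Operations view: units shipped by product.
ops_view = sales.groupby("product")["units"].sum()

print(finance_view)
print(ops_view)
```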

What are the top challenges in operationalizing big data?

The main challenges in operationalizing big data are industrializing data ingestion and distillation, and ensuring that meta-data, particularly business meta-data, is captured. It's very easy to build a quick Java application that solves a problem for one area, but to truly operationalize big data, the acquisition of information needs to be industrialized and commoditized.
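As an illustration only (the function and catalog names here are hypothetical, not Capgemini's or Informatica's tooling), an "industrialized" ingestion path might route every load through a single entry point that records technical meta-data as a side effect, instead of a one-off script per source:

```python
# Hypothetical sketch of industrialized ingestion: one entry point for all
# loads, capturing technical meta-data (lineage) automatically on each run.
import csv
import hashlib
from datetime import datetime, timezone

catalog = []  # stand-in for a real meta-data store

def ingest(path: str, source_system: str) -> list[dict]:
    """Load a CSV and record what was loaded, from where, and when."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    catalog.append({
        "path": path,
        "source_system": source_system,
        "row_count": len(rows),
        "columns": list(rows[0].keys()) if rows else [],
        "loaded_at": datetime.now(timezone.utc).isoformat(),
        "content_hash": hashlib.sha256(str(rows).encode()).hexdigest(),
    })
    return rows
```

The point of the sketch is the design choice: because every source flows through `ingest`, lineage comes for free rather than depending on each team remembering to document its loads.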

Business meta-data is another core challenge. IT departments are used to technical meta-data – the schema – but are not used to capturing the actual business purpose, or the business service that the information is generated from and consumed by. Historically we've only had one view – the schema – but now we need different schemas for different views.
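One way to picture the gap (a hypothetical sketch; the field names are illustrative, not a standard) is a dataset descriptor that holds the business context next to the technical schema IT already captures:

```python
# Hypothetical descriptor: technical schema plus business meta-data.
from dataclasses import dataclass, field

@dataclass
class DatasetDescriptor:
    name: str
    technical_schema: dict[str, str]   # column -> type: what IT usually captures
    producing_service: str = "unknown"  # the business service that generates the data
    business_purpose: str = "unknown"   # why the data exists at all
    consumers: list[str] = field(default_factory=list)  # who uses it, and for what view

orders = DatasetDescriptor(
    name="orders",
    technical_schema={"order_id": "string", "amount": "decimal", "placed_at": "timestamp"},
    producing_service="online checkout",
    business_purpose="record customer purchases for fulfilment and revenue reporting",
    consumers=["finance (revenue view)", "logistics (shipment view)"],
)
```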

The final big challenge is cultural. IT departments are used to BI being the last bastion of waterfall, all constrained by a single-schema mentality. Data governance, change management and the other aspects all come back to that single 'enterprise' schema. In the new world it's all about agile delivery, collaborative governance and connecting information rather than dictating the view. This is a large cultural change for many IT departments.

Are there differences between the U.S. and Europe?

The US kicked off the Big Data revolution, and did so as it often does in these spaces: as a technology-driven movement out of Silicon Valley. What is really interesting is that Europe lagged hugely until late 2015, at which point there was a dramatic shift. I wrote recently that Europe may well have a second-mover advantage. The US has learnt a huge amount about the technology and already has huge numbers of clusters in operation; the challenge is how to evolve that into something business-transformative, since in places it effectively now has a legacy Big Data problem. Europe, meanwhile, sat back and missed the early gains, but is now (generally) looking at Big Data as a strategic imperative, which means that in some places it is able to leapfrog a generation of learnings.

What are some of the other takeaways from the report?

I think a big takeaway is that Big Data has grown up. If you look back only five years, almost no one was talking about data governance and Big Data; SQL was a dirty word in the Big Data community, and it was all about how NoSQL was going to take over the world. Today we see the importance of data security, of governance, and of knowing what your data actually means and how you connect it.

For me the biggest takeaway is that Big Data has gone from being a technology silo to being the new managed substrate for information. When firms are talking about the challenges of data security and you have business leaders directly taking control of programs, it’s clear that Big Data is no longer a pet project.

What needs to be done?

Beyond industrializing ingestion and distillation, what really needs to be done is to focus on what matters. Don't constrain programs by applying old-school data-quality thinking; instead, look at how you can rapidly connect information and make bad information visible, flagging to the business that it is bad. All too often we spend months cleaning data that today is used straight from an Excel spreadsheet without any lineage or quality controls. If we can automate that spreadsheet, we've already added value; if the business then needs the quality improved, we can do so from there.
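A minimal sketch of that idea, assuming hypothetical record fields (this is an illustration of "make bad information visible", not the report's method): attach quality flags to records instead of dropping them or blocking the pipeline while they are cleansed.

```python
# Sketch: flag bad records and keep them visible, rather than silently
# fixing or discarding them before the business ever sees the data.
def flag_quality(record: dict) -> dict:
    """Attach quality flags instead of dropping or mutating the record."""
    issues = []
    if not record.get("customer_id"):
        issues.append("missing customer_id")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        issues.append("invalid amount")
    return {**record, "quality_issues": issues, "is_clean": not issues}

records = [
    {"customer_id": "C1", "amount": 99.5},
    {"customer_id": "", "amount": -4},  # bad, but still visible to the business
]
print([flag_quality(r) for r in records])
```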

Above all, what this report and a few other recent reports show is the importance of recognizing that as Big Data becomes the new information substrate for a business, you need to start from day one with that mindset. That is also why some projects haven't yet delivered value: they are building robust foundations for future growth. This isn't an isolated technology project anymore; it's a new way of working with information.
