Big Data Implications


Big data and big data analytics pose a series of implications and challenges.  Organizations that seek to become analytical competitors must build an analytics culture of well-trained employees using the right enabling technologies.  At the same time, these organizations face the challenge of maintaining consumer privacy while they collect and use sensitive information.

In parts one and two of this four-part series on big data, we looked at what makes data ‘big’ and how it can benefit organizations that apply the right analytics.  In part three, we will look at the cultural, technological, and ethical implications of big data and advanced analytics.

Culture and Skills

Without the right culture and skillsets in place, the ability to derive value from big data is highly diminished.  Analytical competitors have leaders who believe deeply in the benefits of analytics, have established entrepreneurial test and learn cultures, have pushed the envelope in terms of technology innovation, and use analytics to drive their strategy and pursue new endeavors and products.  Developing the right skillsets is key because most companies do not have enough people who are proficient in analytics.  So what to do?

Well, a great approach is to partner with a technology service provider who can offer mentorship, consisting of end-user training and ongoing guidance, to flatten the so-called ‘learning curve’ while delivering value at the same time.  If you don’t have the capacity or capability in-house, you need to look outside the organization and partner with those who can help you.  Otherwise, you are going to be left in the dust.

Technology

A main technological implication is that conventional data storage systems cannot accommodate the volume, velocity, and variety of big data; organizations need a data infrastructure that can not only store big data but also support fast queries and analysis.

An emerging open source project called Hadoop has been developed to store very large and unstructured data sets that a traditional RDBMS cannot handle.  Hadoop uses a distributed file system (HDFS) to split large data sets and spread them across a cluster of servers.  A software framework called MapReduce, invented by Google to index the Web, dispatches tasks evenly across the cluster so that each server carries out its task directly on the data on its local disk and returns the result.  Hadoop is relatively cheap, scales well, and allows high-speed access and processing against very large and complex data sets.  However, it is new, complex, and mainly used by the big Internet players: Gartner’s 2014 study on BI and analytics spending intentions identified it as a future-state technology that less than 5% of organizations actively use.
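The map/shuffle/reduce pattern described above can be sketched in a few lines.  The following is a minimal single-process simulation of a MapReduce word count (the canonical introductory example), not actual Hadoop code; in a real cluster the map and reduce functions would run in parallel across many servers, each reading from its local disk, and the shuffle step would move data between nodes:

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit (key, value) pairs -- here, (word, 1) for each word.
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(key, values):
    # Reduce: combine all values for one key -- here, sum the counts.
    return key, sum(values)

def mapreduce(documents):
    # Shuffle: group mapped pairs by key (in Hadoop this step moves
    # intermediate data between cluster nodes).
    groups = defaultdict(list)
    for doc in documents:
        for key, value in map_phase(doc):
            groups[key].append(value)
    # Reduce each group independently; in Hadoop these run in parallel.
    return dict(reduce_phase(k, v) for k, v in groups.items())

docs = ["big data needs big infrastructure", "big analytics"]
print(mapreduce(docs))
# → {'big': 3, 'data': 1, 'needs': 1, 'infrastructure': 1, 'analytics': 1}
```

Because each map call touches only one document and each reduce call touches only one key, the framework can distribute both phases across as many machines as the data requires, which is what lets Hadoop scale to data sets a single RDBMS server cannot hold.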

What’s important to understand is that as big data pushes the boundaries of technology, technology needs to adapt and so do you.  You may not be an Amazon or Walmart right now, but being aware of the data that you need to be successful requires that your overall data and analytics infrastructure is up to the task.

Privacy and Ethics

Without the right safeguards in place, the potential of big data may be thwarted by consumer backlash.  One only needs to look at recent reports on how the National Security Agency (NSA) in the United States allegedly tracked individuals by accessing social media data.  It is no surprise to see the likes of Mark Zuckerberg lobbying the U.S. government to implement better policies to protect individuals.  After all, Facebook is betting much of its future on gathering as much data on individuals as possible, so the prospect of users abandoning social media would be catastrophic.  Government policies are a good start, but companies need to practice what they preach as well.  ‘Fate, it seems, is not without a sense of irony’, one could say.

A leading authority on the topic of big data privacy is the Stanford University School of Law.  One paper suggests that big data ethics should focus on the appropriate use of big data and on transparency about how ‘machine learning’ is used to predict outcomes.  The challenge here is that these algorithms are competitive differentiators for their respective companies; opening up these ‘black boxes’ would essentially eliminate that advantage.  In other words, ‘not gonna happen’.  Ultimately, analytics should be used for the betterment of the consumer experience.  The minute you use big data for your benefit and not the consumer’s, they will leave.  This is the fine line upon which purveyors of analytics must tread.

So how does this all come together into an analytics strategy for the supply chain?  Gartner has found that the supply chain industry has been relatively slow in adopting big data and big data analytics.  In the final segment of this series, we will take a better look at how the supply chain industry can take advantage of this huge opportunity.
