Over the past several decades, enterprise technology has consistently followed a trail that’s been blazed by top consumer tech brands. This has certainly been true of delivery models – first there were software CDs, then the cloud, and now all kinds of mobile apps. In tandem with this shift, the way we build applications has changed and we’re increasingly learning the benefits of taking a mobile-first approach to software development.
Case in point: Facebook, which of course began as a desktop app, struggled to keep up with emerging mobile-first experiences like Instagram and WhatsApp, and ended up acquiring them for billions of dollars to play catch up.
The Predictive-First Revolution
Recent events like the acquisition of RelateIQ by Salesforce demonstrate that we’re at the beginning of another shift toward a new age of predictive-first applications. The value of data science and predictive analytics has been proven again and again in the consumer landscape by products like Siri, Waze and Pandora.
Big consumer brands are going even deeper, investing in artificial intelligence (AI) models such as “deep learning.” Earlier this year, Google spent $400 million to snap up AI company DeepMind, and just a few weeks ago, Twitter bought another sophisticated machine-learning startup called MadBits. Even Microsoft is jumping on the bandwagon, with claims that its “Project Adam” network is faster than the leading AI system, Google Brain, and that its Cortana virtual personal assistant is smarter than Apple’s Siri.
The battle for the best data science is clearly underway. Expect even more data-intelligent applications to emerge beyond the ones you use every day like Google web search. In fact, this shift is long overdue for enterprise software.
Predictive-first developers are well positioned to overtake the incumbents, because predictive apps help people work smarter and cut their workloads far more dramatically than last decade's basic data-bookkeeping approaches to customer relationship management, enterprise resource planning and human resources systems.
Look at how Bluenose is using predictive analytics to help companies engage at-risk customers and identify drivers of churn, how Stripe’s payments solution is leveraging machine learning to detect fraud, or how Gild is mining big data to help companies identify the best talent.
These products are revolutionizing how companies operate by using machine learning and predictive modeling techniques to factor in thousands of signals about whatever problem a business is trying to solve, and feeding that insight directly into day-to-day decision workflows. But predictive technologies aren’t the kind of tools you can just add later. Developers can’t bolt predictive onto CRM, marketing automation, applicant tracking, or payroll platforms after the fact. You need to think predictive from day one to fully reap the benefits.
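To make the idea concrete, here's a minimal sketch (plain Python, with entirely hypothetical signal names and weights, not any vendor's actual model) of how a handful of signals can be combined into a risk score and fed straight into a day-to-day decision:

```python
# Illustrative only: a toy churn scorer whose output routes directly
# into a workflow action. Signal names and weights are made up.
import math

def churn_score(signals):
    """Combine weighted signals into a 0-1 risk score via a logistic squash."""
    weights = {
        "days_since_last_login": 0.08,
        "support_tickets_open": 0.6,
        "seats_unused_pct": 0.03,
    }
    z = sum(weights[k] * signals.get(k, 0.0) for k in weights) - 3.0
    return 1.0 / (1.0 + math.exp(-z))

def next_action(signals):
    """Feed the prediction straight into the day-to-day decision workflow."""
    score = churn_score(signals)
    if score > 0.7:
        return "escalate_to_account_manager"
    if score > 0.4:
        return "send_reengagement_email"
    return "no_action"

at_risk = {"days_since_last_login": 45, "support_tickets_open": 3, "seats_unused_pct": 60}
print(next_action(at_risk))  # high-risk account gets routed to a human
```

The point is the shape of the system, not the arithmetic: the score never surfaces as a dashboard number the user has to interpret; it directly selects the next step in the workflow.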
By focusing on predictive as your key product differentiator, you can more easily prioritize a minimum viable product that is intuitive and powerful. When you cut to the core of the most critical problems your product is trying to solve with data, you can design super simple flows that leverage the data in the right ways. Then you’ll have a leg up and can avoid wasting time on features that unnecessarily require your users to set up complex configurations and preferences.
Focus is also the reason many desktop apps are going mobile-first: rather than building every bell and whistle a big screen can accommodate and then facing hard trade-offs to carve out a differentiated mobile version later, they design for the constrained experience from the start.
Pandora did this well by bringing smart recommendations into its product early on, which simplified the experience dramatically. Instead of building a music experience that had all the bells and whistles of iTunes, they focused on suggesting and predicting the next best song for the user. That focus gave them unique positioning so they could stand out from a marketing perspective. In addition, it meant they could develop world class technology and make their recommendations top notch versus stretching their technical assets across too many “me-too” features.
Simplification is particularly important for predictive companies, because getting tangled up in too many of your customers’ processes and data sources behind the enterprise firewall all at once often lands you in the consulting trap. That’s why the majority of business intelligence vendor revenues actually come from support, consulting and services – i.e. customizing models and data extraction, transformation and loading scripts for each customer.
Starting with predictive from the get-go is the best way to become an expert on the data you're operating on. Then you can truly understand things like how to track or snapshot all your fields properly, where data can go in and out, and all the "hackery" required to work with third-party systems like Salesforce and Marketo (you'll often need to develop workarounds that even people inside those companies may not be aware of).
When you’ve invested in learning about the particular sources of data, it’s much easier to focus on the things that will help you break through – like designing your system in a way that’s amenable for predictive modeling, building the right model features, and creating a scalable user experience.
Also keep in mind that predictive-first means saving all the data you can, just as Google does. The search giant has gone to great lengths to build the most advanced data infrastructure and data centers on the planet in order to maintain and protect its data. Google treats data like gold waiting to be mined, and it's time everyone else followed suit. Predictive applications require a well-thought-out architecture that captures a broad swath of signals for training predictive models.
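A "save everything" write path can be sketched as an append-only change log, where every field change is recorded rather than overwritten, so full history stays available as training signal. This is a hypothetical illustration, not a production design:

```python
# Hypothetical sketch: field changes append to an immutable log instead
# of overwriting the record, preserving history for model training.
import time

class ChangeLog:
    def __init__(self):
        self.events = []  # append-only; nothing is ever deleted

    def record(self, entity_id, field, old, new):
        self.events.append({
            "ts": time.time(),
            "entity": entity_id,
            "field": field,
            "old": old,
            "new": new,
        })

    def history(self, entity_id, field):
        """Reconstruct how one field evolved: a feature source for models."""
        return [e for e in self.events
                if e["entity"] == entity_id and e["field"] == field]

log = ChangeLog()
log.record("acct-1", "plan", None, "trial")
log.record("acct-1", "plan", "trial", "pro")
print(len(log.history("acct-1", "plan")))  # both transitions retained
```

A system built this way can later answer questions (how long did the account sit on trial before converting?) that a record-overwriting CRM has already destroyed the evidence for.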
Unfortunately, today’s enterprise software is rooted in legacy approaches from back when disk space was expensive and data analytics (let alone predictive analytics) was in its infancy. I imagine if CRM systems were built from the ground up today — with a predictive-first mindset — their design would be very different, they would be more aggressive in data snapshotting and collection (for example, turning history tracking on for all fields by default), and their workflows and trigger functions would be based on predictions as opposed to manual, fragile, non-adaptive rules.
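The contrast between a hand-written rule and a prediction-based trigger might look like the following sketch; the scoring function is a hypothetical stand-in for a trained model, with made-up weights:

```python
# Hypothetical contrast: a brittle manual rule versus a prediction-driven
# trigger. model_score is a stand-in for a trained model.

def rule_based_trigger(lead):
    # Fires only on conditions someone thought to encode up front,
    # and silently misses everything else.
    return lead["employees"] > 500 and lead["industry"] == "finance"

def model_score(lead):
    # Illustrative weights; a real system would learn these from
    # historical outcomes rather than hard-coding them.
    score = min(lead["employees"], 1000) / 1000 * 0.5
    score += 0.3 if lead["visited_pricing_page"] else 0.0
    score += 0.2 if lead["industry"] in ("finance", "healthcare") else 0.0
    return score

def predictive_trigger(lead, threshold=0.6):
    return model_score(lead) >= threshold

# A promising lead the manual rule ignores but the model catches.
lead = {"employees": 300, "industry": "healthcare", "visited_pricing_page": True}
print(rule_based_trigger(lead), predictive_trigger(lead))
```

The rule stays frozen until someone rewrites it; the model's behavior shifts as it is retrained on new outcomes, which is exactly the adaptivity that manual, fragile rules lack.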
To scale a predictive solution for enterprise use, you need to pick one precise problem and source of truth for your data, work with a few customers to start and go deep. During this process, you should determine common denominators, revamp your product around those features, and test it until you nail repeatability. Then blow it out to many more customers, break the product, and repeat the earlier steps until you get it just right. This requires unbelievable product focus and data science work that’s only been seen in the consumer world until now.
No one’s built a repeatable predictive application in the enterprise before because it’s really hard to do without getting caught up in consulting around individual customer requirements. And this iteration process is nearly impossible under the weight of owning a popular platform that’s predictive-last, because you’re pressured to scale your solution immediately to thousands of customers.
When you’re developing for legacy systems, you don’t have the time and patience to test with a few customers and learn and grow. In addition, this issue is compounded by the fact that the new workflows you’re generating fundamentally question and cannibalize your existing manual rule-based workflows.
It’s Time to Take a Stand with Predictive
Rather than providing basic data bookkeeping services, enterprise software developers should be rethinking how people work and finding new ways to leverage data that make employees more efficient and effective. If we do this well, the next generation of business applications will prove vital for companies of all types and sizes as they make the shift to becoming data-driven businesses.