
    Editor's Note: “There’s a lot of sci-fi-level buzz lately about smart machines and software bots that will use big data and the Internet of things to become autonomous actors: scheduling your personal tasks, driving your car or a delivery truck, managing your finances, ensuring compliance with (and adjusting) your medical regimen, building and perhaps even designing cars and smartphones, and of course connecting you to the products and services they decide you should use.”

     Real or virtual? The two faces of machine learning
    By Galen Gruman

    That’s Silicon Valley’s path for artificial intelligence/machine learning, predictive analytics, big data, and the Internet of things. But there’s another path that gets much less attention: the real world. It too uses AI, analytics, big data, and the Internet of things (aka the industrial Internet in this context), though not in the same manner. Whether you’re looking to choose a next-frontier career path or simply understand what’s going on in technology, it’s important to note the differences.

    A recent conversation with Colin Parris, the chief scientist at manufacturing giant General Electric, crystallized in my mind the different paths that the combination of machine learning, big data, and IoT is on. It’s a difference worth understanding.

    The real-world path

    In the real world — that is, the world of physical objects — computational advances are focused on perfecting models of those objects and the environments in which they operate. Engineers and scientists are trying to build simulacra so that they can model, test, and predict from those virtual versions what will happen in the real world.

    As Parris explained, the goal of these simulacra is to predict when (and what) maintenance is needed, so airplanes, turbines, and so forth aren’t taken offline for regular inspections and maintenance checks. Another goal is to predict failure before it happens, so airplanes don’t lose their engines or catch fire in midflight, turbines don’t overheat and collapse, and so forth.

    Those are long-held goals of engineering simulations; modern computing technology has made those simulacra accurate enough to serve increasingly as virtual twins of the real thing. Greater computing power, big data storage and processing, and the connectivity of devices via sensors, local processors, and networks (the industrial Internet) have made those virtual twins more and more feasible. That means less guesswork (“extrapolation,” in engineering parlance) and more certainty, which means fewer high-cost failures and fewer costly planned service outages for checks.
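
    To make the idea concrete, here’s a minimal sketch of twin-based condition monitoring, in Python, with an invented physics model and invented telemetry (a real twin is vastly richer): compare what the asset reports against what its virtual twin predicts for the same operating conditions, and flag it for service when the residual drifts.

    def twin_predicted_temp(rpm: float, load: float) -> float:
        """Stand-in for the physics model inside a virtual twin (hypothetical)."""
        return 40.0 + 0.01 * rpm + 25.0 * load

    def needs_maintenance(readings: list[dict], threshold: float = 5.0) -> bool:
        """Flag the asset when observed temperature runs hotter, on average,
        than the twin predicts by more than `threshold` degrees."""
        residuals = [r["temp"] - twin_predicted_temp(r["rpm"], r["load"]) for r in readings]
        return sum(residuals) / len(residuals) > threshold

    # Invented telemetry: the engine runs hotter than the twin expects.
    telemetry = [
        {"rpm": 3000, "temp": 96.0, "load": 0.8},
        {"rpm": 3100, "temp": 99.5, "load": 0.8},
        {"rpm": 2950, "temp": 97.2, "load": 0.8},
    ]
    print(needs_maintenance(telemetry))  # True: schedule service before failure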

    There’s another goal, made possible only recently by those industrial Internet advances: machine-to-machine learning. Parris’ example was a wind farm. Old turbines could share their experience and status with new ones, so the new ones could adjust their models based on local experience, and could validate their own responses against other turbines’ experience before making adjustments or signaling an alarm.
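
    Here’s a hedged sketch of that idea in code (the parameter names and numbers are my inventions, not GE’s): a newly installed turbine blends its factory defaults with the settings older turbines on the same farm have learned from local conditions.

    def blend_with_fleet(defaults: dict, fleet: list[dict], trust: float = 0.7) -> dict:
        """Weighted average: `trust` is how much the new unit leans on
        fleet experience versus its factory defaults."""
        blended = {}
        for key, default in defaults.items():
            fleet_avg = sum(peer[key] for peer in fleet) / len(fleet)
            blended[key] = trust * fleet_avg + (1 - trust) * default
        return blended

    factory_defaults = {"cut_out_wind_speed": 25.0, "vibration_alarm": 4.6}
    veteran_turbines = [
        {"cut_out_wind_speed": 23.5, "vibration_alarm": 5.0},  # gusty ridge site
        {"cut_out_wind_speed": 23.0, "vibration_alarm": 4.8},
    ]
    print(blend_with_fleet(factory_defaults, veteran_turbines))
    # roughly {'cut_out_wind_speed': 23.775, 'vibration_alarm': 4.81}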

    The same notions and advances anchor the self-driving car efforts, which have long roots in robotics and AI work at Carnegie Mellon University, MIT, IBM Research, and other organizations. (I was editing papers on these topics 30 years ago at IEEE!) But they have become far more practical thanks to those advances in computing, networking, big data analytics, and sensors.

    All of these industrial Internet and robotics notions rely on highly accurate models and measurements: the more perfect, the better. That’s engineering in a nutshell.

    The probabilistic path

    Then there’s the other path, the one behind virtual assistants, bots, and recommendation engines. This is where much of Silicon Valley has been focused, mainly for marketing activities: Amazon product recommendations, Google search results, Facebook recommendations, “intelligent” marketing and ad targeting, and virtual assistants like Google Now, Siri, and Cortana.

    Those aren’t at all like physical objects. In fact, they’re very different in key ways that mean what you’re computing, analyzing, and ultimately doing shouldn’t — and can’t — be about perfection.

    Think about search results: There are no perfect results. Even if there were, my perfect is not your perfect. It’s all situational, contextual, and transitory. Google is doing a “good enough” match between your search terms and the knowledge it has cataloged on the Internet. It adjusts results based on the information Google has gathered about you, as well as on what most people tend to click, as a rough guide to the good-enough results.
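
    A toy illustration of “good enough” ranking (entirely my own construction, with made-up weights and data): blend a raw term-match score with aggregate click popularity and a per-user interest signal. Two users running the same query get different orderings, and neither is “wrong.”

    def rank(results: list[dict], user_interests: set[str]) -> list[dict]:
        def score(r: dict) -> float:
            personalization = len(user_interests & set(r["topics"]))
            # The weights are arbitrary here; a real system learns them from feedback.
            return 0.5 * r["term_match"] + 0.3 * r["click_rate"] + 0.2 * personalization
        return sorted(results, key=score, reverse=True)

    results = [
        {"url": "a.example", "term_match": 0.9, "click_rate": 0.2, "topics": ["finance"]},
        {"url": "b.example", "term_match": 0.7, "click_rate": 0.8, "topics": ["cooking"]},
    ]
    print([r["url"] for r in rank(results, {"finance"})])  # ['a.example', 'b.example']
    print([r["url"] for r in rank(results, {"cooking"})])  # ['b.example', 'a.example']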

    That’s a probabilistic system. It applies as much to marketing and advertising (Silicon Valley’s big AI and big data focus for the last decade) as it does to search, recommendations, and all the other stuff we read about. Much of the machine learning research is about optimizing these kinds of systems through feedback loops.

    “Probabilistic” does not mean “inaccurate is OK,” of course. But it does mean “accurate” is in the eye of the beholder, so there’s both more freedom to be good enough and significantly more effort needed to understand all the legitimate options. A simulacrum of an engine needs to be an exact match of that engine, but a probabilistic system needs to account for a sometimes broad variety of possible realities and do the best it can under the circumstances.

    If you think about how autocorrect and speech-to-text technologies work, you know what I mean. Language is not math: in grammar, terminology, definitions, and syntax, there are many legitimate variations and many illegitimate ones. Many of those illegitimate variations are in wide use by people who don’t know better, so the algorithms contend with bad information that the users insist is correct. And language evolves, at different rates among different populations.
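
    As a toy illustration of that trade-off (my own sketch, not any product’s algorithm, with invented usage counts): rank correction candidates by a blend of string similarity and observed frequency, so a widespread nonstandard spelling competes seriously with the dictionary form.

    from difflib import SequenceMatcher

    # Invented usage counts, including a popular nonstandard spelling.
    USAGE = {"definitely": 900, "defiantly": 300, "definately": 500}

    def correct(word: str) -> str:
        def score(candidate: str) -> float:
            similarity = SequenceMatcher(None, word, candidate).ratio()
            return similarity * USAGE[candidate]
        return max(USAGE, key=score)

    print(correct("definatly"))  # 'definitely': frequency outweighs the closer misspelling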

    It’s frankly amazing that today’s systems do as well as they do. But they’re nowhere near the perfected-model status of an aircraft-engine simulacrum at GE, Pratt & Whitney, or Rolls-Royce. And they never will be.

    The probabilistic path goes beyond marketing, of course, even if that’s what we see in most consumer technologies. The same techniques have long been used to optimize delivery routes for UPS and FedEx, to help Amazon figure out which warehouse to ship a product from and by which carrier, to adjust airline schedules and the equipment and crews to be used based on weather and passenger demand, to manage just-in-time ordering and delivery of manufacturing parts, and so on. (This used to be called operational business intelligence.)
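
    A stripped-down sketch of that kind of operational decision (all costs and probabilities invented): pick the warehouse-and-carrier pair with the lowest expected cost, where “expected” folds in a measured delay risk. Unlike guessing a shopper’s intent, every input here is measurable.

    options = [
        # (warehouse, carrier, shipping_cost, delay_probability, delay_penalty)
        ("Reno",   "GroundCo", 6.40, 0.10, 12.0),
        ("Dallas", "GroundCo", 5.90, 0.25, 12.0),
        ("Reno",   "AirShip",  9.80, 0.02, 12.0),
    ]

    def expected_cost(option: tuple) -> float:
        _, _, cost, p_delay, penalty = option
        return cost + p_delay * penalty

    best = min(options, key=expected_cost)
    print(best[:2], round(expected_cost(best), 2))  # ('Reno', 'GroundCo') 7.6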

    Those operational BI cases are more exact than the marketing ones, because the idiosyncrasies and changing needs of people are less of a factor in their context. Thus, they have more of an engineering feel to them than, say, ad targeting or search results. But they too are situational, so there will never be a perfect model for them, either. However, there can be better data, and more assurance that a product’s or vehicle’s location and status are known, than we’ll ever get about the state of mind of a user doing a search or contemplating a purchase. In other words, there’s more certainty about the environment and the forces affecting it.

    The next time you hear about machine learning, the Internet of things, big data analytics, and other new computing fancies, keep in mind that there are two major thrusts for this basket of technologies, and that how they work and how to think about them differ greatly based on the specific problem they’re being applied to.

