
Brain Structures, Data Architectures & the Connected Metaverse: Our Evolution of Thought

Evolution of thought is an interesting concept to study. Collective psychologies play an important part in how we interpret the role of technology in problem solving. Our perceived benefits of technology can act as a guide-post for the kinds of outcomes we believe can be achieved, or as preconceived barriers to what technology can truly become. And sometimes it is difficult to fully describe what comes next, like Bill Gates trying to succinctly explain 'What is the Internet?' to David Letterman, or David Bowie describing how the Internet would break down the barriers between the 'creators of media' and the 'audience consuming that media' (as well as the potential negative implications of that shift, i.e. social media).

Potential Future Trajectory of the Metaverse

In this vein, as the connected Metaverse evolves, so too must our collective understanding of the intrinsic strategic value of data, as well as the foundational enablement architectures needed to move from a 'linear' to a 'networked' interpretation of data. It will be a Metaverse in which rich man-and-machine interactions are bi-directional and synchronous in real time, precisely simulating reality in a reactive, high-fidelity, closed-feedback-loop process.

Evolution of Thought on the Metaverse

  1. Value of Data 

  2. Interpretation of Data 

  3. Data Architectures of the Metaverse 

Value of Data: Linear Translation to a Networked Assessment of Data's Value 

What is the value of data?  Through the lens of most organizations, data valuation comes from the linear translation of data assets into analytics, insights, and outcomes.  In general, most companies utilize a linear data pipeline to develop use cases that drive growth (revenue potential, new business opportunities), returns (incremental benefits, resource optimization), or risk reduction (decision-making precision, reduction of blind spots). There are many examples of such successes: Coca-Cola using image recognition on Instagram and Facebook posts to position targeted ads that drove 4x click-through rates (an example of growth); Nike using ultra-precise customer data from its apps, which now contribute roughly 60% of its overall digital business (an example of returns); or McDonald's mega-integrated digital supply chain, in which digital drive-thru menu boards change dynamically based on orders, inventory, and bottlenecks to lower operational costs and risks (an example of risk reduction).
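
To make the 'linear' framing concrete, here is a deliberately small sketch (the stage names and data are hypothetical, chosen only for illustration) of how most such pipelines reduce to a one-way chain of transformations: raw data in, a single insight out, with no feedback and no interconnection between datasets.

```python
# A hypothetical 'pipe' model: raw data flows one way through fixed stages
# and exits as an insight. Nothing flows back, and no dataset is related
# to any other dataset along the way.

raw_events = [
    {"store": "A", "item": "burger", "qty": 3},
    {"store": "A", "item": "fries", "qty": 5},
    {"store": "B", "item": "burger", "qty": 2},
]

def clean(events):
    # Drop malformed records.
    return [e for e in events if e.get("qty", 0) > 0]

def aggregate(events):
    # Roll up quantities per item.
    totals = {}
    for e in events:
        totals[e["item"]] = totals.get(e["item"], 0) + e["qty"]
    return totals

def to_insight(totals):
    # A single, terminal output: the pipeline ends here.
    top_item = max(totals, key=totals.get)
    return f"Top seller: {top_item} ({totals[top_item]} units)"

print(to_insight(aggregate(clean(raw_events))))  # Top seller: fries (5 units)
```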

While each of the previous examples demonstrates real, tangible value derived from raw data, the linear translation decouples data valuation from a true 'networked' assessment of its intrinsic value.  There is a lot to unpack in this statement.  But it begins by differentiating conventional 'pipe' versus 'platform' business strategies, and highlighting that platform business strategies intrinsically use a multi-modal assessment of data valuation, simply because their net business valuation is a direct function of it.  Concretely, for platform businesses, value is derived from factors such as data interactions driven by participants, user-generated content, upstream and downstream partner networks, APIs, and customers acting as a source of innovation - thereby generating new and novel data and the interactions joined to it.  The value of Twitter, Airbnb, Uber, and others is indeed, in part, a function of a multi-modal assessment of data interactions rather than the linear translation of data to outcomes.  Data is viewed in the context of an interconnected network between those who create data and those who consume data (i.e. tweets, posts, transactions, etc.).

‘Pipe’ versus ‘Platform’ Business Strategies supporting Open Ecosystems

The most successful companies on the Fortune 500 have adopted platform-based business strategies & intrinsically value data in the context of an interconnected network.  

Why does this matter?  Because the linear, pipe-like funnel of data-to-insights not only underpins the general strategy most companies take, as witnessed in their collective actions, but that strategic psychology also limits the understanding of the foundational data architectures that will certainly be required to power the Metaverse of the future.  Data cannot be viewed linearly, disseminated linearly, or actioned on linearly to power the connected Metaverse.  Fundamentally, data cannot be viewed one-dimensionally as raw material input to be converted into useful insights and output.  Data exists as a network, data exists amongst other data, data is a network.  Data is a contextualized fabric.  If our collective psychology around data, its intrinsic value, and its optimal structure does not change, it will be difficult to achieve the envisioned 'Metaverse-like future' of truly connected machines, truly connected grids, and truly connected people.

Interpretation of Data: Mimicking the Biological Constructs of the Human Brain

It is interesting to map out the 'biological thought' origins of AI.  This biologically rooted reference has been a powerful analogy that has demonstrably evolved the way we analyze and interpret raw data, seeking meaning in complexity and non-linear patterns.  And much has been written about how neuroscience can be used to inform better AI algorithms:

"When the mathematician Alan Turing posed the question “Can machines think?” in the first line of his seminal 1950 paper that ushered in the quest for artificial intelligence (AI), the only known systems carrying out complex computations were biological nervous systems. It is not surprising, therefore, that scientists in the nascent field of AI turned to brain circuits as a source for guidance. One path that was taken since the early attempts to perform intelligent computation by brain-like circuits, and which led recently to remarkable successes, can be described as a highly reductionist approach to model cortical circuitry. In its basic current form, known as a “deep network” (or deep net) architecture, this brain-inspired model is built from successive layers of neuron-like elements, connected by adjustable weights, called “synapses” after their biological counterparts. The application of deep nets and related methods to AI systems has been transformative. They proved superior to previously known methods in central areas of AI research, including computer vision, speech recognition and production, and playing complex games."

In the machine-world context, we have successfully mimicked the biological constructs found in the human brain: neurons receiving signals through their dendrites, firing electrical impulses along their axons, and releasing neurotransmitters that the dendrites of other neurons pick up. In a conventional deep-learning neural network this is replicated via neurons, activations, and input-hidden-output layers, learning and ultimately predicting via the back-propagation algorithm in a feedback loop:

"Today, deep nets rule AI in part because of an algorithm called back-propagation, or 'backprop'. The algorithm enables deep nets to learn from data, endowing them with the ability to classify images, recognize speech, translate languages, make sense of road conditions for self-driving cars, and accomplish a host of other tasks."

And if we are to carry this analogy of replicating biological constructs in the machine world forward, we also have to look upstream at how the brain takes in raw sensory input, contextualizes that input against existing experiences, creates interconnections between 'the now' and what has happened in the past, and then proceeds with the next course of action. In the language of 'technology stacks', we have done a great job at the analytical processing layer, but we still have a long way to go in the data acquisition, processing, and storage layer. It is indeed the most important layer for unlocking the connected Metaverse.

On the data collection front, if we continue with the biological analogy, our brain processes something like 74GB of information a day, a figure increasing roughly 5% per year. 500 years ago, 74GB is roughly what an average human would consume over the course of an entire lifetime; we now do so within 24 hours. Without a doubt, the human brain as a biological construct has evolved plastically over time to deal with this barrage of constant sensory input. The brain has mechanisms in place to focus on the most important discerning characteristics of a situation or experience, quickly map key data points, contextualize new information against past understandings, and take action.

We see the world in context because we see data and real-world sensory inputs in context. We immediately color data points with past experiences, with incidental relationships, and with other inputs. As far back as 1948, scientists first suggested that the brain forms a 'cognitive map': "[A]n internal flexible and adaptable [data] model of the outside spatial world capable of being dynamically updated as new external information comes in." We maintain such a dynamically updating cognitive map-model of the world because of our ability to contextualize data as we see and process it. And it is because of this hyper-contextualized data set feeding our internal cognitive map-model of the world that it can be claimed "[T]he brain is still the best inference machine out there." The human brain can make predictions after seeing just a few data points or external stimuli. We can jump to largely high-grade decisions and conclusions from only a few external stimuli or data inputs because of context.

As an aside, I recognize this is an oversimplification, and a more comprehensive thesis could be coupled with Anil Seth's state-of-the-art research into the content of consciousness (three dimensions defining consciousness: level, content, and self).  But the main parallel I want to draw is that the brain has the ability to continuously take in ambiguous and noisy sensory signals and couple those inputs with prior knowledge to contextualize data points.  The brain is a prediction machine, but the act of contextualizing data is what allows us to quickly take new inputs and make decisions.

And this is the fundamental key: Data cannot be viewed one-dimensionally as raw material input to be converted into useful insights and output. Data exists as a network, data exists amongst other data, data is a network. Data is a contextualized fabric.  

Now that we understand this contextualized data network fabric through which we interact with the world at large, the broader question is: how is this applicable to the Metaverse, to digital transformation, and to the economic value of data?  It all starts with the way we fundamentally view data architectures.

Data Architectures of the Metaverse: From Rows & Columns to Data Networks

It is clear that change is needed upstream of the algorithmic mechanisms that give rise to artificial intelligence. Data management in the Metaverse will likely be predominantly graph-based, running primarily on GPUs, distributed, massively parallel, and federated. Many have also concluded that data management (and standardization) forms one of the major obstacles to realizing the Metaverse in the near term:

 "[T]he creation and effective use of a metaverse or mirrorworld will be data (and knowledge, which is contextualized data with supporting relationship logic) management. More specifically, to be useful and accurate, interactive digital twins demand web-scale of interoperability–not just application scale."

How we manage data has always been a direct function of how we utilize that data, and this can be seen throughout the history of data management. Higher-level programming languages such as FORTRAN, COBOL, C, and C++ were all predicated on the regularized, tabular data structures they manipulated. ETL then came around to collect data from different sources and convert it into a consistent form to be stored in data warehouses. SQL focused on relational databases and relational data models, providing a unified language for navigating, manipulating, and defining data. NoSQL was a leap towards 'big data': storing, manipulating, and searching vast quantities of structured and unstructured datasets. This in turn gave rise to data lakes and the ability to store large masses of unstructured data for downstream consumption into analytics.
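
The contrast between that row-and-column lineage and the networked view argued for here can be shown with a deliberately small sketch (the asset and sensor names are hypothetical): the same facts stored as table rows, and as graph triples whose relationships are themselves first-class data.

```python
# The same facts, held two ways.

# 1. Rows and columns: each record stands alone; the relationships live in
#    application code (joins, foreign keys), not in the data itself.
sensor_rows = [
    {"sensor_id": "S7", "asset_id": "PUMP42", "reading_c": 73.2},
]
asset_rows = [
    {"asset_id": "PUMP42", "plant": "PlantA"},
]

# 2. A graph: (subject, predicate, object) triples. The relationships are
#    data, so new context can be 'snapped on' without reshaping any schema.
triples = {
    ("S7", "monitors", "PUMP42"),
    ("S7", "hasReading", 73.2),
    ("PUMP42", "locatedIn", "PlantA"),
    ("PlantA", "partOf", "RegionEMEA"),   # added later, no migration needed
}

def neighbours(node):
    """Walk one hop of context around a node."""
    return [(p, o) for (s, p, o) in triples if s == node]

print(neighbours("PUMP42"))  # [('locatedIn', 'PlantA')]
```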

And part of the problem, again, is our collective psychology and the way data has been managed over time: poor architectural design choices, legacy systems, point-to-point integrations, custom code, data silos, application-centric stacks, and ever-increasing complexity (SaaS, cloud, tools, services, IT/OT/IoT).

But where does this leave us now? Do we have the prerequisites for the connected Metaverse in terms of foundational data architectures? What requirements will need to be encapsulated by these new architectures?

  • Plug & play connections

  • Interaction logic between datasets

  • Dynamic data models

  • Contextualized datasets

  • Data enrichment

"Knowledge graphs are ideal for this purpose. At the heart of knowledge graphs are contextualized, dynamic models that allow data enrichment and reuse in the form of knowledge graphs, graphs of instance data, rules and facts that can be easily interconnected and scaled. These knowledge graphs can make interoperability between larger and larger systems possible. Think of it this way: more and more logic is ending up in knowledge graphs, where it can be reused for interoperability purposes. Once you’ve done the hard work of making the graph consistent, graphs and subgraphs can be snapped together, and you can create data contexts, rather than just data monoliths."

And many companies today are already making this shift, re-imagining their enterprise architectures to support what comes next - whether it is Thomson Reuters' financial knowledge graph-as-a-service, Siemens' industrial knowledge graph, or Airbnb's knowledge graph surfacing relevant context to people. The largest companies by market capitalization are not only investing in the analysis of data; they are firmly investing in the data structures that will support richer man-and-machine interactions.
