In this detailed ‘primer’ from the VC Matthew Ball, he offers up a definition of the Metaverse: “The Metaverse is a massively scaled and interoperable network of real-time rendered 3D virtual worlds which can be experienced synchronously and persistently by an effectively unlimited number of users with an individual sense of presence, and with continuity of data, such as identity, history, entitlements, objects, communications, and payments.”

If every user connected to the Metaverse needs to feel present, and that presence must be felt synchronously with other users (and in sync with both the physical and digital worlds), then our devices need to understand, enhance and even translate selected parts of the physical world into their virtual equivalents.

In some cases, this will require the physical and virtual worlds to blend together in an augmented state. Other times it will require elements of the physical world to be recreated digitally in a virtual world. This is often referred to as a digital twin: a digital copy of a physical object.

I can easily see a world in which I buy a new jacket in the physical world and, simply by scanning a QR code on the label, get a digital equivalent that my avatar can start wearing. This simple example highlights the need for objects, concepts and artefacts to co-exist, but it isn’t just objects. Our friends, family, and co-workers will have a digital self, and possibly our pets too. Many of us will build versions of our dream home, or our actual home, in the virtual world. You can already construct a virtual version of your workspace.

These people, things, pets, locations and objects will matter as we drift frequently and seamlessly between these worlds: because we will exist in both at the same time, the digital version needs to be a reflection of the physical one.

The same is true for sound. The devices we use to connect to the Metaverse, and the software-based worlds we inhabit, need to work out how the sounds of the physical world interact with and affect the virtual one.

The ways in which devices understand and process sound will be important. That could mean recognizing the sounds around us and then using that information to change, adapt or disrupt our digital experience. Or it could mean eliminating or enhancing certain physical-world sounds to greatly improve the experience within the virtual or augmented worlds.
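To make this concrete, here is a minimal sketch of how such routing might work. Everything in it is an assumption for illustration: the sound labels, the policy table and the `route_sound` function are hypothetical, not any real device's API.

```python
# Hypothetical sketch: deciding how a recognized physical-world sound
# should affect a virtual experience. Labels and policies are invented
# for illustration only.

from dataclasses import dataclass
from typing import Literal

Policy = Literal["pass_through", "suppress", "interrupt"]

# Assumed policy table: which physical sounds should break into,
# blend with, or be filtered out of the virtual world.
SOUND_POLICIES: dict[str, Policy] = {
    "doorbell": "interrupt",          # safety/attention: surface it
    "smoke_alarm": "interrupt",
    "speech_nearby": "pass_through",  # blend into the virtual audio mix
    "traffic_noise": "suppress",      # eliminate to improve immersion
}

@dataclass
class SoundEvent:
    label: str         # output of an (assumed) on-device classifier
    confidence: float  # classifier confidence in [0, 1]

def route_sound(event: SoundEvent, threshold: float = 0.6) -> Policy:
    """Pick a policy for a recognized sound; default to immersion."""
    if event.confidence < threshold:
        return "suppress"  # not confident enough to act on it
    return SOUND_POLICIES.get(event.label, "suppress")
```

The design choice here is that anything unrecognized or uncertain is suppressed by default, so immersion is only broken for sounds the device is confident actually matter.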

Imagine – for ease of argument – that I’m talking about a VR headset as my entry point to the Metaverse. That device is likely to understand the world around me through object recognition, voice recognition, cameras, spatial mapping, LIDAR, ultrasound, accelerometers, gyroscopes and geolocation. It could be all of them or just some. They help tell the device what is physically in my space (to stop me from falling over, etc.) or which way I’m facing. They’ll adapt what happens in my digital world based on the commands I give or the movements I make.

However, as we have done all our lives, we rely heavily on our hearing to help us understand the world around us. Sounds carry significant metadata that we want to know about and that prompts a reaction. Alternatively, the sounds around us can be distracting and irritating, degrading our experiences.

There are many ways in which a greater sense of hearing, beyond just voice, is useful both to keep users connected with the physical world and to enhance their experiences in the virtual one. The Metaverse promises a revolution in how we interact with technology, but its success will hinge on how well it makes us present in two synchronized worlds.
