Case Study: The Guardian Avatar – with Martin Geddes

Recently, I got a unique opportunity to jump on a video call (a recorded video call – listen to it to appreciate the irony) with Martin Geddes, co-founder and executive director of the Hypervoice Consortium. Martin shared his thoughts on the trajectory of the telecoms and mobile industries, the problems it raises, and the solutions it calls for. Tune in to learn, among other things,

  • Why people are not comfortable with their voice being recorded
  • What is missing from the dominant computing paradigm and the way we engineer software
  • What makes the phone the original ‘internet of things’ thing
  • How the Guardian Avatar is a virtual identity, a “fourth wall”, and a metaverse browser
  • Why most current wearables are worse than useless, and what they should do instead
  • Why the existing Internet infrastructure is inadequate for the ‘internet of things’
  • Whether artificial intelligence is the right tool in the world of sensors
  • What the real threat of artificial intelligence is
  • What question you should ask yourself as a technologist
  • We have pretty much figured out computer programming. What’s next?

Martin Geddes is a consultant and authority on future telecoms business
models and technologies. He was formerly Strategy Director at BT’s network
division, and Chief Analyst and co-founder at Telco 2.0. Martin previously
worked on a pioneering mobile web project at Sprint, where he was a named
inventor on 9 patents, and at Oracle as a specialist in high-scalability
database systems.

He co-runs public workshops on Future of Voice and Telco-OTT Services, as
well as providing speaking, consulting, training and innovation services to
telcos, equipment vendors, cloud services providers and industry bodies. He
is currently writing a book on the future of distributed computing, called
The Internet is Just a Prototype.

Martin holds an MA in Mathematics & Computation from University of Oxford.

 

“We are potentially entering the most wonderful of eras. We can engineer good relationships and happiness. We are also entering into one of the most horrific of eras where it’s North Korea everywhere. Maybe both, at the same time. It’s going to be weird and wacky.” — Martin Geddes

Prefer listening to watching? Download the MP3 file here.


Could you please tell us a little bit about yourself, the work you do, and how that led you into exploring the future of communications?

Okay. Let’s take the long arc to this story. Yes, I’m deeply geeky. Before I’d even left school, I was working with a friend of mine to design CPU instruction sets. I went off to do a degree in Math and Computation 25 years ago, so basically, theoretical computer science. The thing I was rewarded for was thinking about formal methods of proof of software correctness. Then, in the 1990s, the easiest way of making a good living without working too hard was to work in the IT industry. I spent my time building artificial intelligence systems in Lisp for manufacturing, then building back-office systems for banks doing check clearing. I was a consultant at Oracle for four years. I was very focused on the traditional software industry and automating IT processes.

Then in 2001, I got seduced into going across the Atlantic to work for Sprint and join a phone company. I was transitioning out of that world of IT into another technology domain: telecoms. I knew what networks were, but never had to deal with one; not really. Famously, the phone company is centered on one particular tool of technology, which is voice. We’ve got computers held to our ears, of all places, but not to our hearts. Imagine if only it were our noses! After all, they are like our ears and our mouths! We have a whole network industry dedicated to the replication of this sensor data between our ears and our mouths, done at a distance.

I got into telecoms having done a degree in theoretical computer science, so I pretty much grasped how this computing thing worked. “I think I’ve got this nailed.” Then I got into telecoms. It’s like, “Wow!” It’s technology, but it’s all different and it’s pretty weird. They’ve got their own language, their own way of thinking, and I was a bit lost. I found myself in this dot-com business in Kansas City, trying to build open communication platforms for the wireless web. This unexpectedly turned into a whole new career, which is trying to understand what makes the telecom industry work. I figured out IT during the 1990s, and I’ve kind of almost got there with telecoms 15 years later.

There are two main things that I’m interested in and that I work on. One is the network itself. All networked computing is now just distributed computing. There isn’t the cloud, there isn’t the network, there isn’t the PC. It’s just distributed computing. There’s only one business. I’m interested in the future of how that infrastructure works at a very deep level.

The other thing is the people: the separate journey of learning and discovery I’ve undertaken about people and organizations. We have hearts, we smell; we aren’t technology. It’s been a journey inspired by thinking about the future of voice, which has led me into a much wider world. That’s the broad space in which we’re playing with these new ideas: people, technology, and the interaction of the two.

As I understand it, you formed the Hypervoice Consortium… back when?

About five years ago, I was doing a piece of consulting work for a client. This company had come up with a very clever new way of capturing conference calls, and relating what you’re saying to all the notes you’re typing, moving through your PowerPoint slides, tagging moments, assigning actions, and so on. They were linking all that together into a new kind of time-based data object. This object was searchable and navigable, but they were having some trouble describing it to people in the world: what it was, what it meant and why it was useful. They hired me to come along and I was kind of, “Yeah. Uh-huh, uh-huh”. I had a hot chocolate. They had a coffee. Hmm. “I think you’ve just built the first hypervoice system.”

That idea of linking things together in time, in the activity stream around voice, is the core. It’s like hypertext, where there are relationships between various objects in a spatial metaphor. Instead, this was a temporal metaphor.
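
To make the “time-based data object” idea concrete, here is a minimal sketch in Python of what a time-anchored hypervoice record could look like. The class and field names are my own illustration, not the actual format that company or the consortium used.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Moment:
    """One event anchored to a point in the recording's timeline."""
    offset_s: float          # seconds from the start of the call
    kind: str                # e.g. "note", "slide", "tag", "action"
    content: str

@dataclass
class HypervoiceObject:
    """A call recording plus everything linked to it in time."""
    audio_uri: str
    moments: List[Moment] = field(default_factory=list)

    def add(self, offset_s: float, kind: str, content: str) -> None:
        self.moments.append(Moment(offset_s, kind, content))

    def search(self, term: str) -> List[Moment]:
        """Navigate the call by content rather than by scrubbing audio."""
        return [m for m in self.moments if term.lower() in m.content.lower()]

# Usage: link a typed note and a slide change to moments in the call.
call = HypervoiceObject("file://calls/2014-interview-03.wav")
call.add(62.0, "slide", "Slide 4: sensor data and trust")
call.add(75.5, "note", "Interviewee uneasy about voice recording")
print([m.offset_s for m in call.search("recording")])   # -> [75.5]
```

The point of the temporal metaphor is exactly this: the recording becomes navigable by the activity around it, not just by rewinding.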

That resulted in me working with the CEO of the company, and we created the Hypervoice Consortium. It seemed to be a rather lonely place, a bit like doing hypertext in the 60s and 70s before the Web. We were engaging with a whole community of people who were working on the future of voice.

We got some sponsorship from some very kind large companies. I had the pleasure in 2014 of going away and spending a large chunk of time, over a year, thinking about and researching the future of voice with my colleagues. In the process, we interviewed thirty experts, across virtual reality, disability services, electric cars and a whole vast range of different subjects.

We were reading lots of books and articles, and watched TED talks in near-lethal doses. It meant going away and having week-long retreats to think about this stuff, integrate it, and draw a big map of the world. In that process we started to have an “A-ha… Ooh…” moment. Even for us as “experts”, there was a sense that the assumptions we came into this project with were suspect. There was a bigger picture.

Can you give us a layout … What was this picture?

There was an ongoing hypothesis, which is on SlideShare: “The Future of Communications 2024”. We had these ten different ideas about how communication would improve over a decade. AI bots would join us in conversations, our conversations would be recorded and managed in different ways, and contextualized in different ways. Devices would change and become more ambient. We had some refinement on those basic ideas and a way of organizing them.

What we quickly found was that our radical ten year vision into the future of the phone call had only one problem with it, which was that it was already happening.

We weren’t good futurologists; we weren’t even good present-ologists. So, we had to go back to the beginning, and we were forced to rethink at a much deeper level what the future of human communications is like. What does it mean to communicate, and what is it actually for? How does it happen when you mediate those communications (as you can see with us talking right now) through digital technology?

There was a critical “aha!” moment. We had been interviewing all of those people using this hypervoice conference calling tool to capture all of our notes and organize our stuff.

We started each call with, “Hello Mary, thank you for giving us an hour of your time, we really appreciate it. We understand you’re an expert in social robotics (or whatever it is). Would you mind if we recorded this call, just for note-taking purposes for this report?” Of course they didn’t care. “Yes, absolutely, no problem.” … We hit “record” and then heard an automated voice: “This call is being recorded.”

It eventually dawned on us that we had spent the first minute of every one of those calls going through a negotiation process to record the call. In this case, because of the nature of the invites and the relationships we had, the answer was always “yes”. However, the contract around that recording had not been captured. The “this call is being recorded” moment came after we said we would only use it for note-taking purposes.

The exact terms under which the call can be recorded were not captured.

Hey, it was not captured! It had been negotiated manually, by humans, by voice. So we started to realize the implications…
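
As an illustration of what “capturing the contract” could mean in practice, here is a hedged sketch of a machine-readable consent record. The fields and values are assumptions made for the example; no such standard is described in the conversation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import List

@dataclass
class RecordingConsent:
    """The terms actually negotiated by voice, written down as data."""
    granted_by: str
    granted_to: str
    purposes: List[str]           # e.g. ["note-taking for the research report"]
    retention: timedelta          # how long the recording may be kept
    granted_at: datetime = field(default_factory=datetime.utcnow)

    def permits(self, purpose: str) -> bool:
        # Anything outside the stated purposes needs a fresh negotiation.
        return purpose in self.purposes

consent = RecordingConsent(
    granted_by="Mary (interviewee)",
    granted_to="Hypervoice research team",
    purposes=["note-taking for the research report"],
    retention=timedelta(days=365),
)
print(consent.permits("note-taking for the research report"))  # True
print(consent.permits("training a voice-analysis model"))      # False
```

The spoken “yes” covered only the first purpose; a record like this keeps that boundary attached to the recording itself.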

The other thing was that, over and over and over again, people were telling us, “Look, there’s all this great new technology and it sounds interesting, but I don’t like recording my calls. I don’t trust you to do that.” Not us personally, but the phone company, Google, whoever it is. “Nobody is trusted with my voice.” We type in search terms, do a bunch of stuff online. But the moment it came to bio-sensed human data, my intimate data from my body and my voice – no. Nah, nah, NAH!

It was clearly something of importance… There’s a boundary being crossed here that needed attention. We started to realize that the figural issue wasn’t the clever things you can do with voice. Me, with a computer science degree from Oxford. Kelly, my colleague, has a Harvard degree, and she is a smart tech CEO. We’ve been rewarded in our lives for our ruthless ability at logic. The whole IT industry has been self-selecting people for their logical thinking. “Can you code? Can you program?” “Oh, yes, we can do that!” “We’ll now promote you to a team manager for coders.” “We’ll now promote you to a product manager!”

That’s fine when what you’re trying to do is automate the back end of a bank to do clearing systems. The moment you try to deal with humans, in their human state, we’re entering into a new domain. In some ways, even recording voice is “programming” a human, by performing the computing “identity function” on voice. “Yeah, I stored it and brought it back. I’m programming humans.” So there’s an echo, a shadow of you, out in the world.

It required a new way of thinking, a new paradigm. Now, there’s a second a-ha moment. The first one was that we were not paying attention; the clue was in the interviewing process that we ourselves were engaged in. The second one came when we were out on a retreat at my colleague’s house up in Sturgeon Bay, Wisconsin. There were four of us there: me and my business partner Kelly Fitzsimmons, both GenX’ers, and two helpers, including Lindsay, who’s our marketing guru, a millennial.

While we were analyzing all of the stuff, she was going, “you two aren’t very touchy-feely, are you? You want all these logical outcomes, don’t you?” And whilst we were there, there was this mysterious sound when we would be talking. A beep. A few minutes later, another beep. We started looking around the house. Is it the smoke alarm, is something upstairs malfunctioning? The thermostats? Is it the oven, the microwave? After quite a lot of searching, because it was a minute or two from beep to beep, we found the refrigerator door was ajar by a tiny amount. The fridge was going, “Beep!”

At that kind of point, the penny dropped (or at least the cent dropped), and the whole thing came together. Like, “Ah, right, so, the fridge is telling us there’s something wrong”.

The fridge knows what’s wrong.

It knows what’s wrong, and it’s trying to indicate wrongness. But it is offloading the sense making of that wrongness to us. In the process, it’s causing us a feeling of anxiety and frustration. The logical thing is to just close the fridge door.

There are three aspects to this puzzle. Is it Plato, or Aristotle? I cannot remember. The basics from which a good argument is built are logos, pathos, and ethos: appeal to logic, appeal to feelings and appeal to ethics, to what’s right. We as computer science-y type people had really strong, good, 20/20 vision into logic, and we were half blind to the feelings and the ethics. So when we go back to the “Can I record this call?” moment, we were engineering an ethical outcome to the call.

To keep doing that properly in the future would have required us to go into the, “By the way, do you mind us recording this call? It’ll be stored in Iceland, under my enterprise’s data retention policy. We may run it through an analysis program that checks for mental health conditions of people we do business with. Also, I do have a relationship coach program that tells me when I’m interrupting people, because I know that’s my bad habit. Or I’m not listening properly, or I’m talking in a strange funny foreign accent. Slower, faster. By the way, what’s your enterprise data retention policy, and can we spend the next five minutes negotiating that? Also, you’re roaming and you’re using a device from company A on a platform of company B under enterprise C.”

The whole call would have been taken up with the process of negotiating the recording of the phone call. We would have been, from a logical perspective, totally unproductive. We would have got the ethical outcome, but it would have felt awful. So there are things you have to pay attention to in addition to engineering a logical outcome.

“Phone calls that can be recorded and searched.” Great! But you also had to engineer an ethical outcome, too, so that there was an appropriate management of the power relationship and transparency. Thirdly, you also had to pay attention to the feelings. How would people feel about what’s happening around this? Do they feel unsafe about having their bio-sensed data captured?

Logos, ethos, pathos. You need to look at technology and engineering problems through all three of those lenses.

We started to think, “How can we help to solve this problem? How can we reconcile the huge potential benefits we can see from capturing the tone of your voice, or your galvanic skin response during a conversation, or your heart rate? Your eye focus… Am I making appropriate eye contact with you?” In looking at the wider world of how our human bodies can be integrated into tech, voice just happens to be the precursor, the harbinger, of a general world of sensors for a set of things.

The original and best “internet of things” thing is the telephone. We put a microphone sensor into everyone’s house, and a speaker, to relay it somewhere else.

The idea of the Guardian Avatar came from one of the interviews we did with a senior technology industry member. He talked about a Guardian Angel, and we stretched that idea.

The “Guardian Angel” helps to take care of the ethos bit. We stretched it to think more of the pathos as well, and the logos. The resulting Guardian Avatar is the digital shadow of you. It’s a digital doppelganger who helps to act on your behalf, and protect you in this metaverse, in this emerging synergistic hybridized digital-physical world.

It’s a little bit like how Doc Searls has written about the idea of vendor relationship management. You have corporations. They use customer relationship management to gain power over you as an individual. You can use vendor relationship management services to gain power over them. You’re kind of having a third party and a fourth party. Some companies kind of act as vendor relationship managers: maybe credit-checking bureaus, or even something like TripAdvisor. Is this company over there one that you should do business with? Are they ethical?

The Guardian Avatar was conceived initially as a way of thinking about automating that negotiation at the outset of an interaction. This Google “Hangout on Air” we are doing: can you now go and resell this? Is this a creative commons use by whatever company? I didn’t sit down here and agree to that! My life is busy. I don’t want to spend time having to negotiate what’s going to happen with this recording. Which archives will this be put into? Which distribution systems? Will this be put on Facebook? I don’t like Facebook very much. I don’t think they’re necessarily a very ethical business. I have no means of expressing that. If this is going to be put on YouTube and sold, how is my cut going to be negotiated? Whatever it is.

So we need systems in the world that represent us, that look after us. There is one inescapable fact, which is that to the best of my knowledge each of us has precisely one body. (Some people claim to have a different experience, which is rather suspect.) So, ultimately, in some sense there can only be one Guardian Avatar for each of us. There can only be one identity that represents our body in that virtual space and virtual sphere. It doesn’t mean it’s a single piece of software. It just means that, conceptually, there is only one shadow of me. Like if I stand outside in the sunshine, there is only one shadow of me.

The Guardian Avatar was born as a way of framing the problem. You can think of it in several ways. One way is that it’s the next-generation browser. Today’s browsers require us to go into the virtual world, to pretend that the world is thirteen inches across, or four and a half inches across. We’re about to enter a world of mixed reality. Cyberspace is over. We’re now in cyber-meatspace. It isn’t just about call recording for voice, and the future of the phone call. It’s about how, when I meet you in the street, or I’m within 200 meters of you in the street, a computer is going to mediate our relationship. Your friend is in the same supermarket – do you still want to rush to meet them when your basket is full of fifteen bottles of vodka?

Or you might just not have time to have a conversation.

Or, “It’s a great party. You might want to come to this.” Whatever it is. We are creating a symbiotic future between us and our tech. We have been doing it for a century. However, this technology accelerates that process enormously.

The Guardian Avatar, firstly, is a browser for the metaverse. Just like a web browser is a very different thing from a green-screen terminal, conceptually it’s a portal into another world. This is the browser for the hybridized world.

Secondly, it’s a thinking tool. If you’re in the world of theater, there’s the idea of the fourth wall. On the stage, there’s the back wall, two side walls, and then there’s this big wall between you and the audience. It’s referred to as the fourth wall. Of course, there is no “wall”. Some plays deliberately pierce the fourth wall: the actors walk off the stage and interact with the audience. I’ve been in a play, called “The Audience”, that was entirely about the fourth wall; people have tried to reverse things. It’s a thinking device for thinking about the space and the relationship between us as the actors on stage and the audience in the world.

It’s also a practical type of technology. There’s a long list: homomorphic encryption, and various data hiding and sharing techniques. There are all kinds of information security tools and techniques for managing, negotiating and revealing information, and storing it in appropriate ways. What we haven’t done yet is…

In some ways this is a conceptual re-foundation of what computing is, in two or three senses. When people like Alan Turing or Church or von Neumann put together the idea of computers, they saw them as symbolic devices, transformers of symbols into other symbols, with some sort of rules along the arrow of time. They missed out several things, as they had to.

They were not thinking about the internet of things. There was no concept of the symbol coming from an animal, like me, was there? The menagerie of life does not appear in the computer textbook. Where did those symbols come from? Secondly, therefore, the privacy of those symbols was not thought of as a first-class object in computing. Only the transformation of the symbols was. Storage – first-class object. Compute – first-class object. Communicate – only implicitly, in the sense that you wrote the symbol onto the ticker tape and some time later the Turing machine took it off; the communication was unexpressed. Security wasn’t there: any Turing machine can connect anywhere on the ticker tape, and there was no idea of which two things were associated with each other, or where the boundary was. So that was missed out from the theory. And performance was sadly lacking too. All these actions happened in time, but the amount of time that elapses is not defined.

In the foundations of computing, we got some things great and some things were missed out: identity, security, privacy, performance. Now we have the challenge of taking the systems we’ve built over the last fifty to sixty years and re-factoring them in light of the fact that we have failed to engineer these things correctly. We’ve attempted to retrofit them onto a basic model, but the essential model of what it means to program and to compute doesn’t include programming the human, and doesn’t include critical aspects that represent the interests of the humans.

It’s as if computing lives entirely inside the logos box. The pathos and the ethos, what ethical outcome or feeling state am I engineering, have no meaning in that box. If you ask, computer science doesn’t have it… You can come out of a three- or four-year degree in computer science without having mentioned the words ethics and feelings. This is a problem, because the entirety of the future of computer science is engineering ethics and feelings. There is a fundamental disconnect. In some sense, computer programming is over, it’s done. Stop. We’ve figured it out. Just don’t do it in JavaScript, that’s evil. You’ll be held responsible for anything that happens as a result of using JavaScript. Please use languages that are safe and well typed. But that’s not good enough; that’s aesthetics. What you also have to understand is what the impact on the human would be.

A simple example that I’ve used over and over again: I’ve been using a program called f.lux on my Mac here. Apple has produced something similar on the iPhone. Towards the evening, the blue light on the screen gets turned down, because we have a pineal gland that issues melatonin; blue light makes it think it’s still daytime, and therefore we cannot sleep as well.

The device driver between my laptop here and this screen-y thing here does not include any concept of a human watching it, of the impact of the pixels lighting up on a human. We have missed out the entirety of the model of a human in computing. Whoops. It’s like, “You have been calmly sat in front of your computer for eighteen hours without moving. Maybe you could move.”
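
As a toy illustration of putting “a human in the device driver”, here is a sketch of a policy that scales down the display’s blue channel as the evening goes on, in the spirit of f.lux. The thresholds and the linear ramp are assumptions for the example, not f.lux’s actual algorithm.

```python
from datetime import datetime

def blue_scale(now: datetime) -> float:
    """Return a 0..1 multiplier for the display's blue channel.

    Full blue during the day; ramp down through the evening so the
    screen stops telling the pineal gland that it is still daytime.
    """
    hour = now.hour + now.minute / 60
    if hour < 18:          # daytime: leave the screen alone
        return 1.0
    if hour >= 23:         # late night: strongly reduce blue light
        return 0.4
    # linear ramp between 18:00 and 23:00
    return 1.0 - 0.6 * (hour - 18) / 5

print(round(blue_scale(datetime(2016, 6, 1, 21, 30)), 2))  # -> 0.58
```

The point is not the arithmetic; it is that the display policy takes the human’s physiology as an input at all.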

Now it becomes a crisis, as we move into the era of wearable tech.

I won a Galaxy Gear S smartwatch and I wore it for months as an experiment, and it was worse than useless. It was a device I would pay not to wear, because all of my notifications were coming through from my phone, my wrist kept on being vibrated, and my attention was being shattered by this device. Every time I’d go for a walk and stop walking, it goes, “Congratulations”, all gamified. I’m like, “Buzz off!” I’ve just had my walk, I don’t need to have any praise now. It’s like, how does that serve me? It doesn’t.

Whereas: I’m in an unfamiliar city. There’s a nice walk over that way, yeah! I see you’ve got an hour on your schedule. Simple example: I was at Boston Logan airport, staying in the Hilton at the airport. A good hotel. You can walk all the way from the Hilton through the terminals to the terminal I was supposed to go to. The default in all the instructions is to take the bus. Actually, it’s a ten-to-fifteen-minute walk, and guiding me to use my legs to go to the terminal, that’s good.

I have a theory that most of the exercise in America actually happens in airports. The primary purpose of airports is not travel, it’s to cause Americans to exercise by walking through to the gates.

The Guardian Avatar is part of the nature of the new demand, which is helping us to live better. My friend Lee Dryburgh is running a new conference in November in Silicon Valley on Hyper Wellbeing: not only how can we be healthy, but how can we engineer happiness and how can we optimize our lifetimes.

We are predictably stupid. Computers can tell when we’re about to have a bad relationship, a marital meltdown. They can help us. To implement that, we need a way of engineering these things, and you cannot do some of these things with the conceptual tools of the 1940s and seventies. We have to go back to the basement and rethink what computing is and what a computer is for in the world of sensors. Privacy and performance and security must become first-class design objects, like they are in other disciplines. When you build a skyscraper, you don’t send in thousands of people and load it up and see when it falls down and say, “Hey, actually, maybe that will be useful as an office building after all.”

Just like this video voice interaction we’re having: it is not engineered. This is the result of an emergent performance outcome between Lithuania and wherever you are, and it could stop right now. It could go away tomorrow. There are serious negative scaling problems in the Internet that people aren’t generally aware of. This Internet is basically pretty screwed. It’s just not going to last in its current form. So we have a very serious engineering problem, and we have to get back to some basic science. To define performance requires new math. To define security, to define what privacy is. To engineer systems, to engineer frameworks that make it impossible to do stupid things from the outset.

Just like JavaScript: it is not really an accident waiting to happen, it’s an accident that happens everywhere. If you have well-typed computer languages, there’s a set of mistakes you just can’t make. A lot of the mistakes that are happening are like that. If our voice goes into a containerized system with a set of privacy invariants around it that no programmer can violate, and an application program cannot violate the underlying operating system, or Guardian Avatar, or whatever it is that’s taking care of it, then we’re in business.
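
Here is a minimal sketch of that “containerized” idea: voice data wrapped so that application code can only reach it through operations the consent policy allows. The class is my own illustration; in a real system the invariant would be enforced by the operating system or hardware, not by a language convention.

```python
from typing import Set

class PrivacyViolation(Exception):
    pass

class GuardedVoice:
    """Voice data wrapped so it cannot be used outside the agreed purposes."""

    def __init__(self, samples: bytes, allowed_purposes: Set[str]):
        # In a real system this boundary would sit below the application,
        # in the OS or hardware; here Python merely models the invariant.
        self.__samples = samples
        self.__allowed = frozenset(allowed_purposes)

    def use(self, purpose: str) -> bytes:
        if purpose not in self.__allowed:
            raise PrivacyViolation(f"purpose {purpose!r} was never consented to")
        return self.__samples

voice = GuardedVoice(b"...pcm audio...", {"transcription for notes"})
voice.use("transcription for notes")          # permitted
try:
    voice.use("sentiment profiling for ads")  # raises PrivacyViolation
except PrivacyViolation as err:
    print(err)
```

Just as a well-typed language makes a class of mistakes inexpressible, a container like this makes a class of privacy violations inexpressible to application code.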

If we’re going to let people run amok and do the equivalent of JavaScript in the internet of things, we as a society are screwed. We have no idea what we’ve got ourselves in for, because there’s a fundamental lack of trust going on. People don’t want their voices recorded. Yet this is going at-scale, with all the opportunities of machine intelligence in sensing, and we are all being falsely seduced into trusting it.

It’s like Siri comes over with an “I” and talks with you as if it’s got a soul. Really, it’s a highly elocuted, evil imbecile. It speaks beautifully. It has no ethics whatsoever, and it’s basically stupid. By presenting itself as a human, by giving a human voice to that interface, it is making you think that it will act human-like in its ethical stance to you, in its intentional stance to you. We are opening ourselves up to a Pandora’s box of problems. The Guardian Avatar is a way of dealing with that.

We also need to build a new Internet. I give thanks to the 1970s prototype, it’s been quite interesting. No, we don’t need a “Save the Internet” campaign. We need a “Destroy the Internet” campaign, and we need to build another one, which has performance, security, privacy, and resilience built in as first-class design objects.

I highly recommend everyone go to hypervoice.org. On the front page there is a video which gives a really mind-blowing illustration of what life might be like with a Guardian Avatar on your side.

There’s a two-minute video, there’s also a free report which captures the essence of our thinking, and I’ve also published a presentation on the Guardian Avatar concept on SlideShare.

Do you know of an example of something which exists in real life? Something which gets close to that kind of experience?

Yes, it’s all over the place if you look for it. Every web browser has a TLS security negotiation to secure a connection. You can choose a web browser of your own, open source, go to a website, and the two of them will interact and set up an appropriate communication which is secure. And there is a whole world of companies out there at the moment, you would not believe how many, building wearables and mobile devices to help engineer our living spaces and look after our wellbeing.
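
The browser analogy can be seen from any TLS client: the two endpoints negotiate protocol, keys and ciphers automatically, with no human spending the first minute of the “call” on it. A small sketch using Python’s standard ssl module (it assumes outbound network access, and hypervoice.org is simply the example host mentioned above):

```python
import socket
import ssl

host = "hypervoice.org"
context = ssl.create_default_context()   # client policy: trusted CAs, modern protocols

with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        # Both sides agreed on all of this automatically during the handshake.
        print("protocol:", tls.version())
        print("cipher:  ", tls.cipher()[0])
        print("peer:    ", tls.getpeercert()["subject"])
```

The negotiation the Guardian Avatar would do for recording terms is the same shape of problem, just lifted from transport security to human consent.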

One of the ones I really like is called soulight. S-O-U-L-I-G-H-T. There’s a new version coming out soon. It’s only on Android. It helps you be mindful of your current mood and energy state, and takes you on a little musical journey between moods.

Imagine a machine, not far in the future, that sees you’re getting very stressed about something. You get off the plane, you’re tired, there’s a long queue at security. You’re starting to lose it. The kids are going crazy. In your little earpiece it starts to play a little bit of rock music, or whatever it is it takes, to bring about a new state. You raise your energy slightly, whatever it is.

There are other examples, and it’s like, oh my goodness, there is so much of this stuff going on, but people haven’t realized that it’s a new industry. Information technology is done. It’s finished, it’s over. Thank you. It was lovely. Human technology is where it’s at now.

Just like electric motors. We don’t go around obsessing about electric motors and the electric motor industry anymore. We don’t think, “Yes! I’m going out and buying an electric motor device. What kind of motor does it have inside of it?” My phone vibrates – I don’t care about how it vibrates.

The IT components just become background objects. They have to have predictable properties. What we’ve got either has no properties, has no concept of privacy or association control as an engineered object, or has emergent properties, or accidental properties, or mis-described properties. It isn’t about the internet of things. Who gives a shit about the things, right? It’s us I care about! If you want to build the metaverse, the hybridized human-computer world which serves our needs to be healthy and happy and to flourish, then you need the right tools for the job.

Even to record this phone call may require a huge amount of signaling and AI to happen in just a few seconds as the call comes in, in order to negotiate between all of those actors how it’s going to be dealt with. And if the automated terms-of-service lawyers can’t deal with it, we may turn around very quickly and be asked, “Do you approve?”, blah. It isn’t the media transport of the voice that is the hard problem; it’s the “decision matrix” that has to make all those choices that is the hard performance engineering problem.
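
As a sketch of that “decision matrix” idea: each party’s avatar could publish a recording policy, the system computes the intersection before any media flows, and it escalates to the humans only when the policies disagree. Everything below is an illustrative assumption, not a real signaling protocol.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RecordingPolicy:
    allow_recording: bool
    allowed_uses: frozenset        # e.g. frozenset({"note-taking"})
    max_retention_days: int

def negotiate(caller: RecordingPolicy, callee: RecordingPolicy):
    """Return the agreed terms, or None if a human has to be asked."""
    if not (caller.allow_recording and callee.allow_recording):
        return None
    shared_uses = caller.allowed_uses & callee.allowed_uses
    if not shared_uses:
        return None                # escalate: "Do you approve?"
    return RecordingPolicy(
        allow_recording=True,
        allowed_uses=shared_uses,
        max_retention_days=min(caller.max_retention_days, callee.max_retention_days),
    )

me   = RecordingPolicy(True, frozenset({"note-taking", "transcription"}), 365)
them = RecordingPolicy(True, frozenset({"note-taking"}), 90)
print(negotiate(me, them))   # agreed: note-taking only, kept at most 90 days
```

The hard part is not this intersection; it is doing it, reliably and within a second or two, across many parties, devices and jurisdictions at call setup.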

The current internet is just a prototype. The Web, it’s kind of the new green screen. The hybrid reality metaverse full of sensed data, bio-sensed data, intimate data, requires a different infrastructure. Good news, we’ve solved all of our problems!

I wonder if any areas of our computing will be immune to this new thing. I mean, can they stay in the old paradigm of the Turing machine?

It’s not that the Turing machine is wrong, it’s just that it has nothing to do with humans or performance or security. So, it’s a bit like running an old IT infrastructure in a virtualized container. You have MS-DOS running inside Windows, or Linux running inside a virtual hypervisor or whatever it is.

We can keep the old stuff. We can even keep TCP/IP, which is the JavaScript of networking, but only slightly worse. We can keep those things, but we need to very carefully contain and bound them, and control what goes across those boundaries, because what’s inside of them isn’t safe. It’s like an 1830s steam engine that blows up and burns people to death occasionally. We kind of got used to that for a while, but then realized that steam engines don’t have to blow up all of the time, catch fire and incinerate people.

Basically, the computer programming part of software engineering is done; we’ve figured that part out. The idea is that now, software engineering should take into account all of these other aspects.

As my colleague Lee Dryburgh, inventor of the Hyper Wellbeing event, describes it, we are moving from computer programming to people programming. And the mistake being made is focusing on artificial intelligence (making computers more humanlike) rather than on identity augmentation, which is giving humans superpowers from computing. We’ve got it backwards, so artificial intelligence is the wrong problem. It’s not how you solve it; it’s not the problem. How do I give myself superpowers of empathy and understanding? Well, it’s not artificial intelligence, because its very nature is rooted in logos, not ethos.

It’s people from the old crowd who could cope with assembler and early operating systems. It’s our brains wanting to immortalize themselves through intelligence. Even Turing… the concept of the computer came partly from the loss of his friend Christopher as a teenager. Computing was invented to deal with a pathos problem, which is Turing’s grief.

If you’re a technologist and you’re not able to wear three eyes – logos, ethos, and pathos – and see the world through all three, you’re not complete. If you don’t think about, “How will this make the user feel, and is this the right thing to do?”, you’re not doing your job. Even in my first year of university, doing formal methods of software design: you write a formal spec, in a formal language, it’s an algorithm, whatever it is. There was no concept of how to capture feeling. The idea of feeling wasn’t even a relevant problem to be considered. It didn’t exist.

Like you mentioned, to start factoring this in, we’ll really need to build a model of that, a model of the human. So do you know of any efforts or advances in this area?

Yes, there are many, many, many companies building little parts of the problem, dealing with various aspects of our bodies and behaviors. Think of Google. Google is not an artificial intelligence company. It’s not a search company. Google is a human behavioral manipulation company. That’s what it does. This is not a positive or negative value judgment on them, it’s just a statement of fact. They can do positively wonderful things with it, and they can do evil things with it. In some ways, some propaganda is good, in the sense of propaganda saying green vegetables are good. Behavioral manipulation that causes you to stand up and move around a bit, after you’ve spent two hours sat down, is good.

There are lots of companies working on wearable tech, often under the healthcare label. Like my colleague Lee says, people haven’t yet grasped that the future of mobile is wellbeing. It isn’t instant messaging or Snapchat. There is only one thing we want, which is to feel good, in the right way. If you just put crystal meth in your veins, then yeah, you feel good. It satisfies the pathos but not the ethos. Maybe logical, yes, you’ll feel great, but the ethos is a bit troublesome. That’s why you need to integrate all three.

A lot of the tools we’re building pathos-wise are very weak. It’s like, it feels good, but where is the ethos? Now you start to think about how to model ethical problems. Where is the module you can buy – the open-source module – for the ethos part of the human OS? Different people might have a different idea about what the appropriate ethos is. If the only ethos is going to be defined by corporations who want to strip-mine our identities and make money from them, we are in a lot of trouble. We are in a lot of trouble. The threat from artificial intelligence is not the one I think people have been positing. It’s a hyper-amplifier of the power of the power structures in society, in a way that… if you think inequality is bad now, whoa. The dystopian possibility is extraordinary.

Therefore there is a requirement for people on this call, listening to this, to think about what kind of world you want your children, and grandchildren, and great-grandchildren to live in, and how you can help by working back from them being at the end of their lives and thinking, “That was a good life”. What does it take? It may require some deep, radical restructuring. Not just of technology, but of how we think of society and relationships, and our relationship to the world around us. Being a computer programmer with coding skills… I’m sorry, I’m hoping JavaScript disappears.

From the technological threat [point of view]… The whole ethical component, you can build it in. But is there the threat that once the technology becomes more advanced, more self-aware and self-preserving, that whole ethical module will become just an obstacle to some of the problems it will try to solve?

Let me give you an example: Microsoft with the free upgrades to Windows 10. It got to the point where they reclassified it as a recommended update, part of the standard security updates for your computer. In other words, they forced you to upgrade, and even if you closed the little red cross box when offered the upgrade, it did it anyway.

Where is the overlay piece of software which semantically mediates between me and that Microsoft offer and says, “That is toxic”? So for me, this is a Mac. I used to be on Windows. I will not touch a Microsoft operating system. It’s like, “You’ve burned me like that. Never again.” If I had allowed that to happen, with my five-year-old PC laptop, the drivers wouldn’t have worked. The whole thing would have gone wrong.

It felt like a digital assault, like I had been violated. Other people have felt violated too, but the feedback loop between lots of people experiencing the pathos of anger and collective causative action by them is missing today. The ethical violation and the anger it results in stay separated in the world. We cannot actually say, “Microsoft, you’ve just caused a lot of people to get really upset,” because we can’t measure upset or disgust.

In this case we have Microsoft; we know where to direct the anger, or maybe the Guardian Avatar would protect us. But basically, we would really have to trust the Guardian Avatar not to enter into any contract with Microsoft either, or not to develop its own idea of what’s good for us, which might be not exactly good for us but good for it.

So, think of how a lawyer acts for you. When you go to use some software in the cloud or anywhere else, regard the Guardian Avatar as your automated terms-of-service lawyer. You have all these contracts of adhesion today. This is a political problem as well. The nature of the contracts that we are being forced to enter into with cloud service providers is not one of free will in a free market. Let’s stop pretending that they are. Over time there is competition between Amazon and Google and Facebook or whatever, but at any one moment you have very little choice. I have chosen to cut myself off from Facebook because they have violated my privacy, and I will never do business with them again as a result. It would take a written apology from Mark Zuckerberg for me ever to do business with them again.

Yes, it’s expensive, but nothing changes until you and I are willing to do something different, and refuse to engage with services and systems that won’t work to fix this, with a Guardian Avatar or whatever it is. The right thing to do is say, “Actually, no, you can’t record this call,” and, “Hey, Siri, piss off.”

[Siri on Martin’s iPhone responds: “I’m not sure I understand”. Martin and Misha both laugh.] You can’t script this stuff. Siri has no ethics. Siri doesn’t love you.

I really like your cat-and-dog analogy for computer systems. I think that really brings it home. They just don’t really care about you at the moment.

There are lots of people who care about the future of the web, the internet, blah, blah, blah. These messages: net neutrality, other stuff. Nasty problems, and sometimes the wrong problem or a bad solution. You need to be thinking about the next problem, which is the metaverse: human-computer symbioses, blends of identity and reality, worlds of sensors. The centrality of privacy, the fundamental assault on our ability to act as independent entities and agents in this world, because our decisions will be manipulated or constrained by the corporate control systems around us.

We are potentially entering the most wonderful of eras. We can engineer good relationships and happiness. Instead of correcting our vision we can correct our wonky psyches. Wonderful! We are also entering into one of the most horrific of eras, where it’s North Korea everywhere. Maybe both, at the same time. It’s going to be weird and wacky. You can’t reinvent the web; that was a prototype. Let’s move on.

Okay, Martin. Thank you. Thank you so much. That was quite a deep dive into the future of computing and I think I’m still decompressing from it. Thank you so much. If people want to learn more about that, where do you think they can go?

Firstly, go to my website, martingeddes.com. Sign up for my newsletter and I will send you lots of mysterious things. There is a bit called the “think tank” where I organize some older articles. On SlideShare I have produced tons of stuff over the years, some of which is good.

There’s a new mathematics, a calculus and algebra, around network performance science and engineering called ∆Q. It’s very hard to find, because in Google you cannot type search terms that come from two alphabets; it’s perfectly search-engine de-optimized. I have a reading list for it. If you want it [go to www.qualityattenuation.science]. It’s not hard: my twelve-year-old daughter gets it, but if you’re a Cisco Certified Engineer, it gets a bit harder.
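
For a rough feel of what ∆Q is about before tackling the reading list, here is a toy Monte Carlo illustration of quality attenuation composing across two network segments: delays add and losses combine. This is only a numerical caricature under assumed parameters; the actual calculus treats delay and loss together as improper random variables and is far richer.

```python
import random

def segment(mean_delay_ms: float, jitter_ms: float, loss_prob: float):
    """Sample one packet's fate across a segment: a delay in ms, or None if lost."""
    if random.random() < loss_prob:
        return None
    return random.uniform(mean_delay_ms, mean_delay_ms + jitter_ms)

def end_to_end(samples: int = 100_000) -> None:
    delivered = []
    for _ in range(samples):
        a = segment(5.0, 2.0, 0.001)     # assumed access hop
        b = segment(20.0, 10.0, 0.005)   # assumed core/transit hop
        if a is None or b is None:
            continue                     # quality attenuation shows up as loss...
        delivered.append(a + b)          # ...or as delay, which composes by addition
    delivered.sort()
    p99 = delivered[int(0.99 * len(delivered))]
    loss = 1 - len(delivered) / samples
    print(f"loss ≈ {loss:.4f}, 99th percentile delay ≈ {p99:.1f} ms")

random.seed(1)
end_to_end()
```

The useful habit it illustrates is reasoning about an application’s demand and the network’s supply as distributions that compose, rather than as single average numbers.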

Go to my website, drop me an email, contact me: mail@martingeddes.com, or anythingyoulike at martingeddes.com – it always gets to me. I will send you relevant stuff and I will answer your questions too, if I have time.

Awesome! Thank you so much, Martin. Take care and see you in the bright future!

