Symposium on Blockchain for Robotics and AI Systems

>> Hello? Hello, everybody. Thank you for coming. We are super lucky, because today is actually pretty nice. You cannot imagine what happened to me two days ago; when I saw I was coming, I thought there was going to be a massive problem for you to be here. But fortunately, you are here, and I'm very dry. So that's fantastic. Welcome to the second edition of the Symposium on Blockchain for Robotics. I would like to give you some kind of vision, and I will start with my own story. So one of these guys here was my Ph.D.

advisor. The other is a robot clone. Let me give you some context. I got my master's and Ph.D. in Japan, and one of these guys was my supervisor, Professor Yoshudo. He was very interested in how he could be in two places at the same time; he really wanted to explore this. He was a professor in Japan who was hired to give lectures in Tokyo and all over the place, and he realized he was spending a lot of time traveling to places he really didn't want to be.

So he realized: okay, what happens if somebody could represent me in that place? What happens if I could do this by proxy? He decided to build a very realistic robot and send it to the places he didn't want to be. This makes more sense when you see this. What you're seeing here is basically the two of them together. You can think of the robot as a very, very expensive Skype client.

Now, the robot is sent to the place where he doesn't want to be, and he just operates it through cameras and sensors. Basically, you see a very realistic image of somebody who talks like that person, moves like that person, and reacts like that person, but it's not that person, because that person is not there. So in a certain sense, he wanted to transfer that person's presence. Let me tell you a story about that. Of course, he didn't want to be in the places the robot eventually went, but somebody had to move the robot, right? That was me. [laughs] It was not only Japan but all over the world, and I had to travel with a human torso in my luggage. [laughs] So every security guard knows my face. [laughs] The problem with this vision of the future of robotics is that we will have one very complex robot that represents us, a robot that is expensive and hard to repair, et cetera. For me, when I was doing my Ph.D.

I realized that was not my vision for the future of robotics. Mine was more decentralized. I thought that by making robots very simple and easy to repair, but putting them into big groups, you could achieve complex tasks. At the same time, you get a lot of nice properties that one very complex robot doesn't have. What you're seeing here is a very easy example of a swarm of robots.

They're doing something called foraging. Basically, the robots self-organize in order to find these tokens, laid out like in a football field. The tokens can represent resources, or data, or people. And what these robots do, in a very decentralized way, with no boss and no central command and control, is find these tokens and put them into the nest, which is the center of the field. The nest simulates a human society, for example. Once a robot deposits a token, the robot gets a recharge of its battery: a reward. The robots self-organize in a decentralized way in order to keep doing this for a long period of time, to achieve sustainable behavior. Now that you see this, you might think: okay, why is this useful? Why do we care about this? You can start seeing the use of these systems. The interesting thing about them is that since they are decentralized and there's no single point of failure, I can break one robot, or two robots, or three robots, and the others will keep working.

This system is robust and fault-tolerant by design, which is very interesting for the kind of new public infrastructure that we're trying to envision with robotics. But there's another problem here. The world of robotics is very polarized. There are people doing research on the theory behind the emergent properties these systems have, and they're focused on that; but there are also other people who say: no, these systems are going to deliver packages in five years, and we'll have swarms of self-driving cars in cities. I realized these two communities are very far apart, and nobody is trying to bridge these two big visions of these worlds.

This is basically because there are many things we haven't tackled in order to make these systems, which have good capabilities, work and be available. Some of the problems: we don't have any security standards for these systems. We realize they have good properties, but what happens if at some point in time some of the robots get hacked or start to misbehave? What will happen to these systems? Will they remain robust? There's also no good way to understand how these big systems, especially large swarms, can reach agreement and consensus on certain things. We don't have the research base for that, the boots on the ground, for these systems.

And more particularly, we don't have new business models for these systems. It's very hard to bridge the gap between academia and industry. One of the things I realized about why we have these problems is that we don't have good interfaces to these systems. In academia we have a lot of research on human-robot interaction with one human and one robot, but we don't have good interfaces for interaction between humans and groups of robots. As you scale these systems up, they get more complex, so they are very difficult to audit and very difficult to interact with.

So we started to see this as a need. For example, in an article I found a couple of weeks ago, researchers pointed out that if we have self-driving cars in New York, and we know which cars are in Times Square, we can actually block part of Manhattan. You might think hacking a self-driving car is complex, but it's not; it's extremely simple. There have been demonstrations of cars being hacked while people were driving on the highway.

What this article projects, a little bit, is that we're trying to create new things but we're not covering the holes this is opening. Do you remember ransomware, three or four years ago? It encrypts your computer or hard drive: if you don't pay me $1,000 in Bitcoin, you're not going to get your hard drive back. Well, that was cute, but imagine you're driving down the highway and at some point a pop-up comes up on the dashboard: I'm not going to brake unless you pay me $1,000 in Bitcoin. So we tried to tackle these problems. This work shows for the first time how a group of vehicles can self-police, or monitor each other. What you're seeing here is something very simple. These robots, which are very, very simple, go around this checkerboard, sense the color of the tiles, and then come to a consensus about the majority color. Something super simple. One robot goes to a part of the checkerboard and senses there are 30% black tiles and 70% white tiles.

Once you bump into another robot, you exchange opinions, and a global consensus emerges; at the end, we all agree that black is the majority color, or white is the majority color. Simple stuff. So we started to simulate what would happen if you introduce Byzantine robots: robots hacked to break the consensus, robots that start to lie. And we compared the classical approach to consensus with a blockchain-based approach. The difference comes from the fact that with the classical approach, you just pass messages around the swarm, and the robots basically believe whatever the other robots are saying.
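The failure mode described here, honest robots naively averaging in whatever a neighbor reports, can be sketched in a few lines of Python. The numbers (a 70% white board, 0.0 reports from liars) and the round-robin meeting schedule are illustrative assumptions, not the controller from the actual experiments:

```python
def run_swarm(n_honest, n_byzantine, rounds=10):
    """Toy version of the checkerboard experiment: honest robots hold an
    estimate of the fraction of white tiles (ground truth 0.7); Byzantine
    robots always report 0.0 ("it's all black"). On every meeting, an
    honest robot naively averages its estimate with the other's report."""
    estimates = [0.7] * n_honest                  # honest robots' local samples
    for _ in range(rounds):
        for i in range(n_honest):                 # each honest robot meets everyone
            for j in range(n_honest + n_byzantine):
                if i == j:
                    continue
                report = estimates[j] if j < n_honest else 0.0
                estimates[i] = (estimates[i] + report) / 2
    # the swarm succeeds if every honest robot still believes white wins
    return all(e > 0.5 for e in estimates)
```

With no bad bots, the honest estimates stay at the true 0.7; with ten liars against five honest robots, every estimate is dragged below 0.5, which is the collapse the success-rate graph shows.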

With the blockchain approach, you have a record. You get all these votes into a blockchain that is recorded in every single robot, and every time you bump into each other you synchronize it. In this graph you have two axes. The X axis is the number of Byzantine robots, the number of bad bots, and the Y axis is the success rate: the number of times the swarm reaches consensus on the right color. As you start to include more bad bots, the success rate drops dramatically. This means that if you are in New York with, say, 300 self-driving cars, and then someone hacks, let's say, ten of them, the probability that you still have a good system, based on pure peer-to-peer communication, drops dramatically. Peer-to-peer gives you really good things, but it also leaves a lot of open problems. If you use the blockchain approach, and you store these votes as transactions among the robots, you start to find inconsistencies in the system. You can discover, for example, that I first told you the majority color is white and then told you the majority color is black: I have run into an inconsistency. We all have the same controller, but I'm starting to change my opinion in a very weird way. So you can assign me a reputation, weed me out of the system, and continue doing whatever you're doing.
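A minimal sketch of that reputation idea, assuming a shared ledger of (robot, vote) records; the ledger format and the 1 / (1 + flips) reputation formula are illustrative choices, not the scheme from the paper:

```python
from collections import defaultdict

def reputations(ledger):
    """ledger: list of (robot_id, vote) records replicated on every robot.
    A robot that keeps flipping its vote is inconsistent, so its
    reputation drops; consistent robots keep reputation 1.0."""
    votes = defaultdict(list)
    for robot, vote in ledger:
        votes[robot].append(vote)
    rep = {}
    for robot, vs in votes.items():
        flips = sum(1 for a, b in zip(vs, vs[1:]) if a != b)
        rep[robot] = 1.0 / (1 + flips)
    return rep

def weighted_majority(ledger):
    """Tally each robot's latest vote, weighted by its reputation, so
    flip-flopping robots are effectively weeded out of the consensus."""
    rep = reputations(ledger)
    votes = defaultdict(list)
    for robot, vote in ledger:
        votes[robot].append(vote)
    tally = defaultdict(float)
    for robot, vs in votes.items():
        tally[vs[-1]] += rep[robot]
    return max(tally, key=tally.get)
```

In the example below, two flip-flopping robots both end on "black", so a raw count would pick black; the reputation weighting still recovers "white" from the one consistent robot.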

What would happen if we add reputations to the robots, based on the information they provide? That's the idea. With that, we can continue the self-sustaining behavior of the system. But this has a problem, also. This seems like a very simple case, but what would happen if we tried more complex actions, sequential actions, where the robots have to assemble things in order? Normally, in these kinds of missions, in order to maintain these nice properties, we need to distribute, for example, the blueprint of what the robots need to do.

If they need to build a bridge, for example, they need to understand: piece one goes here, piece two goes here, piece three goes here. If I break down, because you have the same plan, you can continue the plan. But this also has a problem. If we all have the plan, because we need to maintain these capabilities, the fact that there are a lot of us with a lot of replicated copies of the plan also opens new holes. If I'm an attacker and I want to hurt the system, I just need to capture one robot, read the plan, and act accordingly. So we asked, by exploring the blockchain space: do we have tools in this space to give the robots a blueprint without actually giving them the data? It turns out, yes, we can. Many of you might know the concept of a Merkle tree. It's a binary tree where, instead of having the data in the nodes, you have hashes. The data is hashed at the lowest level, and then you hash the hashes together, pair by pair, again and again, until you get to the root.
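A minimal sketch of that construction in Python, using SHA-256 and duplicating the last node on odd-sized levels (one common convention, not necessarily the one used in this work):

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 digest of a byte string."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Build a Merkle root over a list of byte strings: hash each leaf,
    then repeatedly hash adjacent pairs until one digest remains."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Changing any single leaf changes the root, which is why handing out only the root (or the hashes) commits the robots to the plan without revealing it.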

What we did in this research is substitute the normal blockchain transactions, "A sends B one Bitcoin," with robot actions that belong to our plan: stack piece number one, stack piece number two, stack piece number three. Then we hash this information up to the root, and we give that root to the robots. The way this works is that an operator designs the whole plan in advance: okay, in order to build this bridge I need action one with piece number one, action two with piece number two, and action three with piece number three. You hash all this information and then give this tree to the robots. The robots, with that tree, do not know what they have to do, because everything is hashed. But they know that if at some point in time they find the correct combination of robot action and robot sensor input, they should act. Say they're in front of piece number one, and they ask: what should I do with piece number one? Should I stack it? Move it? If they find the right combination, and it ends up being the hash of the first leaf, or the second leaf, or the third leaf, they know they have to do that, even though they don't know what the other actions really mean.
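The lookup the robots perform can be sketched like this; the "sensor|action" encoding and the plan contents are illustrative assumptions:

```python
import hashlib

def h(x: str) -> str:
    return hashlib.sha256(x.encode()).hexdigest()

# The operator hashes each (sensor input, action) step of the plan and
# hands the robots only the hashes; the plan itself stays secret.
plan = [("piece-1", "stack"), ("piece-2", "stack"), ("piece-3", "move")]
leaf_hashes = {h(sensor + "|" + action) for sensor, action in plan}

def should_execute(sensor_input, candidate_action):
    """A robot tries a candidate action for what it currently senses; it
    executes only if the combination hashes to a known leaf. A captured
    robot holds only hashes, so the plan cannot be read out of it."""
    return h(sensor_input + "|" + candidate_action) in leaf_hashes
```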

So we tried this with several missions, and it could be projected onto many other things, but here is one example. We encoded one of these trees for a maze: there are obstacles, empty spaces, an entrance, and an exit. We encoded this tree, gave it to the robots, and said: as you wander around, if at some point in time you find a place that belongs to the tree, just stop there. So the robots do not know what they have to do, and they cannot infer any details about the plan.

But they know that once they find a good action, they just stop. So in the end, through a lot of wandering around, the robots are able to solve the maze. The interesting thing about this is that if I now capture any of the robots and ask, "What do you know about the plan?", they know nothing. They know this hash, and that this hash is correct and part of the plan, so they stopped there. You cannot infer where the entrance or exit might be, which is very interesting for security reasons. So we tried to project this further and asked: instead of a very simple maze, can we do large-scale missions? What would happen if we encoded the Millennium Falcon in one of these trees? We conducted this research and found that it is within reach of the technology.

Encoding the Millennium Falcon only requires about 230 kilobytes of memory, plus the communication between the robots. So we can achieve that. But now I would like to give a final touch to this. The last thing we talked about is the fact that there are no new business models for these systems. What I'm going to present here is something that will be presented in more detail in the paper session, but I hope it makes you think a little bit. What you're seeing here is a robot, Kakachu.

It was made to assemble robot parts, but here it's painting pictures. It basically chooses a Japanese kanji from the internet and replicates it with a brush. The interesting thing about Kakachu is that once the robot starts painting, it starts an auction. Bidders on the internet can bid what they want to pay for that picture. Once the auction is over, there's a winner for the picture. And the winner, instead of giving the money to the owner of the robot, sends the money to the robot's own account. So the robot gets the funds from that process. The important thing is that those funds are used by the robot, autonomously, to buy everything it needs for the next picture. The robot can buy more canvases, more paint, the electricity, the internet bill, to paint the next picture. With this, we're trying to understand what would happen if these robots could sustain themselves economically. So this is something that I thought was interesting.
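The economic loop can be sketched as a toy model; the class name, the prices, and the greedy shopping rule are all illustrative, not Kakachu's actual implementation:

```python
class RobotWallet:
    """Toy model of the robot-artist loop: auction proceeds go to the
    robot's own account, and the robot spends them on its next supplies."""

    def __init__(self):
        self.balance = 0.0

    def receive_auction(self, winning_bid):
        """The auction winner pays the robot's account, not its owner."""
        self.balance += winning_bid

    def buy_supplies(self, prices):
        """Buy every item the robot can afford for the next picture, in
        listed order; return the shopping list actually purchased."""
        bought = []
        for item, cost in prices.items():
            if self.balance >= cost:
                self.balance -= cost
                bought.append(item)
        return bought
```

If a picture sells for 100 and the supplies cost more than that in total, the robot simply skips what it cannot afford and paints with what it has, which is the self-sustaining behavior the talk is pointing at.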

So, to finish: what we are seeing here is the synergy of different worlds that were set apart, worlds that together can now create something bigger than each of them alone. Of course, we're here because we are interested in the world of robotics and the world of AI. The robots are here and coming. In my opinion, this is not us against the robots; it's us with the robots. And we need to find ways to coexist and benefit from this coalition. But, of course, we cannot just leave them unattended. We cannot give them autonomy for the sake of autonomy; it comes with a price. We need a new interface for that.

Twenty years ago we didn't have certain tools, but now we have them, and with the combination of these two worlds we can do something very powerful. But we also need to put it somewhere; we cannot just leave it anywhere. It needs to be placed in a societal framework in order to do things in a good way, and to make societies greener and more efficient. Of course, we're in the Media Lab today, so we care about how to deploy these systems and move from academia outwards. So with this, I am done. Thank you very much. [Applause]
I don't know if we are running late on the first presentation, but I would like to introduce Professor Sandy Pentland. He's one of the founders of this place and very, very interested in data, as you will see. And without further delay, I give the stage to him. >> Thanks. Thank you. Glad to see you're all here. This is a good turnout.

Blockchain robotics, and how we got here: I say "my group and me" because I'm not known for robotics, so I thought I'd explain why we're doing this now, where we think it will go, and how it brings people together in an interesting way. What I am known for is wearable computing. Back in the early '90s we did some of the first wearable computing, decorating humans with computers and sensors and stuff like that. It was great fun, and it produced a lot of very weird-looking people. A lot of people said, "I'll never wear that," and there were fashion schools that came up with stuff like this. In the early '90s they were wearing things that looked like iPhones, before there was wireless and such. The guy wearing the display there actually went on to do Google Glass. But there's a symbiosis between computers and humans. I worked for Nissan and designed the framework for their autonomous vehicle. The goal there was to enable cooperation between people and the machine.

And one of the real challenges of autonomous vehicles is that you're going to be in an environment where the other cars have no autonomy; they're just people. And the other cars are made by other manufacturers. So you have this requirement to be cooperative without necessarily being able to talk to them in a deep code-to-code sort of way. I think that's the type of system we'll see more and more of.

We have these wearable elements, not necessarily robots the way we usually think about them, but we have to cooperate with them; it's a mixture of people and machines cooperating to get something done, and you want to design the system as a whole.
One of the main things we learned from doing this was that
it's not about the robots. It's not about the wearable
computers. It's really about the communication between them.

It's actually not that difficult to build a lot of these things, at least to first order. But it's very difficult to get them to coordinate and cooperate with each other. And this has been noticed before with respect to people. This is a little quote from Adam Smith in the late 1700s. Everybody knows what the invisible hand is, right? The invisible hand is a way for people and institutions to cooperate with each other without conscious planning, the way Eddie was just talking about. In today's society we tend to think of this as a market property, something that comes from the market, but that's not what Adam Smith said. Adam Smith said it was peer-to-peer communication, local communication, and local negotiation that determined the balance of services and the norms for cooperation. So not a global thing: a local emergent property. That's really interesting, because for a lot of reasons local emergent properties are more robust; they're less susceptible to corruption and attack of various sorts. Oh, and actually Karl Marx said the same thing. These two guys, this may be the only time they agreed, but there they are.
So what we study is systems of local communication, where agents continually negotiate policies of action to achieve a desired overall system performance.

And a good example of this is network bandits, or distributed bandit problems. Bandits are little autonomous elements, actors, that have a number of different policies, a number of different options, that they can choose from. They don't know the rewards associated with each of the options, so they experiment to find the best way to get along in their environment. Okay? Sort of like people. In biology this is actually called foraging behavior: you see animals experimenting to find better food sources and things like that. This is a mathematical model of that. In distributed systems you have the ability to observe and communicate with others, and that's good; it's a very powerful technique. You can imagine early humans: if I see you eat the blueberries and then get sick, I'm not going to eat the blueberries. Almost no cost to me; just distributed learning. There are different ways to do this, but most of those are framed as a single user in an environment.
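A minimal sketch of a shared-observation bandit, assuming epsilon-greedy exploration and a pooled running-mean estimate per arm (illustrative choices, not the algorithms discussed in the talk):

```python
import random

def distributed_bandit(true_means, n_agents=5, steps=300, eps=0.1, seed=1):
    """Epsilon-greedy bandits that pool observations: every pull by any
    agent updates a shared estimate of each arm's mean reward, which is
    the 'watch your neighbour eat the blueberries' shortcut."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms
    est = [0.0] * n_arms
    for _ in range(steps):
        for _ in range(n_agents):
            if rng.random() < eps or 0 in counts:
                arm = rng.randrange(n_arms)                    # explore
            else:
                arm = max(range(n_arms), key=est.__getitem__)  # exploit
            reward = true_means[arm] + rng.gauss(0, 0.1)       # noisy payoff
            counts[arm] += 1
            est[arm] += (reward - est[arm]) / counts[arm]      # shared running mean
    return max(range(n_arms), key=est.__getitem__)
```

Because every agent's observation feeds the same estimates, the group identifies the best option with far fewer pulls per agent than any single learner would need.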

There has not been nearly as much research on the distributed setting, although clearly there is some. We're interested in that problem because it maps onto the sorts of problems Eddie was just talking about: you have bunches of agents, whether people or machines, and they have to learn from each other to coordinate actions that have the best utility for all of them.
Sounds pretty good. One of the things we're focusing on: a problem with machine learning and a lot of estimation today is that they implicitly assume concentrated distributions, noise models like the normal distribution. But in these distributed systems you actually get cascades. You see me do something, you begin to copy it, he begins to copy it, five other people begin to copy it, and you get this cascade of behavior. Current techniques typically don't work very well with that at all.

They go haywire in all sorts of ways. We're focusing on how you can build systems that are robust to this and can actually learn from these sorts of signals. Byzantine agents: Eddie talked about that a little bit. What happens when the agents are trying to mislead you? This comes in different flavors. It may not be intentional misleading; it may be that they have a very different purpose than you do, so they take an action and report that it is a very good action, whereas you would think it's a very bad action. How do you detect this and compensate for it? This is certainly key to these sorts of problems. Privacy: certainly, with people, you can understand how you don't want your personal data leaking out everywhere, but the same is probably true in many situations with robots, particularly if robots are agents of people. How do you actually do this communication in a way that provably preserves people's privacy? And unreliability, which you can look at as a form of Byzantine behavior.

You get screwy stuff happening sometimes; you have to be robust. We do work in this area. I'm not going to talk a whole lot about it, except to say that we have some very strong results: for each of these cases, we find mathematical schemes that are completely decentralized, robust, and differentially private. You wouldn't have thought you could do this. The key idea is that in these communication channels between agents, you're trying to model the distributions observed by the agents, and you hold out outliers for later consideration. You say: whoa, that looks weird; I'm going to hold that back, and when I get more of them, I can decide whether the guy is trying to trick me, whether this is a cascade, or something else.

So there are nice mathematical ways to do this. This is very new stuff. Avi is a Ph.D. student who is rocking on this; if you're interested in the math, I point you to his work.
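That hold-out idea can be sketched with a simple z-score filter; the threshold and the quarantine rule are illustrative stand-ins for the actual mathematical schemes:

```python
def robust_update(history, new_value, threshold=3.0):
    """Hold-out filter: accept a reported value only if it is within
    `threshold` standard deviations of what we've seen so far; otherwise
    park it in a quarantine list for later inspection, where we can
    decide whether it was trickery, a cascade, or something else."""
    if len(history) < 2:
        return history + [new_value], []
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / len(history)
    std = var ** 0.5 or 1e-9               # avoid division by zero
    if abs(new_value - mean) / std <= threshold:
        return history + [new_value], []
    return history, [new_value]            # held out, not mixed into the model
```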
Hopefully that gives you a framing of the types of things we do; I'm happy to talk more about it. The other thing we do in my group is build software to support these sorts of things: blockchain systems that have off-chain data, and methods for doing communication, auditing, and machine learning on top of them. And we've been very successful. This is me wearing a tie, something you never see: the president of the European Union invited me to talk about how they should be handling data for privacy and things like that. That's basically the story you now know: blockchain, off-chain data, and analytics on top of that.
In these coordination systems, you don't want to share the raw data. The moment you share the raw data, even if it's anonymized, you have doomed yourself. What you can do instead is share answers about your private data with other people. For instance, for differentially private distributed bandit problems, it turns out that sharing the mean payoff of buckets of actions, different types of actions, rather than the specific payoffs, yields a differentially private scheme. And it's pretty good, because you can still get optimal convergence on that sort of thing.
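A sketch of that mechanism: only a noisy mean leaves the bucket, with Laplace noise scaled to the mean's sensitivity. This is the textbook Laplace mechanism with illustrative parameters, not the specific scheme from the talk:

```python
import random

def dp_mean(payoffs, epsilon=1.0, payoff_range=1.0, seed=0):
    """Share only a Laplace-noised mean of a bucket of payoffs, never the
    raw values. Changing one record moves the mean by at most
    payoff_range / n, so Laplace noise with scale (payoff_range / n) /
    epsilon gives epsilon-differential privacy for the bucket."""
    rng = random.Random(seed)
    n = len(payoffs)
    true_mean = sum(payoffs) / n
    scale = (payoff_range / n) / epsilon
    # a Laplace draw: random sign times an exponential magnitude
    noise = rng.choice((-1, 1)) * rng.expovariate(1.0 / scale)
    return true_mean + noise
```

Note the scale shrinks with the bucket size n, which is why large buckets let you be accurate and private at the same time.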

The second thing: you have to log things on a chain so you can go back and remember things accurately, and so other people can query you if there's a problem. You can essentially debug: if you can show that there's a problem, you can go up a level, get higher-level permissions, and go back and look at the data you need to figure out what's going on.
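The logging idea can be sketched as a hash-chained, append-only log; this is a single-machine toy, not a blockchain, but it shows why tampering with history is detectable on audit:

```python
import hashlib

class AuditLog:
    """Append-only log where each entry commits to the previous one via
    its hash, so rewriting history breaks the chain."""

    def __init__(self):
        self.entries = []                  # list of (record, chained_hash)

    def append(self, record: str):
        prev = self.entries[-1][1] if self.entries else "genesis"
        digest = hashlib.sha256((prev + record).encode()).hexdigest()
        self.entries.append((record, digest))

    def verify(self) -> bool:
        """Recompute the chain; any edited record makes this fail."""
        prev = "genesis"
        for record, digest in self.entries:
            if hashlib.sha256((prev + record).encode()).hexdigest() != digest:
                return False
            prev = digest
        return True
```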
So we build stuff like that. I'm not going to spend a lot of time on it. We're building systems for Senegal and Colombia that do this sort of thing and give you this auditing ability. It's pretty amazing.

The bottom line is that there are two threads we work on, and we'll be happy to collaborate with people; you'll hear a little bit more about this. One thread is building this blockchain system for humans and human institutions, which nowadays includes robots and autonomous actors of all sorts. How do you actually guarantee that your thermostat and devices like it are used in the correct way and not hacked, while maintaining your privacy? Those are the sorts of questions. They're not interesting robots, but they're very interesting IT and control problems, particularly when you talk about distributed things like autonomous vehicles. Hopefully we'll find ways to work together; I just wanted to welcome you and tell you why we're interested here.
Okay? Thank you. [Applause] >> [Inaudible question] >> The meeting you do every year? The annual meeting. >> Yes, the annual meeting. We have one coming up in January, and we do one here that's sort of a big open one, and then we also have a meeting that is much more technical, which is for our sponsors and collaborators.

That includes, at this point, seven nations and half a dozen large corporations, like Ernst & Young, Intuit, IBM, et cetera. So we have two of those a year. If you're interested, sure, why not? [laughs] Okay.
Yeah. >> SPEAKER: Yeah, so one of the big concerns that everybody has is the dominance of the tech world by a few players. And the whole result of AI and all that is about data. So it really boils down to: who owns and controls the data? Today, we have sort of come from this wild west, which has allowed certain organizations to become very, very large.

And it has all sorts of dangers. I actually don't think Google or Facebook are going to come and kill my children. They might steal their wallets, but they're not going to kill them. On the other hand, governments have access to this too, and they might. [laughs] They've historically not behaved themselves. The best thing at the moment that I'm pushing is called data cooperatives. Under the laws of the US and most of the EU you have these cooperative organizations, which are owned by their members and are democratic. There are some in Switzerland, and there's a whole variety of institutions here, typically called credit unions, that are chartered by the government to manage the money of their members, of course, but also, sort of by accident of the regulation, their data. So you can have a group of people who have access to a copy of their own data and control it. Just to give you a sense of the power of that: currently, a lot of websites have these terms and conditions.

They say you can't use it for this, you can't do that; well, having somebody as a legal representative of you overrides those. Imagine you're in a hospital and you're in a coma. Your lawyer could see your Facebook page regardless of what Facebook says, and keep a copy of all that data. Absolutely; no question. So can one of these cooperatives. Legally, it is you. So that's thing one. Thing two: here in Boston, or any place, we have all these hospitals. They do treatments on you.
They give you drugs. Nobody, including them, knows if
they're any good. So when drugs get approved,
everybody stops looking, because they might see something bad.
Okay? So we take these pills and God
knows what happens. We don't know if they interact.
There's all sorts of stuff we don't know.
Because nobody wants to look. And they hide behind privacy law. They say, "We can't share that with you." But if you had a cooperative, you would have a right to your medical record. If we had 50,000 people with their medical records here in Boston, those 50,000 people could analyze their medical records. Not giving them up, still controlling them, but agreeing as a cooperative to ask: what's the efficacy of this drug? What are the interactions? So the people could know.

Once the people could know, it will happen. This is a political action, but also a vehicle of political power: a sort of knowledge-based activity. The key is that you have to have collectives of people. Your data or my data alone is not very valuable; you can't really get much insight out of it. But if we have 50,000 people in a town, we can tell if the government is any good.

We can tell if the hospitals are any good. We can tell if the bank is behaving. You name it; we can go right down the list. This is a little bit like the labor union battles of a century ago. A century ago you had big corporations that owned everything, the robber barons as they were called in this country, and they were exploiting their workers because there were no other options. You worked under their rules or you didn't work. The workers banded together in cooperatives, called labor unions today, and as a cooperative they were able to point out unfair practices and change companies and eventually government. Interestingly, the companies changed before the government changed. And I think the same thing probably needs to happen with data. But it's through this collective action. Sir? >> SPEAKER: How are we seeing adoption of this cooperative model today? >> SPEAKER: It's an ongoing discussion. Most companies that are not Google or Facebook or Amazon are really interested in this, because they're feeling completely cut out. Citizens, of course, feel cut out. Governments are worried because most of the data they need to provide citizen services is not data that they own. So if you look at the sustainable development goals of the UN, most of those goals require having data from private entities like banks.
Typically, that's not the situation in the world today. Some countries have made laws about this, but they don't have regular practice around it. That's one of the battles going on. And interestingly, companies are willing to give up their data, enough to make government better: give it to government to do better management, but not all of the data. They don't want to give up personal data, just aggregate data, and that's actually enough for a government. It's like census data. Companies are willing to contribute to a rich census. If you go to one of our sites, you can see what you can do with it. It's really surprising, without any individual-level data. Okay?
So who is next? [Applause]
I think we're going to move to the next speaker. In this case, I would like to introduce Professor Marco Dorigo. Marco is one of the founders of the swarm intelligence field. He directs the IRIDIA lab in Brussels, which is one of the foremost labs for swarm robotics. Without further delay.
Thank you, Eduardo. Very nice to be here. Last time I was here was many years ago; I'm not a frequent visitor. I'm not an expert in anything but robotics.

I will first present what we do with robotics, and then what we do with blockchain to make robot swarms more secure. Not working? Next slide. So I think you will agree with me that in the future there will be more and more robots. There are already drones. In the future there might be nano robots, or possibly autonomous vehicles in wide use. Next slide. Yeah.
Okay. Let me tell you something about swarm robotics. A swarm is a large number of autonomous robots that communicate in a peer-to-peer way through local interactions, between themselves and with the environment, and self-organize to solve problems or perform tasks. All this happens in the absence of centralized control. Next slide. So in swarm robotics we set out to design such systems: there is some collective behavior, and there are local interactions between the robots and with the environment, without any centralized control. Next slide.
Okay. So the problem I have been interested in for the last 20 years, approximately, is how to control the swarm so that the robots cooperate to perform a task.

The control should be scalable, so you don't need to reprogram the swarm when you want more work done or less work done, and the swarm should be tolerant to manufacturing faults or malfunctions. Next slide. Thank you.
Okay. So in most cases we use self-organization, no centralized control, and this is very good because it gives us the scalability and fault tolerance I mentioned. However, there's a problem, next slide: our goal is to program the swarm, but we can only program the single robots. How do we program the single robots so we can get the swarm to do what we want? The way we do this is by taking an approach where we design and implement behaviors for the single robots. And then we test the behavior of
the swarm in simulation. We repeat this cycle until we're
happy with the results of our swarm,
the way the swarm performs. And then we move to tests with the real robots, and we iterate again until we're happy with the final result. The reason to take this approach is that, while our goal is to program the swarm, working directly with the real robots takes a lot of time, can break the robots, and can cause many types of problems, safety problems also. So it's faster and more efficient to work in simulation; however, once you get good results in simulation, most probably they will not carry over directly to the real robots, for a lot of different reasons, so you need to add the second cycle.
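The two-loop design cycle just described can be sketched in Python. All function names here are placeholders for the experimenter's own tools, not anything shown in the talk; only the shape of the iteration is taken from the description above:

```python
def design_swarm_behavior(simulate, test_on_robots, acceptable):
    """Iterate a candidate behavior in simulation first, then on real robots.

    `simulate`, `test_on_robots` and `acceptable` are placeholders for the
    experimenter's own tooling; this only captures the two-loop cycle.
    """
    behavior = {"version": 0}
    # Inner loop: cheap, fast, safe iteration in simulation.
    while not acceptable(simulate(behavior)):
        behavior["version"] += 1
    # Outer loop: simulation results rarely carry over directly, so the
    # behavior is refined again against the real robots before acceptance.
    while not acceptable(test_on_robots(behavior)):
        behavior["version"] += 1
    return behavior
```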
Next slide.

There are many different collective behaviors. There are aggregation behaviors, how the robots aggregate and form structures together, and so on. There are navigation behaviors, and there are collective decision-making behaviors. The one we have been focusing on in our blockchain-for-robot-swarms research is collective decision-making. Next slide. So I would say that the way we program our robots takes this behavior-based approach, and most of the time what we do is program the robots using simple rules, very often inspired by behaviors that you see in
insects or other social animals. Just to give you a couple of examples, we have been working on self-organized search and retrieval, where a certain number of robots, which you see there, self-organize to search the space for an object. They manage to take on, over time, different roles in a self-organized way, up to the moment in which a chain of robots reaches the object that has to be retrieved; the other robots then use the chain to find the object, grasp it, retrieve it, and return it to the location, the blue object on the far right.

In a similar way, we've done experiments with search and
retrieve where we have three types of robots that can move in
the environment and they search for an object, and
self-organize and retrieve it. Now, I want to show you briefly
a video of these experiments so you can get a better feeling of what
we're doing. Swarmanoid is a heterogeneous robotic swarm made up of three types of robot. The handbot is designed to manipulate objects. The handbot can also climb but needs help from other robots to move around. The footbot is a wheeled robot with a gripper. Using its gripper, a footbot can form physical connections with other footbots or with the handbot. An eyebot can fly and explore large areas. It can attach to the ceiling and provide environmental information to the other robots. In this film, the Swarmanoid is deployed to find and then
retrieve a book. Here, the Swarmanoid has already
partially explored its environment. As the eyebots search,
successive eyebots attach to the ceiling,
forming a connected network.

Once an eyebot has found the
book, the knowledge propagates back to the deployment area. The handbot then requests
transport assistance from the footbots. Using the eyebot network, the
footbots form a ground-based chain,
linking the deployment area to the book. The footbot/handbot entity then follows this ground-based chain. A second handbot prepares for
transport. The first handbot/footbot entity
has rotated and aligned with the
bookshelf. While climbing, the handbot
supports its weight with a cord attached to the ceiling.

The handbot has control over its
angle of rotation around the virtual
axis. Swarmanoid is a parallel
distributed system. Parallel activity and redundancy
increase its robustness and flexibility. The second footbot/handbot could retrieve another book or act as
a backup should the first one fail. In this film, the
Swarmanoid retrieves a single book. However, the true Swarmanoid
concept would manifest itself in
parallel scenarios and unstructured environments. Future Swarmanoids might be able
to replace human work in hazardous environments, perform
search and rescue missions, or even conduct extra
planetary exploration. This gives you an idea of what we do with the swarms. As you can imagine, everything is self-organized. What you were seeing there, one of the motivations for this robotics research, is a little bit wishful thinking; it is true in principle, but when robots break down, they create problems for the others. They misbehave and create problems for other robots. We need to find ways to increase the autonomy of the system. Now, for the main subject of my presentation: in the next slide I show you another short video which explains how we do collective decision-making with a swarm of robots.

Then I will move to the
blockchain. In our research, we study
collective decisions in swarms of simple
robots. We take inspiration from the house-hunting behavior
of honeybee swarms. When house-hunting, honeybees
choose their new nest location in a self-organized manner. The collective choice they make is the result of simple interactions between the members of the swarm. In our artificial swarms, collective decisions are also the result of self-organized interactions between individuals. The Kilobot is a small robot with limited capabilities. It can move in a straight line or rotate about its center.

It has only one sensor, with which it can measure the brightness of the ambient light. It can also exchange messages with neighboring robots and, when receiving a message, estimate the distance to the sender. We consider a site-selection problem in a swarm of 100 Kilobots. Robots are initially located in the nest, the area where robots exchange site preferences and take individual decisions. From the nest, robots can move either to the red or the blue site. The goal of the swarm is to find consensus on the best site, in our case, the red site. The quality of a site is an abstract numeric value. We use infrared beacons placed under the arena surface. A swarm has made a decision when, as a result of a decision-making strategy, a large majority of robots have the same preference. We control the Kilobots with a finite state machine that implements our decision-making strategy.

In the dissemination state, the robot is in the nest, and its primary goal is to promote its current site preference. To do so, the robot repeatedly broadcasts its preference. Before moving to the exploration state, the robot collects the preferences of its neighbors. It then applies the majority rule to update its preference, which determines the site it will explore next. In the exploration state, the robot travels towards the chosen site. Once there, it randomly explores the area in order to estimate the site quality. Eventually, the robot returns to the nest and re-enters the dissemination state. In a way similar to honeybees, the effort each robot puts into promoting a particular site is proportional to the quality of that site.
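One round of the dissemination-then-exploration strategy just described can be sketched as follows. This is a plain-Python illustration, not the actual Kilobot controller; the data structures, the noise model, and the quality scale are invented for the example:

```python
import random

def dissemination_then_explore(robot, neighbors, site_quality, rng=random):
    """One round of the honeybee-inspired strategy described in the talk.

    `robot` is a dict with a 'preference' key ('red' or 'blue');
    `neighbors` is a list of such dicts; `site_quality` maps a site name
    to an estimated quality in [0, 1]. All names are illustrative.
    """
    # Dissemination: promote the current preference for a time
    # proportional to its estimated quality (the positive feedback).
    dissemination_time = site_quality[robot["preference"]]

    # Collect neighbor preferences, then apply the majority rule
    # (own vote included) to pick the site to explore next.
    votes = [n["preference"] for n in neighbors] + [robot["preference"]]
    robot["preference"] = max(set(votes), key=votes.count)

    # Exploration: re-estimate the chosen site's quality (noisy reading).
    noisy = site_quality[robot["preference"]] + rng.uniform(-0.05, 0.05)
    site_quality[robot["preference"]] = min(1.0, max(0.0, noisy))
    return dissemination_time
```

Because better sites are promoted for longer, their preferences spread faster through the nest, which is the positive feedback the speaker mentions.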

Specifically, a robot promotes its preferred site for a time that is proportional to its current estimate of the site quality. This modulation introduces a positive feedback which, over time, lets the swarm overcome the limitations of the individual robot. This system is robust to robots that break down, but what happens if some of the robots start to send wrong messages? This is what prompted us to start studying malicious robots in the context of this type of problem. That is what Eduardo presented in the first presentation this morning. So, going back to the problem I'm interested in: as I said, my main interest today is to show our initial research on how to make robot swarms more tolerant to such attacks.

This is work done, as I said, in collaboration with Ph.D. students in our lab, with Eduardo Castello, and with a postdoc in my lab. It's clear that as soon as we have a robot swarm deployed in the real world, it will be subject to attacks. There will be some guy who wants to create problems. So what we are trying to do is to see whether it's possible to control these swarms using the type of computer program known as a smart contract, so that they're robust and tolerant to wrong messages and Sybil attacks. I think everybody knows what a Sybil attack is: creating fake IDs, so that one robot can try to take over the swarm by creating many fake IDs. You know that blockchain basically creates a trusted system out of untrusted agents, usually computers. What we are doing is to use exactly this same approach, using robots in place of computers.

So the goal of the study is to first show that it's possible to write a smart contract and control the decision-making of a robot swarm. This was not done before, so it's important to check it. Then, to show that the blockchain-based control makes the robot swarm resistant to Byzantine robots, to wrong messages, and to Sybil attacks; and then to show that the blockchain-based approach outperforms other, classic approaches. So this is an example that you've seen before this morning, two talks ago, with Eduardo.

We have a swarm that has to collectively estimate the frequency of tiles on the ground. The ground is covered with tiles. Compared to the example that was shown by Eduardo this morning, this is not a collective decision on which color is the most frequent; it's a collective estimation, in which the swarm has to work out the percentage of white tiles in the environment. So the experiments are run in simulation, but we already have everything in place to run experiments with the real robots, which we still have to do. Next slide. So the blockchain-based approach
that we use works as follows. The robots move randomly in the environment. They explore the environment and measure the frequency of white tiles that they move over; that is their own local estimate. Then, every 45 seconds or so, they send their reading as a transaction out to the pool of transactions. To do so, they pay a certain amount of tokens, within the

Our robots, while moving randomly in the environment, also mine, and the first robot to discover a block adds it to the blockchain and gets the reward. When a block is added to the blockchain, its transactions are confirmed, and the smart contract computes the mean of the estimates in those transactions. Here, we use a very, very simple outlier detection mechanism. The goal is not to find the best possible outlier detection mechanism, but to show that the approach works. Then, the robots whose transactions were actually used to compute the mean are paid back an amount that is bigger than what they paid. Only the transactions that were actually used to compute the mean, so not the outliers, are paid back. Next slide. So you understand here that this
mechanism automatically takes care of Sybil attacks: everyone who wants to send a transaction has to pay, but a bad guy, a malicious robot, sending transactions that are outliers is not paid back, so it does not have the money to sustain a Sybil attack. We compared this with classical approaches for computing an estimate of the frequency in a distributed system. With linear consensus, the robots collect their own readings, and the estimates of their neighbors, for a period of 45 seconds, and then they update their own estimate with the formula that you see there.
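The formula on the slide is not in the transcript; a standard linear-consensus update of the kind described, blending the robot's own reading with the average of its neighbors' estimates, looks like the sketch below. The weighting `alpha` is an assumption, not the speaker's exact coefficient:

```python
def linear_consensus_step(own_reading, neighbor_estimates, alpha=0.5):
    """One update of a linear consensus estimator.

    Blend the robot's own sensor reading with the mean of the estimates
    received from neighbors over the last 45-second window. The weight
    `alpha` is an assumption; the exact formula on the speaker's slide
    is not in the transcript.
    """
    if not neighbor_estimates:
        return own_reading  # no neighbors heard: keep the local reading
    neighbor_mean = sum(neighbor_estimates) / len(neighbor_estimates)
    return alpha * own_reading + (1 - alpha) * neighbor_mean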

And while moving around randomly, they distribute their estimate to the other robots that happen to be in their neighborhood at the time. The Byzantine-tolerant version of this is basically the same, except that the update of the estimate is done without considering the outliers. So in our experiments, we use as evaluation metrics the difference between the true frequency and the estimate of the robots, and the blockchain size. What you see in the next few slides is the structure: you have the representation of the experiments, and then you have the linear consensus and blockchain results. Here, you see on the X axis the true percentage of white tiles, and on the Y axis the absolute error.

What we see from these graphs is that all the approaches are similarly good at finding the estimate of the percentage of white tiles, although the blockchain approach has a slightly higher error. These results show it's possible to implement our approach, and that the performance is good enough. This is the same result shown differently: the actual percentage of white tiles against the estimate. The optimal solution is along the dashed line. Now, what happens when we have Byzantine robots? We see that as soon as the number of Byzantine robots on the X axis increases, the absolute error increases a lot for both Byzantine consensus and linear consensus.

The curve here is a little bit misleading, but what you have there are box plots plus the outliers, and the outliers are the runs where the system was giving basically 75% error, which is the maximum in this particular case. On the other side, when you look at the results for blockchain, when the number of Byzantine robots increases, the error increases a little bit, but it remains low. When we come to consensus time, with the two classical approaches there is basically no consensus, because the malicious robots will always vote for the wrong outcome. There will never be consensus. On the other hand, with blockchain, we can have consensus. How do you read out the result? You take one of the robots and you read its estimate, right? But you don't know which robots are Byzantine and which are not. Basically, you pick a random

But you don't know whether it's one of the honest estimators or one of the malicious ones. With the blockchain it is different: even the robots that are malicious share the same blockchain as the others, so you can read the correct estimate even when the random robot you pick is malicious. As I already said, when a robot tries to perform a Sybil attack, it will not work against the blockchain. In the next slide we see the results. The attack cannot work against the blockchain, but it works very well against the other approaches. You see here in the graph that the error grows very, very fast with the number of Byzantine robots for the classical approaches.

It remains quite low with the blockchain approach.
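The contract logic behind this robustness, as described earlier (pay to submit, compute an outlier-filtered mean, refund only the inliers), can be sketched in plain Python. This is not the actual Solidity contract; the distance-from-mean outlier test and the fee, reward, and threshold values are assumptions made for illustration:

```python
def aggregate_estimates(transactions, fee=1, reward=2, threshold=0.2):
    """Sketch of the smart-contract aggregation described in the talk.

    Each transaction is a (robot_id, estimate) pair whose sender paid
    `fee` tokens. Estimates far from the mean are treated as outliers
    and not refunded, so Byzantine senders steadily lose tokens and
    cannot sustain a Sybil attack. All numeric values are assumptions.
    """
    estimates = [e for _, e in transactions]
    mean = sum(estimates) / len(estimates)
    inliers = [(rid, e) for rid, e in transactions
               if abs(e - mean) <= threshold]
    # Recompute the swarm estimate from the inliers only.
    swarm_estimate = sum(e for _, e in inliers) / len(inliers)
    # Refund (with a margin) only robots whose readings were used;
    # outliers simply lose their fee.
    payouts = {rid: reward - fee for rid, _ in inliers}
    for rid, _ in transactions:
        payouts.setdefault(rid, -fee)
    return swarm_estimate, payouts
```

A robot repeatedly submitting outlier readings keeps paying fees without refunds, which is the economic defense against Sybil attacks described above.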
Next slide. Last, as you know, one of the issues with blockchain is that the memory usage grows with time. In our experiments, what we've done is measure how fast it grows, and we found that, at least in the framework of our experiment, this is very manageable, because the size of a transaction is 148 bytes. Even so, given the storage of the robots that we were using, around 16 gigabytes, this is something that, for sure, has to be taken care of in future research.
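Using the 148-byte transaction size quoted above and the roughly 45-second reporting interval mentioned earlier, a lower-bound growth estimate is easy to compute. The formula is an assumption in that it counts transactions only and ignores block headers and other per-block overhead:

```python
def ledger_growth_bytes(n_robots, hours, tx_bytes=148, interval_s=45):
    """Lower bound on ledger growth from transactions alone.

    Ignores block headers, receipts, and state storage, which add real
    overhead on top of this figure.
    """
    txs = n_robots * (hours * 3600 // interval_s)
    return txs * tx_bytes
```

At these rates, even a 20-robot swarm adds well under a megabyte per hour of transaction data, consistent with the speaker's "very manageable" assessment.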
So in conclusion, what we have shown is that we can implement robot swarm behavior using smart contracts within a blockchain-based context, and that using the blockchain-based approach, first, the robot swarm can achieve consensus even in the presence of Byzantine robots, because the Byzantine robots are identified and discarded from the computation of the estimate. We also found our swarm is resistant to Sybil attacks. Additionally, since all the activity is memorized in the blockchain, the behavior of the robots can be audited in the future and analyzed. So there are, I think,
many, many problems and open challenges in this line of
research. The first one is that when a robot swarm moves around, at the moment there's no guarantee that it remains connected all the time. If two sub-swarms disconnect from each other, they could grow different blockchains. When these two sub-swarms connect again, the shorter of the two blockchains will just be lost. A lot of work that has been done by part of the robots in the swarm is somehow lost.
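The reconnection behavior described here, where the shorter chain is simply discarded, is the standard longest-chain rule, sketched below. Chains are plain lists of blocks for illustration; real implementations compare accumulated work rather than raw length:

```python
def resolve_fork(chain_a, chain_b):
    """When two sub-swarms reconnect, keep the longer chain.

    The work recorded only on the shorter chain is lost, which is the
    open problem the speaker describes. Ties go to the first chain;
    real clients would compare total proof-of-work instead of length.
    """
    return chain_a if len(chain_a) >= len(chain_b) else chain_b
```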

So maybe we should ensure
connectivity all the time. Maybe there are other solutions.
We don't know yet. Another challenge is how to extend this to more challenging scenarios: so far we have considered one smart contract and one particular, simple problem. Can we extend this to problems where there is more than one task? Another issue is that in the current robot swarm, all the robots are the same. They all have the same computational power. In the real world, it may be that swarms are joined by robots of different intelligence, systems, and capacities. So maybe this is not the best way to go, and maybe we should look at alternatives. Finally, what should we implement in the blockchain framework, and what should we not?
For example, in this case, the collective decision, it's reasonable to think the blockchain approach is okay, but there might be situations where you need a faster response to the activities of the robots, which is not compatible with the implementation of the blockchain. Thank you very much. [Applause] When we started this research, we wanted to give better support for implementing the smart

We are already starting to look into alternative frameworks, especially because of the issues of using proof of work. Quick question. I think you addressed computational requirements; comparing the linear consensus approach to the blockchain approach, has there been any comparative study of the total computational requirements and scale? Of the entire system? They're very similar. The problem with blockchain is that you need more communication. Hi. Just to confirm: the
robot swarms are not the ones contributing to the consensus process; rather, they're just clients putting their data onto the Ethereum blockchain, and there are nodes on the Ethereum

I'm not sure I understand. The way it works is that each robot is running one node in the blockchain framework. It's running Ethereum. Each of the robots is also a full node? Yes. They're mining all the time. But are they part of a private network? Yeah, yeah, it's private. It's not the main Ethereum network. [Applause] So yeah. Let's introduce the next
speaker. Thomas Hardjono. He's one of the first guys I
encountered here at MIT. He helped me draft some of the
initial papers in this field so I'm really grateful to him.

I think he's going to talk about identity, about data, and how to manage these kinds of things. Without further delay, let's welcome the speaker. [Applause] Thank you, Eddie, and Professor Marco. We have some famous names here; I met Aleksandr earlier. And thank you for flying into beautiful snowy Boston to witness the snow. It's good. I recognize some of you guys from last year.

We had the same conference here. Hopefully, this will become kind
of an annual thing. It's good to see what people are doing in
other fields. We tend to, like, focus on our
own fields and we forget about everything else. Occasionally
it's fun to see something like this. Compared to your videos, my life
is boring. [Laughter]
It's just a white board with a lot of maps. Nothing moving.
Nothing 3D. Today I'm going to talk about rethinking trusted computing, particularly in the context of IoT devices, blockchains, and so on.

My naive view of robots is that a robot is an IoT device that has intelligence. It's not just a sensor that does one thing. It has capabilities and, therefore, it's good and it could be dangerous. Right? If someone abuses a whole swarm of robots, you can get into serious trouble and cause a lot of harm. You know? Think airports, roads, and so on.
on. So blockchain technology is
still nascent, and if somebody tells
you it's mature and ready to go to production, and please buy my coins or
tokens, run for the hills. If you were interested, I was
reading this recent BBC article about One Coin or Coin One, one of
these Ponzi schemes.

The person who started it is now
missing and wanted by several authorities. As for the virtues of blockchains, what got people interested in Bitcoin in particular: you would have these distinct nodes, each a physical device in your basement, and in order for all the devices to reach a particular shared state on the ledger, they would need to run some agreement, some consensus protocol. This is proof of work. And they need to be able to do this independently of each other. There must not be interdependence between nodes within a blockchain system, by definition. And the fact that each of the nodes carries a complete set of transaction

And actually, well-defined, limited operations. One reason Bitcoin is successful is that it has such a limited set of opcodes. You need three operations in Bitcoin and you're done. It just runs. But Ethereum has a fully-fledged programming language. You can do interesting stuff, and you can do damage, like the DAO attack and so on. So when you look at swarms of
robots, how do you know that those things flying in the sky, those things that are running around in your house, are healthy? What does healthy mean? In the context of trusted computing, healthy means the device is running the correct firmware and software on the correct hardware, and has not been tampered with by anybody else, and you'd like to have visibility into this. Literally, could you have a screen with yellow, green, and red for each of your nodes, where red means you're not sure what it is running right now? Could you have the ability for the nodes to report, truthfully report, their type?

Just imagine your drone being a complete PC board; everything from the BIOS all the way up needs to be reportable. So one of the key things, I think, is that the cohesion, the value of a swarm of robots, is dependent on this ability to report node status, and therefore the health of the entire population. This is actually true not just of robots, but of your boring sort of enterprise network devices too. Enterprise IT and network guys want to know how healthy each of the nodes is, each of the routers within their domain.

So could we rethink how we use trusted computing technology to,
you know, use the features there to feed into the decision-making
based on the policy? So that when you ask a robot
to report on something, in addition
to that something, you have the option
to ask about its health. Has it been tampered with? Has
somebody touched it? Has the firmware been updated? Has somebody tried to update the
firmware? A bit of history for those who
are old enough.

Back in the 1980s there were famous documents that came out of the DoD; this one is from the Rainbow Series. It defined the network trusted computing base: the totality of protection mechanisms within a network system. The whole point is that it needs to be able to implement and carry out a policy that you decide on. Now, this is TCSEC, 1985. That's a long time ago. This is long before virtualization. I think VMware did not exist, if you know VMware. Cloud computing was a dream; it did not exist. So interestingly, some of these definitions are being revisited in the Trusted Computing Group, a group of vendors and service providers who have been working in this space for more than 20 years, since 1999, trying to address some of these issues. People talk about trust and trustless pretty loosely today, if you look at CoinDesk or some of these media outlets, but it's not an easy matter. In late '99/2000, the TCG came
up with the following definition of trust. Think about it
carefully. It needs to perform a well-defined function. Think
about the brakes on your car. Why do you trust this little
thing when you're driving that it will stop the vehicle? Have
you thought about that? You just press it, and it works?
Why is that? It works once.
It works twice.

It works five times.
It works a hundred times. It works five thousand times.
It works ten years straight. So repeated operations of the
same thing, and without deviation, it
creates social trust, human trust, in the function. The function needs to be
well-defined. Another one is your door handle.
It's a very well-defined function. If it does something else, if it
does 360, you usually panic. Like, what? What is this?
So these are easy examples of how you define trust. The second property, and think
about it, when the TCG defined this,
it was actually thinking of a chip set, and what features it
needed to have. It needs to operate unhindered and shielded. So your car brakes need to work unhindered; you can't have a piece of carpet sticking out. You guys know about what happened with the Toyota case? The Toyota and Honda big lawsuits, because of bad design. The third property means cryptographic identity. Imagine you have a chip in your machines or laptops. It needs to be distinguishable from the others, so when a chip signs something, you know it's coming from that laptop versus the other PC over there.

The fourth one is a difficult one. We kind of call it TCB dynamism. Why am I talking about all of this stuff? Could these features also be inherent in robots? Could you have robots where this is just built in, and you can use it or not use it? Imagine if your robot's board comes with one of these TPM chips, about two dollars each now, with all this capability. How would you use it to secure that particular robot, and how can you build up those layers of trust? And how can you use the features of all the robots as a swarm, towards a particular goal or to do a particular function? One of the things we're looking at is extending these four properties and adding two more.

Group membership. When we think of a robot as belonging to a group or a swarm, how does it prove that? I belong to Group A and not Group B. Right? We don't know how to do that today. But with this model, you could. If a robot wants to join or leave a swarm, it needs to get permission to do that; it doesn't have to get permission from all the members of the group. Truthful attestation. You guys know SGX? Those of you in hardware have probably heard of it.

This is trusted computation, the enclave, the secure enclave. What if each of your robots actually contained a secure enclave? Meaning you can do secure computation, all the proof-of-work and hash computations and so on, within a trusted compartment, within the chip set. So the robot could do that; you could ask it to report truthfully not just its firmware, but also its memory status: what's in the memory and who put it there? So diagrammatically, it looks
like this. You have a bunch of nodes and robots out there
forming a swarm. Collectively, they make available these two additional functions so that you can
have applications that make use of them. The applications need not be aware of DP1 or DP2 and, in fact, don't need to understand them. Think about robots that have to carry out a mission in the field, like military robots.

So the same problem, you know, was already discussed 20 years ago. Imagine you have troops out in the field in a war situation, carrying backpacks; there's always one guy carrying the radio. How do you update the firmware of one of those boxes in the field? The platoons are out there. You want to do a firmware update by satellite. This is almost the same problem.
This is one of these use cases.

Could you ask nodes to do regular pings to each other, each time reporting their status? It's not just enough to say, hey, I'm here, I'm signing this ping with my keys; who put the keys there? If you don't know the provenance of the keys inside the hardware, the signature is useless. When you do consensus, could you incorporate that? Meaning, I will accept the proof from a node only if that node or robot accompanies it with a report of its internal status, memory status, and so on. So when I want to confirm this, I need to also check the reported status of the robot.
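The idea of accepting a peer's contribution only alongside a health report can be sketched as follows. The measurement check here stands in for real TPM or SGX attestation verification, which involves signed hardware evidence that this sketch does not implement; all field names are illustrative:

```python
import hashlib

def accept_ping(attestation, known_good_digests):
    """Accept a peer's ping only if it carries a fresh attestation whose
    firmware/memory measurement matches a known-good value.

    `attestation` is a dict with 'measurement' (bytes) and 'fresh' fields.
    Real attestation (TPM quotes, SGX reports) uses signed hardware
    evidence and nonces; that machinery is abstracted away here.
    """
    if not attestation.get("fresh"):
        return False  # possibly replayed report: provenance unknown
    digest = hashlib.sha256(attestation["measurement"]).hexdigest()
    return digest in known_good_digests
```

A consensus rule could then simply discard any contribution whose accompanying attestation fails this check.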

Governance. When you have a group, a collective of robots, owned by an organization, a private organization, there's governance. Typically, in the PC world, when you buy a PC it comes with firmware, hardware, software, and so on, coming from different vendors. When, let's say, I have a PC and it has whatever BIOS version, 6-point-something, that list of components is called a reference manifest. Could each robot be given a reference manifest, as defined when the robot left the factory and when it was getting deployed? How can you make sure nothing has changed in that manifest? Governance here means you can be part of the collective and join, but you have to have the same reference manifest as the rest of the nodes, the rest of the robots.
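Manifest-based admission as just described could be sketched like this. The manifest fields are illustrative, and hashing a canonical JSON form is one simple way (an assumption, not a TCG-specified encoding) to compare manifests reliably:

```python
import hashlib
import json

def manifest_digest(manifest):
    """Canonical hash of a reference manifest.

    Sorting keys before serializing ensures field order does not change
    the digest; the manifest contents themselves are illustrative.
    """
    blob = json.dumps(manifest, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def may_join(candidate_manifest, group_manifest):
    """A robot may join the collective only if its reference manifest
    matches the group's, i.e. nothing changed since it left the factory."""
    return manifest_digest(candidate_manifest) == manifest_digest(group_manifest)
```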

That's kind of the end of my talk. It is, I admit, a bit deep in the technology, a bit complicated. If you want to read the paper, it's on arXiv. A shorter version is going to be in Frontiers. Just take a read, and if you're interested, reach out to me; my cards are there. Ned Smith is from Intel. We've been at this since '99. This is an old problem. The industry takes a long time to evolve. When I say blockchain is in a nascent state, believe me, it is, compared to the hardware and software that needs to be used for the nodes in the blockchain.
questions? [Applause]
Who is next? Did I go too fast? You're going to ask a question? Thomas, I have one question. I think it's very interesting how you approach the world of robotics from your end. Do you see any applications in which these two worlds combine? Do you see this happening more in household robotics, like with Nest or with Alexa? Or do you see this happening more at a city level? Self-driving cars? Public infrastructure? All these things? What is your vision? Definitely, yes, particularly for high-value assets.

There's a whole discussion about industrial IoT, what people are going to use for the next generation of nuclear reactors. Those sensors need to report correctly. They need to have these features and be able to measure the environment and report unhindered. I think going forward there will be a lot of applications for this technology, from what I call static IoT all the way to various robots. There is a group in the TCG working on this, the cyber resilience group: how do you create a future infrastructure that has resilience against all these possible attacks?
Definitely. It's also price. It's kind of interesting. Router vendors, I mean the big router vendor in San Jose, California, that I shall not name, consider a two-dollar chip expensive. This is national infrastructure, and you think two dollars is expensive? For a five-thousand-dollar box? So there needs to be a change in mindset, and in understanding the value of the infrastructure and of the data that flows through it. Thank you very much. You didn't touch at all on the
work on blockchain-based identity systems, the standards work being done by
the worldwide web consortium and
four or five sort of related groups.

It seems like there
would be, from the blockchain sense, a great
deal of overlap between what you're talking about in the IoT
world and that work. How far along is that? What happens
next? Sure, absolutely, there is a
connection. IEEE has a standard called DevID. It's the 802.1AR specification, which is now five or six years old. It's for device identity. Ideally, using our language,
Adrian, a device needs to be able to
produce an assertion about itself. So the question is what keys are
being used for those assertions? Is there a key hierarchy and key
provenance? This whole identity problem has, again, exploded because people are interested in blockchain and cryptocurrency and digital assets and virtual assets, and they have a key.

How do I prove to you this
is my public key and I haven't stolen it from Adrian?
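As a toy illustration of that proof-of-possession question, here is a challenge-response using a Lamport one-time signature, which needs only the Python standard library. This is a sketch, not how DevID or any production system works; real device identity uses X.509 certificates and established asymmetric schemes, and a Lamport key must never sign more than one message:

```python
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(256)]
    pk = [[H(a), H(b)] for a, b in sk]
    return sk, pk

def bits(msg):
    # The 256 bits of the message digest select which secrets to reveal.
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal one secret per message bit.
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    # Hashing each revealed secret must reproduce the public key entry.
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits(msg)))

sk, pk = keygen()
challenge = b"prove you hold the key for this pk"
sig = sign(sk, challenge)
print(verify(pk, challenge, sig))  # True
```

Only the holder of the private secrets can answer the challenge, which is exactly the "I haven't stolen it" property: possession is demonstrated without ever revealing the full private key.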
A lot of fundamental problems that people kind of ignored 20 years ago are being revisited right now. But, yeah, this definitely ties into this whole blockchain-based identity work that's happening. There are a number of groups
working on this right now. Thank you for going through kind
of the computing base. Are there attacks when you
modify that computing base temporarily and then recover? It seems like, long run, it's
not really — So the chipset I'm talking about, the TPM, actually has registers inside. So if the firmware was modified and then put back again, you can detect it from the registers inside the TPM. Every time you do something, the register updates. You can replay the history through a machine, compute what you think the correct history should be, and if there's a mismatch with the internal registers, you know something is wrong. The TPM hardware is tamper resistant. It probably takes about a million dollars to scratch off the package and do physical attacks on that thing.
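The extend-and-replay scheme described here can be sketched as follows. This is a simplified model of how TPM PCRs accumulate measurements, not the actual TPM command set, and the boot event names are invented:

```python
import hashlib

def extend(pcr, measurement):
    """TPM-style extend: new PCR = H(old PCR || H(event)).
    PCRs can only be extended, never written directly."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def replay(event_log):
    """Recompute the expected PCR value from a log of boot events."""
    pcr = b"\x00" * 32  # PCRs start at zero on reset
    for event in event_log:
        pcr = extend(pcr, event)
    return pcr

good_boot = [b"firmware v6.2", b"bootloader", b"kernel"]
expected = replay(good_boot)

# An attacker flashes modified firmware, boots once, then restores the
# original image. The boot that actually happened still got measured.
tampered_boot = [b"firmware v6.2-evil", b"bootloader", b"kernel"]
actual = replay(tampered_boot)

print(actual != expected)  # True: the mismatch reveals the modification
```

Because each value depends on the entire chain of prior measurements, "modify temporarily and put back" still leaves a different register value than the known-good history predicts.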

It truly has shielded locations, just like your smart card; a SIM card has the same technology. But it's low cost. We talk about this every week,
several working groups. How do you detect something like
that, clever manipulation of the
firmware? [Applause] I want to introduce the next
speaker. Professor Fabio Bonsignorio. He is the CEO of Heron Robotics. And Professor at Scuola
Superiore Sant'Anna. Let's welcome him. This long list is just a way of saying that I'm involved in a number of research strategy and roadmapping exercises at the European level.

What I want to do today is to share some points and make a kind of rant in favor of why blockchain matters to robotics. Because at this point, it's still something which is not controversial, but maybe not a lot of communities are doing it. First, I give you some context
from the point of view of robotics,
and I will talk about why
peer-to-peer in general is smart. And why it
matters in robotics applications.
I will quickly give you a couple of examples. Actually, the first one was already explained or shown in a couple of talks before, by Marco and Eduardo.

And then I will do some final
remarks. So, the context. We are having a significant impact on the planet, already creating problems for its survival, to the point that we might be quite close to the tipping point. So, a significant change in our situation. In the meantime, this is what actually is going on in
robotics. If you look at this chart, you
see that robotics is a very popular and growing field. Growth is 9%, actually more than 10%. But you see, it reaches about 80 billion, whether in dollars or euros; if you compare that to the product of the planet's economy, it's just one thousandth. So it's very good if you're in robotics, because it's a growing field. Very nice idea to join the robotics field. But still, it's a niche. It's a small field.

So I think you have seen in many places, in particular on YouTube, what robotics can do: things like opening a door, sitting in a car and putting your hands on the steering wheel, or operating a valve. This is more or less the state of the so-called mechatronic paradigm. That means you run a complicated machine with programs, and these programs operate electrical motors. This is what happens. So the movie is not starting, but you may have seen: we open the door about one time in twenty. So this is more or less the state where we are.

We do that by employing ten people operating the robot. That's why many people think we should go in the direction that Marco was showing for swarm robotics. We probably need to look at deeper inspirations. While our robots are all designed top-down, so you have a central brain managing information, devising a plan, implementing the plan, in nature, typically, the keywords are emergence and self-organization. So we think we should go a bit further, and actually many people think so; for example, in this flagship preparation we are pushing, but we should … A few significant technologies,
like Internet of Things, machine learning, and deep learning: how long before we get them? And computer vision, if you don't have too many objects, also works. This is a complete reinvention of our manufacturing processes, which is not a small thing. Now, we still typically have mass production. So you can order your car, but most cars are built the same way. So you have some options, but you cannot perfectly customize your car.

You have options, maybe many more than in the past, but you still cannot have your own with your preferred size and colors, like in clothes, for your jacket. These companies earn a lot, but they lose a lot. Actually, the key factor today for a company, in services or products (if you think of the iPhone, typically you sell a product with a service), is that it has to be fast and adaptive. If you have a stock of blue shirts, and now people want red, all your stock is valued at zero. So this capability of reinventing industry in such a way, like a "craft shop," is critical. This is a key enabling technology. There are all these new kinds of robots, not just the ones we know already, and also some interesting applications. Another interesting thing, which is related, is the bottleneck in sustainable production.

There are things people have been talking about for a long time. One is mass customization, which I was mentioning before. The other is sustainable
production. We all know if these new
billions of customers adopt our Western
lifestyle, we don't have enough resources. Some would say we
need two planets. Some say eight planets. Maybe mining the
asteroids. But we know that. So a side effect of these technologies is that we can recycle materials. Let me give you an example, again, of cars. You have a new car coming out of a production line every minute. At the end of life, you should take the same car and dismantle it. Today, still today, despite these new technologies, you do it by hand. There's no automation. But if you have vision, and if you have control, you can disassemble cars. This means that one side effect of these new technologies is that sustainable production, which has been preached for many years, is now possible.
I'll go faster.

This is related to some work we did, which is typically two-handed manipulation and force control. Now we get into why this matters. This is from the World Economic Forum. They actually dream of what? Having a satellite cloud network managing the whole logistics network.
Now, I think most of you know, we have a very complex logistics network, a supply network, which gives us our products. So a car is actually manufactured by several suppliers connected into a network. And it is disposed of at the end of life. Under this other paradigm, at the end of life, you would have logistics networks doing the same thing. And the idea is this: a big centralized system managing logistics. And you see some problems here. We can really think of it as such a network that's global, but it is not distributed. This leads to some political and financial issues: an excessive concentration of humans and power and capital. Is it really secure? Can it not be tampered with? There's a lot of discussion about the fact that now, actually, you don't need money.

You need data. If you have data, you can make money. A friend of mine says we are transitioning to data-ism. You should not focus on money. You should focus on holding as much data as you can. Okay. We could make the same argument for the food supply, and also for construction. So what can happen in ten or fifteen years? Okay. We should have some new
technologies. I mean applications, not just interesting topics for research. We will have more computing. We should go towards a system where swarms and networks are really the backbone of the system. Another thing is that we have a transition from browsing, where the internet is mainly used to share information, to the internet being used to do computation. Maybe you no longer rely on car factories, because you will have the car, like in this case of a 3D-printed bike: someone close to your house will assemble it, downloading the designs of your car from the internet.
On the other hand, we still have someone building chips or
building 3D printers.

You will have someone build your roof, but maybe also a big centralized fusion power facility, such as ITER. So what we can do now, in short, is a structured environment, a network of connected agents. I have no time to go into the issue of self-driving cars. Maybe they're not ready for real autonomy, but if you have a city infrastructure which connects the cars, then you can probably have it already now. So we need to adjust the environment. This goes to the responses. So from my standpoint, a smart city is a huge collection of many robots, many AIs, all interacting. You know the story: everything will be connected.

A lot of things will be automated by robots and by AI. There are already examples of smart city projects around the world. These projects are typically implemented by choosing a main supplier and giving the keys of the city, the data of the city, to that supplier; the data is the equivalent of the keys. And so if you think about it,
you're going to roboticize everything. Whether or not we reach this level of deep bio-inspiration in the solutions, this will allow cars to go around, and people, and whatever. This means that with the current technology, assuming it is scalable enough, we need to centralize everything. If you think Amazon is big today, tomorrow it will be one thousand times bigger. This is because, remember the chart I showed at the beginning. Today, robotics is one
thousandth of the economy. AI is a bit bigger. But we are
still there. If you think this is going to
permeate all the system, all the entities in this space are going
to grow by, if not a thousand times, a hundred times, or ten

So we know that there is a solution: peer-to-peer. Right? So why does it matter for robotics? It matters for robotics because it provides, and we have seen an example of that, distributed and secure transactions. It allows this. You see, there's a problem. I emphasized the aspect of centralization of power, but do you really think that we can grow from the current level to the level where we can really manage this complexity? Honestly, I'm not so sure it is feasible. So there could be a technical limit, before all the other considerations.

In Europe, for example, European data are going to the U.S. And the blockchain distributed approach to data ownership is seen as a possible solution. And distributed computing. Here is a point I wanted to make. Do we really need a blockchain in robotics? For a single robot, maybe not. I could just keep a connection to a server. It works. It's stable. If I have one to ten, okay, still; as we have seen, as Marco was saying before, the amount of computation and communication, especially communication exchange, is probably still a bit high, so you need to balance it with respect to the benefits, because, again, you may have a turning point. The point is we have millions of interacting agents. Can we really think of managing those millions or tens of millions of independent intelligent agents with a centralized cloud computing infrastructure? Probably not.

So the point is about centrality; actually, one of the important topics here is network science. So what are people doing with blockchain? There are areas in robotics where you probably don't need to use blockchain. So it will be interesting. But it seems we are heading to very big swarms of things. Even managing within companies, it seems we need to start on this problem. Big swarms could have a different set of rules to follow than small ones.
So what are people doing? Security. We have seen examples of that. I think it is interesting, when someone is doing an experiment on this, to use a platform like this to distribute computation. If I don't want to use a remote cloud, I need to have something, some mechanism, to do the computation. It's slow so far, but if I could make it work, then I would probably need a market infrastructure. Because if you think about the city of tomorrow, let's assume that I am lucky and it's a fabulous city, so there's one big provider managing your city; but if you think you have different actors, and this is advisable for many reasons, you will need a kind of market, because we're living in a market economy.

And those markets are very efficient. Apart from the politics (the first speaker, Adam Smith, and Marx many times said similar things), we need a managed market. So the blockchain provides tools
for that. Then there's some work we're
doing together with Aleksandr about environmental monitoring. I started with the fact that we are overloading, a bit stressing, our environment in many respects. Typically, you have public agencies taking the samples and telling you about the environment. Having a transparent third party, supported for example by a distributed ledger, take samples of the environment can have value. At this point, you cannot trust the results, because in all countries citizens are suspicious about what the public entities are really doing. You only see the final results. You cannot check them. You may also have the data, but it may be skewed.

I think from a scientific standpoint it's very interesting to see how you can match market mechanics with swarm mechanics. Swarms are one thing and markets are another thing, but both rely on many-agent interaction. I was talking about a supply chain. If you want an open supply chain (for example, I've shown the example of the local 3D-printed bike), of course you need a supply network. But now supply networks, or supply chains, are top-down.

Take a big company, for example, Kawasaki. If tomorrow you wanted to start your bike business, not selling bikes but making and selling bikes customized for your friends, you would need a supply network which is really a network. And this requires somehow abandoning this chain mindset and building them as distributed structures. And of course, you can move the same logic to the agents' intelligence.

You can have a registry. You can manage a birth registry of robots. If you think about it, all of us in robotics are preaching about smart cities, but I see huge problems in managing a smart city with current technology. If you don't suggest something better to me, I think that swarms and blockchains are the first thing to think about, because I really think the complexity of smart cities is being a bit underestimated. Okay. In the future we may have some problems with things we've not seen before. Okay. So for example, one you have seen already. I'm sorry.
The second example is the idea of building a market. We needed to implement something like a market. If I want a city where I don't have a single owner of the city, I need to be able to manage the transactions among different operators. You can call it money or not, but you need it to manage data. You know? Remember: data, not money. You need to manage an economic transaction.

If there are two actors providing automated services, an automated parking service and an automated self-driving Uber service, the self-driving Uber may have to pay a toll for parking and will have to interact with the agent managing the parking lot. If this system is automated, I will need automated market transactions. So I buy the service, and I give you money, or tokens, or whatever. This is an example of it. You will need to implement a contract. We know, for example, that in such a network you can associate a piece of program, a smart contract, that can manage a transaction.
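The parking-toll interaction could be sketched as follows. This is a toy, in-memory model with made-up agent names and token amounts; in a real deployment this logic would live in a smart contract on a ledger, not in plain Python:

```python
class TollContract:
    """Toy 'smart contract': a token transfer between two automated
    agents, with the payment rule enforced in code rather than by
    either party."""

    def __init__(self, balances, toll):
        self.balances = dict(balances)
        self.toll = toll
        self.parked = set()

    def park(self, vehicle):
        # The self-driving car pays the parking agent automatically;
        # the contract refuses the service if it cannot pay.
        if self.balances.get(vehicle, 0) < self.toll:
            raise ValueError("insufficient tokens")
        self.balances[vehicle] -= self.toll
        self.balances["parking_lot"] += self.toll
        self.parked.add(vehicle)

lot = TollContract({"uber_car_42": 10, "parking_lot": 0}, toll=3)
lot.park("uber_car_42")
print(lot.balances)  # {'uber_car_42': 7, 'parking_lot': 3}
```

The point is that neither agent has to trust the other: the exchange of service for tokens happens only as the programmed rule allows.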
So, final remarks. I focus on this particular thing. Even a simple electric car is made up of hundreds of components. The iPhone has fewer components, but still a lot of them, and still you have hundreds of suppliers involved. If I wanted to allow local suppliers of cars, phones, or products, I would need to abandon the current hierarchical pyramids of supply chains, running in parallel, with one going to Kawasaki, one going to Ducati, one going to BMW, if you talk about bikes.

So, for example, typically, when you show this, you use this kind of picture, but this is for human readability. We should have hundreds of nodes, and the relations are much more complex. Yeah, I see it doesn't fit well. So this is what I was already anticipating. If you look at how our supply chains are managed today, they're managed by what is typically called electronic data interchange, and they use the internet, which is highly distributed, but they are not distributed themselves. So that point is a bit confusing. So where is the problem that I
see? There's also an associated problem. For example, I have tried to sell this idea of a blockchain supply chain to some companies. Right? The problem is that they're not really used to the mindset of having a distributed supply chain. So the main problem that you see is that we are used to working in a different way.

Apart from the technical issues, we also have this problem. I mean, the industry is already managing very complex networks, with very huge investments and a lot of people involved. What we are proposing is changing what we are doing. The reason I wanted to say something about the markets is that this is a very high-risk kind of change. We already have a system that is working, below the possible effectiveness and efficiency level. We have demands which go up and down, but we still have a very rigid supply system. So, in very short, we have this new field, if you wish, of networked robotics with many, many robots, in the millions, or hundreds of thousands, and above, where you need markets.

You need data
interchange. You need distributed ledgers. And just to finish, these are, to my knowledge, the workshops that have been organized on the interface of blockchain and robotics. And I can tell you this is more or less the size of the field. So someone said before that blockchain is a nascent field, an embryonic field, but I think there is a convincing case for looking at this interface. So I think — I could bore you by ranting, but actually the summary of topics that I gave before is taken from browsing what was shown at the first edition of this workshop and the major things coming out of it.
So thank you.

[Applause] I think you mentioned that markets are a very efficient way to allocate resources. But there's no guarantee that the allocation is going to be very just or fair or socially equitable. So if you have, let's say, a
blockchain algorithm that is used to manage a market, how can you
make sure you don't have something that is very similar
to 51% attack? Let's use your example of the
Uber market. Let's say you have a market, let's replace Uber
with ambulances. So you have a market for ambulances that is
managed by a blockchain algorithm, how can you make sure
that, you know, people from the rich district of the city
cannot, you know, abuse the system, like use a lot of computation power to create a 51% attack to always get the ambulances for themselves?
Very good question.

Of course, I would need half an hour. This is another interesting aspect. Markets are social constructs. When we talk about natural markets, typically they are not natural; they come from a kind of stratification of habits over time. And we know that markets create inequalities. I would say it's one of the typical effects of markets. When we talk about a blockchain for doing something, for example, management of a city, the difference with a natural market is that it is an artificial market, so you write the rules. While the rules of a real market are customary, the rules in an artificial market are algorithmic; by the way, this is also true of the stock exchange. So a nice topic of research, when I was talking about research on the interface, with a swarm meaning a big multi-agent system, is how you set the rules which lead
them to a more effective approach. To go a bit political, or not really political, compare the Soviet Union to the People's Republic of China.

The Soviet Union was typically top-down planning. This was a mistake. In China, now they mix planning with markets.
There's the problem of managing markets in such a way that they benefit all, or the most, the biggest number. See, I intentionally didn't touch the political implications. But it's true. I think that the advantage of having an artificial market is that you can design it, and you can simulate it. But there is a nice topic in the coexistence of artificial markets with natural markets, from the neighborhood markets to the stock exchange; by the way, on the stock exchange, I think 70% of the transactions are already automated.

So it's not — it's
already something which is similar to an algorithm that you run to see what you have to
do. I see your point. We need to be vigilant about privacy issues, these kinds of things, and we also need to be careful about how we design these new markets, because it's true it might end up being a nightmare with a few rich people. This is true with robotics and AI. Robotics and AI multiply the productivity of one hour of human work by, say, one hundred times. This can be used in many ways. I hope I answered your question. Yeah?
Hi. You mentioned earlier a citizen-driven, transparent environmental model, different from traditional environmental science. Can you speak about that? Yeah, Aleksandr will speak more about it, but if you have a peer-reviewed way to check samples, it's an application of what has been shown before. You can avoid fake data being entered into the data chain.

So actually, I see big potential in having citizens doing science on data, peer-reviewed by citizens. Because you can use these technologies, like any technology, in many ways, and you can develop different blockchains with different purposes. For example, this is a kind of blockchain to give more power to the people: the power of the data. Because, as before, if data are the real wealth, giving ownership of the data to the people, in a distributed way, is a positive thing. I don't want to go too far with this, but you may also think about, for example, electronic voting.

I'm a big fan of electronic voting because I think it's easier than moving physical ballots. But you know it's not so difficult to tamper with electronic votes. This, again, could be an application. I didn't go into that because I wanted to stay on the topic of robotics. And then you have all these strange things. If you have these robots as a service, how can you buy this service? Who can buy the service? You have the potential for inequality, again, but I'm also a dreamer of the possibilities. So it is not only the citizens collaborating with the data, but also the citizens overseeing that the data are not tampered with.

[Applause] The last speaker of the session: Professor Alex Kapitonov. He's a pioneer of this field, from ITMO University. Without further delay, let's start the presentation. I'll bring my little friend with
me and just put it here. Today, my talk will be about more futuristic scenarios of application for decentralized technologies. Right now, there are several really good, stable solutions for information exchange, data storage, communication, and so on. They can already be applied to real tasks, and they're really scalable for the purposes of a smart city and the citizens inside it.
Okay. Let's go further. Just some short information about myself. I'm an associate professor in St. Petersburg. I have been working with robotics for eight years already, and for the last four years I have been working with decentralized technologies, starting with the Ethereum blockchain when it was launched. We mined one of the first thousand blocks, which was really amazing for us.

And after that, we found that smart contracts and other peer technologies, which make possible the execution of source code in a distributed ledger, are really useful and applicable to robotics. That is the main idea of what we found, and we started to develop it. Okay. And we found that distributed things, peer-to-peer communication, are not needed in all spheres. There are a lot of disadvantages, but the advantages, of course, will solve specific problems, where it can be really useful. Of course, the first thing, it's
a direction related to the sharing economy. The sharing economy right now is a really good way to maximize the utility of cars, equipment, and so on, and you can see that many companies are already going in this direction, for example, in aviation. They already sell the engines for their planes not as a product but as a service. The companies who buy the plane pay for the operating time of the engine, not for the engine itself.

And the same thing, I think, will happen with cars and with industrial equipment. It's well on the way. It will come soon everywhere. The next thing is crowdsourcing and crowdfunding, which have shown they can do powerful things, bringing different crazy ideas to solution and implementing them in the real world. And the last, but really important, thing: public asset management. Right now, all public asset management is executed mainly by governments. And sometimes it's not so easy to make really fast changes in this sphere. It's a big problem that we should solve, and these technologies will come there to make it more transparent and immutable and to speed up this sphere. Okay. All those things are really
needed. As we talked about previously, we found such a sphere: environmental monitoring. Environmental monitoring is a weak point from the side of transparency and immutability of the data, and the participation of citizens in the process of environmental monitoring is really needed. It's really important that every person can be involved in the process: collecting the data, providing the data, sharing it with participants. But the other task is the analysis of this data, the interpretation of this data.

That can, of course, be done in cooperation with big laboratories or research centers, and so on. Right now, there's a project, mainly popular in Europe, where governments try to share the responsibility of data sourcing, crowdsourcing all the data about the environment, like CO2, dust, solar energy values, water quality, and so on. All those efforts are going forward, and citizens are involved in the process to measure data about the quality of the water and so on.

There is, like, a presentation of the main projects they found, especially for the soil, for the air, for the water, and a common platform for aggregating this data and showing the interpretation of that information. Here, in this process, peer-to-peer technologies are really needed. Because how can you trust the information when, for example, you're measuring your data, you're sending it to a server, and after that you're getting back the average level of dust in the city? That's usually how it works. But the average level is kind of a tricky thing. For example, during the day, the level of dust in the air is really high, but at night, with nobody on the street, of course, it's much lower. The average level looks okay, but for the biggest part of the day, when we are out in the city, there's all that dust in the air. This process should be fully
transparent for citizens. How the data is collected, where it's stored, and how it's processed: all those steps should be transparent and clear for every participant of the city, for every citizen. And of course, we're starting from stationary sensors that can be placed on a house or a roof, or mounted on something with wheels, like buses; but sometimes it's not enough to get the data only from the single point where the sensor is located.
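A minimal sketch of how a blockchain-style, hash-linked log makes collected readings tamper-evident; the field names and dust values here are made up, and a real system would also sign and replicate the entries:

```python
import hashlib
import json

def append_reading(chain, reading):
    """Append a sensor reading linked to the previous entry's hash,
    so any later edit breaks the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"reading": reading, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps({"reading": reading, "prev": prev_hash},
                   sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify_chain(chain):
    """Recompute every hash and check each link to the previous entry."""
    prev_hash = "0" * 64
    for entry in chain:
        expected = hashlib.sha256(
            json.dumps({"reading": entry["reading"], "prev": entry["prev"]},
                       sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
for dust in [41.2, 87.5, 90.1]:   # hypothetical dust readings
    append_reading(log, dust)
print(verify_chain(log))           # True

log[1]["reading"] = 12.0           # someone lowers a daytime peak
print(verify_chain(log))           # False: the tampering is detected
```

Because each entry commits to everything before it, quietly rewriting one daytime peak after the fact is detectable by anyone who replays the log.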

Sometimes you need to reconstruct some dynamic field of the distribution of the dust or some gas, and estimate where it's going, how the flow is moving in that area. This is a task especially for robots; in my opinion, they can be like defenders of nature. They can connect us with nature and explain to us what's happening, how it's going in real time. And this is a really interesting thing, when you can get more data about what's happening in the environment, how it's working, and what will happen depending on which decision you make. I think autonomous systems and robots can connect us with nature and give us the common big picture.
Here are just a couple of demonstrations of how we combine mobile robotics with peer-to-peer technologies to collect data about the environment. Here, you can see the drone. It's flying out of the city. And you can see here the landfill, the really big gray area, in the top corner. The volume of the landfill, how many tons are inside it, is quite difficult to estimate.

But using mobile robotics and sensors and some effort, we can get real information about such things. For example, in this video, you can see there are lakes around the landfill, and it doesn't look good. It smells really bad. The monitoring of these things can be solved with autonomous systems and robots. The next task is water monitoring. We built a water drone to collect information about the water quality inside one of the rivers, one of the biggest in Russia.

We found really interesting scenarios. When you measure the quality of the water not just on average, but search for and estimate the flows of the pollution, you can find the initial point where the water is polluted, and so on. A discussion started inside society: okay, who did it? The companies, of course, are trying to say, "No, that's not mine. That's not mine." But the next step is that somebody will ask, "Okay, please provide me some monitoring system. I want to be clear. I want to show that I'm not polluting the environment." This is the next step, and it's how it can be in the future, when companies connect to this system.
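The idea of locating the initial point of pollution from drone measurements can be sketched very simply: if readings are ordered from upstream to downstream, the discharge point sits near the largest jump in concentration. The function and sample values below are illustrative, not the team's actual method:

```python
# Sketch: locating a pollution source along a river from ordered sensor
# readings. Assumption: concentration jumps sharply just below the source.

def pollution_source_index(readings):
    """Return the index of the reading just after the largest jump."""
    jumps = [b - a for a, b in zip(readings, readings[1:])]
    return jumps.index(max(jumps)) + 1

# Concentration samples ordered from upstream to downstream (mg/L).
samples = [0.02, 0.03, 0.02, 0.41, 0.39, 0.35]
source = pollution_source_index(samples)  # index 3: first polluted reading
```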

The next big thing I want to show you and try to explain: try to imagine that, for example, we have a forest equipped with such a system, with sensors that can measure CO2, O2, and other constituents of the air. In this case, such a system, forest plus sensor system, can be like a separate agent, and okay, if it's a separate agent, we can make a deal. Okay, the forest, for example, is renting the land and saying, I'm renting this land and giving you O2, taking in CO2, providing a place for animals, and so on. And we can estimate the economic value of such actions. In this scenario, real nature, or a part of nature, can be like an engine in the economic sphere. It's really, really weird.

It's amazing, even if only as an example, but I like this idea, when I can communicate with nature and it has economic weight. Just two short topics that I want to discuss a little bit more. This is a picture from a publication from Cornell University. They started a discussion: in the future we will see a problem, the economy of robots, because, as we were already told, a big share of transactions is already made automatically. Can you imagine the gap between this automatic economy and the real human economy, where we exchange money with each other, peer-to-peer? And there is a problem: the human economy will be really small. Really small. The robotic part will be much bigger. And if we see oscillations in the robotic economy, it can be dangerous for the human economy, because it's much smaller. So the solution I found for that scenario, for example, is described here: we should maximize the communication between each other, and tokenize it.

I mean, if I'm discussing something with you, I should pay you. If you're asking something of me, I should pay you, or you should pay me, to maximize the value of the human economy. These are some weird things, but it's one of the ways to make the human economy much bigger than the robotic one. Of course, right now it's not only such economic aspects that are discussed with our colleagues and the scientific society; there are a lot of things, and they're all described in ethical AI design for robotics researchers, for the robotics sphere. There are a lot of things, starting from the rules of communication with AI, and finishing with intimate relations between robots and humans. These are really interesting topics for discussion, and all those things are described there. One of the ideas in this ethical AI design book is distributed networks for different AI and robotic solutions. Right now, it's really needed. We need to ensure that any action that can come from the side of artificial intelligence doesn't pass through a single point.

A single point of decision is dangerous for us. Okay. About the futuristic things, that's all. But just one more moment: I want to show you one demo. I hope it will go well. Yeah, please, Vitali, can you assist me? This is a demo. I just want to show you how a simple car, a simple robot, can be connected with peering technologies. Inside the technological background of this demo, there are a lot of peering things, starting from peer-to-peer communication, transactions, IPFS information storage, and finishing with the Robot Operating System, the robot software that's also needed to control some aspects of this work. Okay. Just let me put it down. I hope we will get the transaction from
this robot. But there is a process I want to show you. First: do you know the InterPlanetary File System technology? Who knows? Just raise your hand. One-third of the audience. That's really good. The InterPlanetary File System makes it possible to provide free communication channels, peering channels, for different parties.

And the next thing: the InterPlanetary File System can store information inside a peering distributed network. After that, if you have a free layer for communication between the different parties, different robots, you can broadcast information about the deals that should be done inside this process. If you find in this first, free layer that somebody asks for work and someone else is ready to do it — oh, perfect. It works. And somebody is ready to do it, that's — oh. Yeah, yeah, yeah, just wheel it out. Not a problem. It rotated around. Somebody, I think — yes. The first rows saw how it works. With the InterPlanetary File System, you don't have to put the information through the blockchain, because a transaction inside the blockchain is costly.

And you could just broadcast some information in the blockchain, but it's really costly. When there is a match, though, it can be put in the blockchain. That's the third part, where a smart contract is created, and in our case, this smart contract collects the information about the rosbag for this robot. There is a special instruction about what it should do. In our case, it should rotate for ten seconds, because the price I paid previously was ten tokens. And look at this. Right now, peering technologies are really simple. Really simple, applicable. Really simple to use.
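The demo's economics can be sketched in a few lines: demands and offers are broadcast over a cheap peer-to-peer layer (IPFS pubsub in the talk), and only a matched deal is committed to the costly blockchain layer. Everything below (the dict fields, the price rule, the SHA-256 digest standing in for an IPFS content address) is illustrative, not the actual protocol:

```python
# Sketch: match demands and offers off-chain; commit only matches on-chain.
import hashlib
import json

def digest(msg):
    """Content hash standing in for an IPFS-style content address."""
    return hashlib.sha256(json.dumps(msg, sort_keys=True).encode()).hexdigest()

def match(demands, offers):
    """Pair demands with offers describing the same task at an agreeable price."""
    deals = []
    for d in demands:
        for o in offers:
            if d["task"] == o["task"] and d["price"] >= o["price"]:
                deals.append({"demand": digest(d), "offer": digest(o)})
    return deals

chain = []  # stand-in for the blockchain: only matched deals land here
demands = [{"task": "rotate_10s", "price": 10}]
offers = [{"task": "rotate_10s", "price": 10},
          {"task": "fly_survey", "price": 50}]
chain.extend(match(demands, offers))  # exactly one deal is committed
```

Unmatched broadcasts never touch the chain, which is what keeps the free peer-to-peer layer cheap.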

Just several months ago, for example, there was a release of an application for the smartphone, so that you can control the peering communication from your smartphone. It's amazing. That's why we should use it. We should apply it. We should promote it, and collect much more information about all aspects of the environment, and try to improve our world. Thank you. [Applause]
Any questions? If somebody wants, we can discuss after the demo and I will show the details, if you want. If nobody has a question — yeah, please. >> I have two questions. The first one: I really love the concept you have about nature as an agent, or the symbiosis with robots, right? Especially because we don't tend to think of this, but we live in a contractual society. Because I'm an agent in society, I pay my taxes, and they give me things, et cetera, et cetera. Mainly because nature is not an agent in this contractual society, we tend not to respect that agent. If we were to have this interaction, it would be way better. I really love that example. The second thing: could you explain a little bit more what just happened with this robot?
How was the flow of information? What happened in the back end? >> It takes several minutes, but can I change again to the browser? Yes, here is the decentralized application for the scenario I showed you, to send the robot the liability it should fulfill.

Using the decentralized application, you're communicating with the IPFS layer, the InterPlanetary File System layer, where you broadcast the proposal that you're ready to pay for this work. And you provide the description of this work, according to the Robot Operating System software, in the rosbag file. When it matches, somebody tells you, "I'm ready to do it." It matches, and it's sent to the blockchain, to mine the smart contract inside the blockchain. And I definitely ask you to save this contract, because I hope it will be historical. [Laughter] And when the smart contract is put inside the blockchain, it starts executing. The Ethereum virtual machine matches the address of the robot and the address of the payer who pays the tokens, and starts executing the smart contract, sending the robot the hash of the rosbag file, which was put in IPFS first.
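The hash-based handoff described here can be sketched as content-addressed storage: the contract commits only to a hash, the robot fetches the bytes, verifies them against that hash before executing, and publishes the hash of its recorded result. The dict standing in for IPFS and the byte strings are illustrative; real IPFS uses multihash CIDs, not bare SHA-256:

```python
# Sketch: content-addressed task handoff between a contract and a robot.
import hashlib

storage = {}  # stand-in for IPFS: content hash -> bytes

def put(data: bytes) -> str:
    """Store bytes under their content hash and return the hash."""
    key = hashlib.sha256(data).hexdigest()
    storage[key] = data
    return key

def robot_execute(task_hash: str) -> str:
    """Fetch the task, verify it against the committed hash, return result hash."""
    data = storage[task_hash]
    assert hashlib.sha256(data).hexdigest() == task_hash  # tamper check
    result = b"recorded:" + data  # pretend this is the recorded result rosbag
    return put(result)

task_hash = put(b"rotate for 10 seconds")
result_hash = robot_execute(task_hash)
```

Because the hash is committed on-chain, neither side can swap the task or the result file after the fact without the mismatch being detectable.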

The robot loads the rosbag file. After that, it executes it. After execution, it uploads the final result as a rosbag, because while it's rotating, it's recording a rosbag file, and it puts it back into IPFS, providing it to the next transaction, to finalize the smart contract. That is the flow I showed just now. >> Thank you a lot. [Applause] So this was the final speaker for this session.

Now, we have lunch. Yeah, on the other side of this wall. Please enjoy it. Yeah. We'll be back here at 1:00 p.m. with the paper presentations. Don't miss them. Now, enjoy lunch. Thank you. [Lunch break] [Captioner standing by for audio] [Audio check] >> Okay. Thank you so much for your attention. My name is Fabio Petrillo. We started this work with my two new
Ph.D. students; Marcella, especially, did his master's in robotics and moved to Canada to work with me on software engineering and robotics, and we said, oh, there is a great symposium about blockchain and robotics; maybe we can share something. And we prepared this work today.

It's preliminary work in progress, but we have insights we would like to share. Just to present: Quebec is a huge province in Canada, up in the north. It's a good place to visit, if you have the opportunity. Take a flight; there's an airport. It's far, but there are not just bears there; there are flies also. We are trying to organize a team around software, around software engineering. Software is my art; software is more art than technology sometimes, because it's hard to understand the phenomena you try to deal with. Together we organized this work. Finally, we started a discussion, with Marcella: it's hard to use robots yet.

Robotics is everywhere in industry, where it is cost efficient. Why is it not everywhere yet? You can say, from the discussions, that in general it's not safe to share space with robots. So the point is that safety is something to improve in order to put robotics everywhere. One option to improve reliability and safety, maybe, is blockchain. That's the discussion we are trying to motivate with this work. So our goal is to try to identify the state of the art on blockchain and robotics. Okay? From the point of view of distributed systems and software. So our goals: we try to identify, classify, and evaluate the work on this topic. How did we do that? We did a systematic review. Yes, you read a lot, and try to find the papers, following traditional guidelines for systematic reviews in software engineering. Okay? We organized research questions and search strategies, and so on, which you probably know, and we will show you some research questions: the main challenges we can discuss, the main approaches in blockchain, benefits, and limitations.

Okay? So what we did: we proposed a search string for traditional libraries. Okay? We collected 89 papers with an automatic search on blockchain and robotics; those are the keywords. There are 89 papers. And we also gathered papers manually from the first symposium, to collect more papers, and after filtering with the criteria, keeping what is able to answer our research questions, we selected 14 papers to analyze.
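The selection step described here (automatic search, manual additions, deduplication, filtering by criteria) can be sketched roughly as follows; the paper records and the predicate are invented for illustration, not the authors' actual inclusion criteria:

```python
# Sketch: merge automatic and manual search results, drop duplicates,
# and keep only papers that can answer the research questions.

def select(papers, answers_rq):
    """Deduplicate by title and keep papers passing the criteria predicate."""
    seen, kept = set(), []
    for p in papers:
        if p["title"] not in seen and answers_rq(p):
            seen.add(p["title"])
            kept.append(p)
    return kept

automatic = [{"title": "A", "peer_reviewed": True},
             {"title": "B", "peer_reviewed": False}]
manual = [{"title": "C", "peer_reviewed": True},
          {"title": "A", "peer_reviewed": True}]  # duplicate of A

selected = select(automatic + manual,
                  answers_rq=lambda p: p["peer_reviewed"])
```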
So I can show you the catalog of papers on this topic, so you can continue and use this work. And one important thing for me: it's a really, really, extremely new topic. It's a really, really recent topic, so you are in the trenches, you can say. There's not a lot of work, but it's in progress, starting to come. And different people here are probably working on it. So, the opportunity: we started to make a mapping and analyzed the challenges. It's not a surprise that the majority of papers discuss issues and challenges in communication.

Okay? How robots work together and how they communicate; and several of the 14 papers work on collective decisions and other important challenges discussed in this topic. So communication, but also, every time, the point of security. This is the main point you can see. And distributed decision-making algorithms are something really important, I can say, in this work. Okay? To show you some quotations: a really new paper says the usage of the blockchain paradigm on embedded systems for distributed multi-agent robotics is still uncommon. This is important. When I saw the announcement for this event, I said, wow, it's great: blockchain and robotics. At first it seems to make no sense, because blockchain is something for non-realtime systems, you might think, but, in fact, people are starting to use this technology to improve systems such as robotics. However, the limitations of embedded hardware are still an issue for this topic, because, as you probably know, blockchain involves intense proof of work, so this is an issue we can work on together to use blockchain in robotic systems.

Okay? There is also the discussion of using blockchains to improve security. This is one topic to discuss. What's the main approach that people are trying when using blockchain technology in robotics? No surprise: it's smart contracts, of which you had an example in the last presentation here, using smart contracts to organize tasks in a blockchain context. Okay? The majority of approaches are smart contracts.

That's the point people focus on in blockchain. Quoting: the blockchain approach has the potential to enable building secure and scalable distributed control systems for devices such as robots in IoT environments. So some people try to focus and say it's a path; there's potential to use it. The use of blockchains provides a mechanism for integrity and durability of storage. There are new papers that share this kind of discussion. There is also a bit of discussion about legal and safety regulations in blockchain. Okay? However, the papers also discuss some traditional issues in blockchain, especially in robotics, and latency is the most important one. It's a traditional issue that becomes probably the most significant one the literature identifies for using this technology in robotic systems. As a consequence, for the communication systems in a large swarm of robots, deploying this network is not a simple task. So this is another quotation about communication.

So, just to finish, a couple of discussions that we prepared to highlight. Analyzing the literature, mobile robotic systems are the predominant robotics use of blockchain technology. Okay? And the integration of blockchain in robotic systems could be, in fact, key to serious progress in the field of robotics. The literature is in part trying to observe that, and we are here just to put this idea together, probably. Okay? There will probably be a huge impact on the economy, as we discussed; the literature is clear in confirming that. So, just to share some points or recommendations that we can discuss: maybe to create some metrics and parameters to evaluate these practices, especially from the point of view of security; methods to compare blockchain with other approaches in distributed systems; and, also, specific requirements to put together robotics and blockchain. There's work to do. It's something you can imagine is in need of progress. Okay.
So that's it: some kinds of attributes in distributed systems for robotics. Also, from my research in software engineering, how to bring software engineering aspects to blockchain and robotics.

I really appreciate your attention. This, as I said, is a work in progress. We will continue to improve it and work on publishing a really good paper. Thank you so much for your attention.
[Applause] >> Hello. My name is Renita Murimi. I'm an associate professor at the University of Dallas. Today I'm going to talk about my research on a blockchain framework for social robots, and the means for analyzing this framework will be a mathematical concept called sheaf theory. I apologize I don't have anything 3D or that kind of graphics.

It's mostly mathematical work, and due to time constraints I skip the mathematical equations as much as I can, using just simple visuals to show how sheaf theory really works. Before I go into the framework for social robots, I want to speak a little bit about the motivation for this problem. The way I'll start is with a traditional conundrum in decision-making. We all know of decisions we have to make in our lives, personal and professional, trivial and nontrivial. One of the biggest hindrances is imperfect information, not having all the information we need. It could relate to games, or where to apply to college, or a job application, where we simply do not have all the information we need. In the absence of perfect information, or rather in the presence of imperfect information, we adopt an action. That action may or may not have suboptimal outcomes for us. These effects are compounded when we're working not as an individual player but as a group. There's a swarming or collective behavior when a bunch of people like us all make suboptimal decisions based on the imperfect information we all have. The second impediment to good decision-making is irrationality.

There was a study of how people did not always do the right thing, especially when talking about outcomes of a game. If you have the choice between getting ten dollars and fifty dollars, studies in various domains have found people consistently choosing the ten-dollar outcome. That's because they simply did not understand the mechanics of the game, or rather their perception of the utility of the game seemed to be that choosing the ten dollars would be more profitable, when, in fact, it was not.

So what these researchers did is turn the traditional model of utility arising from economic games on its head, and say that humans are not always rational. We have here an irrational actor who does not have good information, who is behaving in such a way that the outcomes are suboptimal, and often what plays into our decisions is bias. When we do not have good information about a circumstance, we rely on patterns that have worked well for us in the past or have worked well for others. We rely on these biases, which are really shortcuts to making decisions. Now, I'll move on to the current conundrum in decision-making. For the past few minutes I spoke about the traditional conundrum.

Currently, we have enabling capabilities. We have machine learning and massive data sets, and really good algorithms that parse through those data sets, and these have tilted the scale of rationality in favor of the machine. You're all aware of recent advances where the robotic version of Jeopardy or chess defeated the world champions in those games, and they continue to improve. What has happened is, at the heart of this, we have a game between two unevenly skilled players. On one hand, we have a robot, a robotic version of the software, that can perform one thing really well. The robot just plays that one game really well; you would not expect it to pick up a block. On the other hand, you have a human who has imperfect information across a range of applications, not perfect information on that particular game, and who does things fairly well across a range of tasks.

The scale has tilted in favor of the machine because the human usually has limited recall of the past, limited sight of the future, and even in the present we are plagued with imperfect information. So what works for these robots? Why do they usually have the upper hand in these complex environments? It's because they do two things well. They reduce imperfect information, because they can store everything in the cloud, and they have access to these large data sets, which we humans cannot; it's beyond the range of normal human cognition. And they're also usually not irrational. Given appropriate code, a robot will always choose the best outcome, which cannot be said for human beings all the time. These algorithms are not just gaming versions. We see them combing through data sets of demographic information, voting polls, SAT results, and they're working well. It helps when these algorithms help detect cancer faster, help us correlate income to majors, and in general are able to predict the future: say, stock prices, or the medical trajectory of certain illnesses. But the problem comes when we think of these devices or robots as not just our helpers but coequals with us.

This morning we heard various talks about robots working with humans, becoming coagents, interacting with nature, so we're moving in that direction. This is no longer in the realm of a sci-fi future. We already have smartphones that are talking assistants, and they keep our elders company. They entertain my children, for sure. When I was here last year, I learned about a drone that has its own social media page. It orchestrates pickups and drop-offs. So we're looking at a future where we have a society inhabited coequally by robots and humans in both virtual and physical spaces. In such a scenario, it is a problem when the robot always has the upper hand. If we're thinking of a robot as a coequal member of society, it's hardly comforting to think it knows everything, in all respects.

Who would want to play a game with an entity that always wins? Right? Or to socialize with a robot that knows the answer to every question? At least speaking for myself, we all have these memories of conversations or quarrels with a friend, a spouse, a significant other, where ten years down the lane something was said, and there's imperfect information about who said what, and we just let it slide. That's not the case with the robot. The robot knows what was said and who said it, along with a time stamp. So the work we're proposing for social robots is to make them more social, and not just in the form of aesthetics. There's a lot of fantastic work done around robot skins and appearances and textures, but less work around human empathy. I mean, human society is made up of these glues of empathy; we're like-minded, similarly able individuals, with common goals, concerns, and challenges. So we want to make the robot more human in that sense. Now, I'm not advocating for a robot that behaves with imperfect information and is irrational all the time, and that's where we bring in the concept of smart contracts.

So our proposed framework does three things. The first is to have these tuneable parameters, both for imperfect information and irrationality. For example, a robot that's talking to a child or playing with a child has a certain level of imperfect information and irrationality, and then, when the same robot is used in a different application, it has better information and exhibits a more serious demeanor, if you will. These classes are then embedded in smart contracts. Now, smart contracts work well in the blockchain, but that's not the only reason we pulled the blockchain in here.
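The two tuneable knobs can be sketched as a per-application profile. The profile names, the numeric values, and the way the knobs affect a choice are all invented for illustration; the talk only specifies that both parameters are stored per application class in a smart contract:

```python
# Sketch: tuneable imperfect-information and irrationality parameters.
import random
from dataclasses import dataclass

@dataclass
class SocialProfile:
    imperfect_information: float  # fraction of options the robot fails to see
    irrationality: float          # probability of a deliberately playful choice

    def choose(self, options, rng):
        """Pick from scored options under this profile's two knobs."""
        visible = [o for o in options
                   if rng.random() >= self.imperfect_information]
        if not visible:
            visible = options
        if rng.random() < self.irrationality:
            return rng.choice(visible)  # playful, possibly suboptimal
        return max(visible, key=lambda o: o["score"])  # best visible option

playmate = SocialProfile(imperfect_information=0.4, irrationality=0.3)
clinician = SocialProfile(imperfect_information=0.0, irrationality=0.0)

rng = random.Random(0)
options = [{"name": "a", "score": 1}, {"name": "b", "score": 5}]
serious_choice = clinician.choose(options, rng)  # always the best option
```

With both knobs at zero the behavior is fully rational and fully informed; raising them makes the robot more fallibly human, which is the paper's premise.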

It's also because of its inherent properties: it's distributed, it's trusted, and it's immutable. Now, for analysis of all this: a common concern raised in this morning's talks was that the blockchain is limited in how much it can do. We have all this data coming in, and analysis of these data sets is hard when attributes may have a hundred different features. It's easy to represent an interaction or transaction between entities in up to three-dimensional space; that's good for us humans. With algorithms we can go up a few more dimensions. But imagine a transaction — I use the word transaction to record any interaction. If there is a transaction between a robot and a child where we're trying to decipher sentiment, emotion, facial expressions, and the content of the text being spoken, all these features quickly add up. If I were to represent this as individual points, it looks like a very dense point cloud, and what happens in dense point clouds is that you miss important features.

You miss components such as tunnels and voids in the graph. You miss features about clusters and anomalies, because it's just a very dense point cloud, and analysis is difficult. That's where a fairly young mathematical concept, developed around the middle of the 20th century, called sheaf theory, comes in. Sheaf theory actually has its roots in algebraic topology. So this is the contribution of this work: social robots analyzed using sheaf theory, made more or less social based on certain tuneable parameters stored in smart contracts. This is the premise. On the left, you have a line, an edge, which is a one-simplex. A simplex is a building block. Then we have a two-simplex, a triangle. A three-simplex is the tetrahedron. This is the extent of all the math that I will put up on the slides. There's more in the paper, and I'm free to talk with anyone interested about the details.
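The building blocks from the slide can be tied to code: in a small graph, edges are one-simplices and triangles (three mutually connected vertices) are two-simplices. A toy enumerator, standing in for the simplicial complexes that topology tools construct:

```python
# Sketch: enumerate the simplices of a small graph.
# A (dim)-simplex here is a set of dim+1 vertices that are all pairwise connected.
from itertools import combinations

def simplices(vertices, edges, dim):
    """All (dim+1)-vertex subsets whose pairs are all edges of the graph."""
    edge_set = {frozenset(e) for e in edges}
    return [c for c in combinations(sorted(vertices), dim + 1)
            if all(frozenset(p) in edge_set for p in combinations(c, 2))]

V = {"a", "b", "c", "d"}
E = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
triangles = simplices(V, E, dim=2)  # the only 2-simplex is (a, b, c)
```

Filling in every such clique as a solid simplex is what turns a bare graph into the kind of shape whose tunnels and voids topological analysis can detect.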

But algebraic topology really looks at the shape of spaces, and what it does well is understand patterns and the structure of spaces, which is excellent when it comes to things like graphs. By visualizing the blockchain as a structure like this, it becomes easier for us to find our patterns faster. More and more applications envision the blockchain; there were some really nice visuals in the talks this morning, but one example that I put up is that of fraud detection. Fraud detection usually depends on a range of characteristics: patterns in location, buying behavior, the amount of the transaction; and envisioning this as a blockchain of sheaves makes it easier. A sheaf is a gluing of features that has certain local and global properties, and that's where the faster computation comes in. For more motivation for using algebraic topology: last year I proposed a framework for storing social network data on the blockchain. That also is a massive amount of information; lots of headlines about how many tweets, posts, and shares are generated every day. Storing this information and studying it with traditional graph-theoretic tools becomes cumbersome. The default now is to just collect the data and store it, and analysis comes at a later stage.

Why use sheaf theory? There are some tools, like principal component analysis, that can help us study the structure of these interactions, but those are limited. They do not work well on manifolds or curved spaces, but sheaf theory does really well in n dimensions, which is harder for us humans to visualize, but it's very forthcoming and very powerful, using sheaves. On blockchain transactions, there's one other paper I'm aware of that studies blockchain using sheaf theory; there they do work concerning distributed consensus protocols. Regardless, there are other works, purely from the field of mathematics, that look at sheaves. They study things like eccentricity: how well a node is integrated into the graph, or how isolated it is.
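The eccentricity measure mentioned here is a standard graph quantity: a node's eccentricity is its distance to the farthest reachable node, so well-integrated nodes score low and peripheral ones score high. A sketch on a toy graph (not real transaction data):

```python
# Sketch: eccentricity of a node = longest shortest-path distance from it.
from collections import deque

def eccentricity(adj, start):
    """BFS from `start` over an adjacency dict; return the maximum distance."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return max(dist.values())

# A path a - b - c - d: the endpoints are the least integrated nodes.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
ecc_end = eccentricity(adj, "a")  # 3: far from everything
ecc_mid = eccentricity(adj, "b")  # 2: more central
```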

How does that translate for us? It helps us find anomalies and outliers and clusters and all sorts of interesting things about transactions. Again, I use the term transactions loosely for interactions. Smart contracts usually have underlying clauses, just like contracts in a non-blockchain world, where, upon the fulfillment of certain clauses, an action takes place. In terms of a graph, again, depicted this way, when those conditions are executed, another edge is formed. These smart contracts have the potential to balloon very fast, because all these smart contracts lead to more actions, which probably have more smart contracts embedded in them. Quickly, we're looking at very dense point clouds which are difficult to analyze with traditional tools. This scenario is further complicated by the use of IoT devices. The latest statistics say there will be 20 or 22 billion of these devices by the end of next year. So if these devices and various virtual agents are on the blockchain, that results in computationally intractable analysis of the data produced by those devices.

Here is an analogy for sheaves. A sheaf: think of it as a sheaf of, you know, wheat. A sheaf is made up of stalks, and what I have here is a stalk. The power of sheaf theory is that it's so versatile. It's applied to categories of objects. These objects can be heterogeneous. These objects are also modular, and so are the categories.

You can have objects of various elements. Right now, I have a stalk of blocks, and the common spine can be depicted as the spine inside that stalk. Each of those blocks is analogous to the seeds, or the germs, inside every stalk. If you put together a bunch of these stalks and bind them with the smart contract, then we have a sheaf.
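The wheat analogy can be sketched in data terms: heterogeneous records (the germs) that share a common linking hash (the spine) form a stalk, and stalks bound by a smart contract form the sheaf. The field names and the contract label are illustrative, not the paper's schema:

```python
# Sketch: stalks as groups of heterogeneous records sharing a common spine.
from collections import defaultdict

def build_stalks(records):
    """Group records by their common spine (here, a shared hash field)."""
    stalks = defaultdict(list)
    for r in records:
        stalks[r["hash"]].append(r)
    return dict(stalks)

records = [
    {"hash": "0xabc", "kind": "transaction", "value": 10},
    {"hash": "0xabc", "kind": "sensor", "value": 21.5},  # heterogeneous germ
    {"hash": "0xdef", "kind": "transaction", "value": 3},
]
stalks = build_stalks(records)
sheaf = {"stalks": stalks, "contract": "social-profile-v1"}  # the binding
```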
This is an example. On the left, I have a single block, depicted as a stalk. On the right, I have an entire blockchain depicted as a stalk. So immediately, you see the versatility: it's not limited to a single kind of object being linked within a stalk. The individual blue circles represent nodes; each of these nodes can be a transaction. The green circles are features, the features recorded about a transaction; these could be sensors that are monitoring various data points. And then whatever is the common linking point, usually the hash, forms the spine of the stalk in these diagrams.

Here are some applications. The one on the left is a blockchain stalk for logistics and tracking. You see now the structure of the sheaf, where you have multiple stalks bound together. These are bound by smart contracts, which are a reflection of imperfect information and irrationality. This can be tuned. On the right, we have a blockchain for a pursuit-evasion game, where we have maybe a dominant robot and maybe a slightly less dominant robot. Then we have swarm robotics behavior, where robots are measuring different things, and they're all bound by the smart contract. The applications, as we mentioned, are in swarm robotics and smart social robotics; also consensus applications, and crypto exchanges can be analyzed very well using sheaf theory. We have central nodes behaving in different ways. All of this can be captured and analyzed very fast using sheaves and then stored on the blockchain.

There are certain challenges and limitations that are not unique to our framework; they're the result of bringing together disparate disciplines. One is quantum computing. It helps and it hinders. Once the technology can be embedded in regular robots, it, of course, leads to a very fast robot and therefore a very slow human. But on the other hand, if all of this is on the blockchain, key management is a big issue, and quantum cryptography can lead to issues in the provenance of the key: how strong it is, how fast it can be broken. So that is one challenge that affects such frameworks. There is currently work being done in post-quantum cryptography. That work is still emerging, producing ledgers that are resistant to quantum algorithms. So that will help with this challenge. The second is added regulation.

There's a concept in distributed computing: no one really owns who goes first or how the consensus is formed. It's a natural, organic way, based on mathematical constructs. But with social robots and smart contracts, it might be necessary to have some kind of oversight. For example, if a robot is deployed to understand the prognosis of a patient, it's helpful to decipher intent and emotion, and these, when not done well, might lead to catastrophic consequences, especially when things are not very black and white in medical settings.

Another common challenge is that
of diverse blockchain environments. There are so many environments
used across so many applications. It becomes a challenge when we
want a seamless way to interact with
all our devices. That is another limitation. And then there is information overload: blockchain in itself does not store a lot. Therefore, sheaf theory is critical in reducing the amount of information that we gather and store on the blockchain. So in conclusion, this work
brings together smart contracts that
are tuneable, that behave differently in different
environments, and these smart contracts are created with the
sole purpose of making these social
robots more social or human, and
imperfect information and irrationality will be the
tuneable parameters here. And the analysis of this is done
using sheath theory, which is a
computationally efficient tool to analyze large volumes of data.

Finally, storing all of this on the blockchain, not just because of smart contracts, but because of the trusted and distributed nature of the blockchain. For future work, we're looking at how AI technologies, specifically algorithms, can help in looking at sheaf-theoretic implications for analysis; broader work is social analysis through algorithms, which we haven't tackled yet, as we're still in the initial stages. That is it for my talk, and if
you have questions. [Applause] Good afternoon. My name is Jorge Pena. I'm here
with my colleague and we're going to be talking about
utilizing blockchain technology for managing collaboration in heterogeneous
swarms. Before that, just a brief introduction to what our
research is about in this area. We are working in the
intersection between robotics and cloud
computing, so IoT domain. In this direction, our goal is
to design methods that enable truly autonomous, reconfigurable robotic swarms. We think about this from the point of view of how you would deploy an application on a distributed set of cloud servers.

You can think about it like cloud computing: you can take an application, without knowing exactly how it is implemented, and deploy it on a set of servers, to some extent, in a distributed way.
What if we could do the same with large, heterogeneous robotic swarms? What if you could take a set of robots, and take an application that somehow is coded in a way that abstracts the capabilities, so it doesn't necessarily know about the specific hardware or specific sensing capabilities of these robots, and just deploy it? There could be a system that takes into account energy constraints, and takes into account how to distribute the computation and how to distribute the tasks, so that this is not left to the developer of the application. But this is a long-term goal. Before explaining what can be deferred to the blockchain in this idea, let me say the specific questions we want to ask in this paper, and the specific problem settings.

So I have said that we are talking about heterogeneous robotic swarms. What does heterogeneous mean? For us, heterogeneous in this setting first means variable operational parameters. So it means robots that interact with the environment in different ways: robots that can operate in the air, drones; robots on the ground, cars; or why not robots in the water, autonomous boats? And by robots we mean, in general, any sort of autonomous agent in this context. But it's also about differences in sensing capabilities: robots, or agents, that understand their environment in different ways. They can have a geometric understanding or a semantic understanding, or any kind of sensor. And they have variable processing power. How can we take this into account? We can have an autonomous car with probably almost a supercomputer inside, but drones have heavy payload limitations. We cannot put a lot of processors in there. There's that limit.
From our point of view, where would this be? This image shows only a few examples. We have autonomous delivery robots, which are smaller. We also have delivery drones. Or we have automated trucks in the logistics industry. What we want to answer, or what we're trying to discuss in this paper, is how we can achieve consensus in a collaborative, heterogeneous multi-robot system that is potentially ad hoc.

We don't know a priori what these robots are. By consensus we mean achieving consensus in the collaborative effort. We have multiple robots and they want to share data. They want to share data to improve their situational awareness, and they want to raise their degree of intelligence or level of autonomy. To do this, we have to promote high-quality data. But we have constraints in
terms of, for example, bandwidth in the peer-to-peer network.
What data needs to be shared? Which robot is going to send
which data to whom? And when we think about data,
it's not only about the quality of the data or specific characteristics of the data, but the amount. So what size of data are we sharing? This is important not only in terms of the network limitations, but also in terms of the processing limitations of whoever is receiving the data. So how can we take this into account in this collaborative, distributed system? And we believe part of the answer, maybe not the whole answer, can be in blockchain technology.

So in this paper, we are talking about the idea of modeling to
some extent sensing capabilities and processing power of different
robots with blockchain or with technology that is part of the
blockchain stack. And specifically, we are proposing the utilization of proofs of work, or cryptographic proofs or algorithms, in order to estimate processing power. This may be new in robotics, but it has already been used in mining, for example, in the Bitcoin network. It can be used also with robots with limited computational power, because we don't require a full proof of work. We don't require robots to solve the whole proof of work, to find the proper hash; we can use partial proofs of work. We can have an estimation of the processing power of the different robots without being aware of what hardware they have, just by knowing how much of this proof of work they can solve within a certain time. And then, in terms of data, we can somehow utilize the blockchain to characterize data, to try to promote this higher-quality data.

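This partial proof-of-work idea can be sketched in a few lines: instead of Bitcoin-level difficulty, each robot solves many low-difficulty puzzles within a fixed time budget, and the count serves as a processing-power estimate. This is an illustrative sketch, not the authors' implementation; the difficulty and time budget are assumptions.

```python
import hashlib
import os
import time

def partial_pow_score(time_budget_s=0.5, difficulty_bits=8):
    """Estimate processing power by counting how many reduced-difficulty
    proofs of work a device solves within a fixed time budget.
    difficulty_bits is deliberately tiny (an assumption) so even small
    onboard computers solve many puzzles per second."""
    target = 1 << (256 - difficulty_bits)  # SHA-256 output must be below this
    solved = 0
    deadline = time.monotonic() + time_budget_s
    while time.monotonic() < deadline:
        challenge = os.urandom(16)  # fresh puzzle each round
        nonce = 0
        while time.monotonic() < deadline:
            digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                solved += 1
                break
            nonce += 1
    return solved
```

A faster robot returns a higher count for the same budget, so peers can rank processing power without knowing anything about the underlying hardware.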
So in this sense, we can
leverage the immutability and the data integrity properties of a blockchain in
order to have a history of data samples
provided by different robots. And one thing that I have to say at this point is that, as has been mentioned, blockchain in the current state of the art is not ready for low-latency, real-time systems and data exchanges. So we are not proposing that all data is shared through the blockchain; we are proposing that robots are requested to share a sample of that data and store it in the blockchain.

These samples have to be significant enough so that other robots, either at the same time or later, operating in the same environment, are able to find the same features. The features, for example, can be a corner of a building or something else another robot can find, and this data, which we call data stamps, stored in the blockchain, can be utilized and compared with other robots' data. This takes time. It's not trivial. It's initial research, so it's still ongoing, but it has a lot of potential. And this can be used not only to rank the data and choose what is the best source of data, but also to characterize it.

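As a toy illustration of these data stamps, hashed summaries of samples published to the ledger and later ranked by other robots, one might write the following. The field names and the feature used for ranking are invented for the example; they are not from the paper.

```python
import hashlib
import json

def make_data_stamp(robot_id, features):
    """Build a compact 'data stamp': a hashed summary of a data sample
    that goes on the ledger in place of the raw data. `features` stands
    in for any descriptor (pixel counts, point counts, etc.)."""
    payload = json.dumps({"robot": robot_id, "features": features},
                         sort_keys=True).encode()
    return {"robot": robot_id,
            "features": features,
            "digest": hashlib.sha256(payload).hexdigest()}

ledger = []  # a plain list stands in for the blockchain here
ledger.append(make_data_stamp("drone-1", {"corner_points": 48}))
ledger.append(make_data_stamp("car-7", {"corner_points": 512}))

# A robot arriving later ranks data sources by the richness of their stamps.
best = max(ledger, key=lambda s: s["features"]["corner_points"])
```

Because the stamp is deterministic, any robot can recompute the digest and detect tampering, which is the property the immutability argument above relies on.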
It can be images or semantic data, and so forth. It's just a sample from any kind of sensor. It can be geometric data, or just points in space. From this, since all robots have access to the blockchain, they can get an idea of who is able to produce what kind of data and what its quality is. For example, you can think about the number of pixels in the windows or in the building, or, if it's geometric, how many points are in a certain area, whether I'm in a corridor or at a window. But this all sounds very nice. How do we deploy it? It's not trivial, and it's not clear. Here, we pose two main questions from the point of view of deployment. Are we talking about an open, anonymous, permissionless blockchain, where everyone can join? We are talking about potentially safety-critical applications.

These are autonomous cars or
robots that might be operating in the same place as people. So having anonymous agents in
this setting might not be ideal. However, we can have a permissioned blockchain managed by a trusted authority, for example, public infrastructure in a smart city, or mobile network infrastructure. This kind of blockchain not only allows us to manage identities, but we also have a set of partners maintaining it, even in the case of network loss, something that was also commented on this morning. Also, what happens if very few robots are cooperating and there's a higher chance of attacks and security risks? We might have a solution if a vast enough number of agents are participating in the blockchain. The other question is: are we talking about a single blockchain, or are they ad hoc blockchains? It is also not clear how to define this.

In the case of trusted
infrastructure, yes, it can be started by these public
authorities and maintained by them and we can have a single
blockchain, but what to do in the case of ad hoc blockchains?
One option that we propose in the paper is this might be used
by private parties. For example, think of Tesla or any kind of car manufacturer making autonomous cars. They could deploy this in their own fleet. A blockchain is generated when there's a high enough number of cars within a certain area, within some distance, or within the same environment. One of them could automatically
start this blockchain and start collaborating without the drivers even being
aware of that. So it could be from a private point of view.
Just to finish, some discussion and conclusions. There are a lot of challenges in this sense. We believe there are also a lot of opportunities, but there's been a lot of talk today about scalability, about real-time computing, about security, and about what happens in these safety-critical situations if we have a 51% attack or similar attacks.

Then we also have the problem of identity management and
accountability, specifically because we're talking about
autonomous robots that are operating with people. How do we make them accountable, or how do we make sure that we know who did what or what was done wrong? And in general, data integrity: the blockchain in this setting is not used as a cryptocurrency. The value of the cryptocurrency is mainly used for transactions, but what would happen, for example, if a set of malicious robots are the only ones operating, providing fake data and validating it between themselves? There are strategies, or there should be strategies, to find these outliers, based on, for example, some work presented today, but it's not clear and we need more work in
that direction.

Just to conclude, we're
proposing that blockchain could be a good tool to bring consensus into robotic systems. This has been said a lot today, as well. We can use the immutability and integrity of distributed ledgers to manage and classify the data provided by different robots, but we can also utilize part of this technology to abstract the different resources in robots. Usually a robot developer knows what hardware or software it's running on.

But if we want to open these robotic swarms to wider applications, we need to be able to abstract these resources and generate new ways of interacting with and controlling swarms of robots. As was said very well this morning, we have to go beyond human-robot interaction and start thinking about how to do human-swarm interaction and control. That's all. Thank you. [Applause]
Any questions? So good afternoon, colleagues. My name is Vitaly. In the morning, at the beginning of our symposium, the need for new business models on the market was mentioned.

That's what I'm
going to talk about today. First of all, let me ask you a
question. How many of you have used an
Uber in the past month, by show of hands? Most of you did. The
sharing economy became really a common concept in our everyday
life and in our economy. And I think that the same way that digital technologies have led to different relations between the customer and the service provider, the advent of cyber-physical systems paves the way for completely new business models.

This business model is not just about fully automated production lines and enterprises. I'm talking about universal access to robotics capabilities for small and medium businesses and even individual use. There's the famous idea of the robot-as-a-service model. It's a service-oriented architecture that integrates different types of robotic devices. The whole concept is that people refuse to buy software or hardware directly and instead use it as a service. This allows the cost of adopting robotics to be reduced significantly. Let's go deeper into robot-as-a-service research. This consists of basic services, plus some user services that can be added. In the publications that we researched, most scientists actually discussed this in the context of cloud computing. So publications about robotics
as a service are widely represented
by Yeong Chang and his colleagues. They published on
the topic, and they were among the first to use the term "robot as a service." In the first publication they say that robot as a service should have the same functions as service-oriented architecture, which are: the robot, the robot-as-a-service unit through which this functionality is described; the service, which is the cloud provider; and the broker, an interactive shell that gives access to those functions in the cloud.

And then in the next research, those same authors expanded their work and identified a few bottlenecks for robot-as-a-service platforms. They proposed to use standardization and redundancy. They state that some decentralized elements are required to improve the reliability of centralized systems. And then in their last publication, they presented an architectural scheme for their solution with multiple levels, and we thought the most interesting part was the presence of a business layer, which normally manages the fees for the services, but also looks for new business opportunities to improve the whole system in the long run. Some other research on the topic
was published, such as RoboWeb. It was one of the first to use the Robot Operating System (ROS), which gives us a lot of opportunities to integrate different types of robots. In this publication, they really focused on robot-as-a-service applications for emergency response use cases. And the interaction interface they present here is really similar to a publisher-subscriber system. In academic environments, most research on robot as a service is focused on cloud computing. After reviewing these publications, our team decided the future of robot as a service needs to be revised.

That's why we propose a new architecture, and we did, actually, a practical implementation of it. Here it is. We offered broad decentralization, and we offered giving the devices economic autonomy. Decentralization allows us to reduce the computational burden on the agent management system and makes the system more robust overall. Secondly, even though some economic components were discussed before, we think that we need to propose a new way. So every device needs to be able to create transactions on its own, in order to achieve a fully autonomous economic system.

In our case, we organize the work of the robots using a market mechanism: the robot sends a supply message through a special channel, and the client does the same, sending a demand message. We use the InterPlanetary File System (IPFS) to do that. After there's a match between supply and demand, an economic transaction is created in the form of a smart contract in one of the distributed registries. We started working with Ethereum a while back, but now we also support other ledgers. Basically, we can have a high level of protection against malfunction and hacking. In previous publications, some methods of identification were discussed, but specific security issues for robot as a service were not raised, and neither were transparency issues. The use of distributed registries allows transparency of transactions and allows the cloud server to be replaced by independent software nodes.

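The matching step described above, supply and demand messages exchanged over a channel and turned into a contract on a match, could be sketched like this. The message fields and the matching rule are illustrative assumptions; in the actual system the channel is IPFS and the contract lands on a distributed registry such as Ethereum.

```python
from dataclasses import dataclass

@dataclass
class Message:
    kind: str     # "supply" (robot) or "demand" (client)
    service: str  # e.g. "delivery"
    price: int    # tokens asked or offered
    agent: str

def match(messages):
    """Pair each demand with the first affordable supply for the same
    service; each pairing would become a smart contract on the ledger."""
    supplies = [m for m in messages if m.kind == "supply"]
    contracts = []
    for d in [m for m in messages if m.kind == "demand"]:
        for s in supplies:
            if s.service == d.service and s.price <= d.price:
                contracts.append({"robot": s.agent, "client": d.agent,
                                  "service": d.service, "price": s.price})
                supplies.remove(s)  # each supply serves one client
                break
    return contracts

msgs = [Message("supply", "delivery", 5, "robot-1"),
        Message("demand", "delivery", 8, "client-A")]
```

Calling `match(msgs)` pairs `client-A` with `robot-1` at the supplier's asking price of 5 tokens.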
We have independent software
nodes and robots. And then finally, there's a need for a standardized protocol for this. The standardized communication protocol is the Robot Operating System (ROS). We pass the command to the robot using a rosbag file, which describes the functionality of the robot. This is our vision of how
robot as a service should be organized. I'm happy to discuss business
models or robotics market further and I'm happy to answer
your questions.

Thank you for your attention. [Applause] Why do you think this is
something, compared to the other people that are working on
something like that, can you tell us more about that?
Exactly. We think of liability as basically the robot that needs to deliver
something to the client. So basically, whenever a client
has requested service, and pays for it, the robot is liable to
deliver that. By using those peer-to-peer technologies we just described, we can give this economic autonomy to the robot, so it has its own identity, its own wallet, and it's able to create contracts. Now we can make sure
when the contract is executed, the liability is transferred
automatically and we can keep track of that.
You might have kind of answered it through this
previous answer, but if you didn't use the smart contract,
there would be other ways to do it, so why is a smart contract more advantageous than the other ways? Yeah, well, in software development there are a lot of different ways to do things.

But I think it's important to
think of robots as independent economic agents and to give them an identity, a wallet, and the ability to make contracts. By having the smart
contract — a smart contract is essentially a
native contract for robots. That's how I see it. It's
basically an instrument for us to create a contract with a robotic
entity. Thank you. [Applause] So we have a lot of applications
of UAV drone systems nowadays.

From surveillance drones to
search and rescue exploration,
inspection missions and more. But how can we improve these
applications? Hello, everyone. My name is Mario Santos and I'm
going to talk about the implementation of blockchain in a surveillance problem. What's a surveillance problem? It basically consists of a set of completely autonomous UAVs that must patrol certain points of interest, or PoIs. The patrols must be completely unpredictable, and they must be efficient as well. Efficient for us means each PoI must be visited as many times as possible. In a perfect system, a PoI would always be visited by a drone, but this is not possible because we have a limited number of drones, so we want to maximize the number of visits.
But how does this relate to blockchain? Algorithms have been developed and papers published to solve these kinds of problems, but they assume the data is always available and the communication is 100% secure and instantaneous. They don't take into account the communication process between the drones. They usually rely on servers and on non-distributed protocols. So this is a problem, because they're not resistant to single points of failure.

It would be very nice as well to
have anonymity and transparency. For example, I have a drone and
I want to rent my drone to a company that runs a patrolling service. I want every transaction to be transparent. But how can we connect these elements together? We think blockchain is the best answer. So regarding the decision-making algorithm, how a drone decides which PoI to visit, we have two options. We could go with a classical approach, but for implementation in a smart contract this is too complex and too resource-intensive. We're talking about drones and onboard computers, very small, very limited in power.

The other solution is game
theory. It's great because it's highly efficient and low in complexity. It can be easily implemented in smart contracts. The way we thought about this problem was using a utility function. Basically, this function takes as input the positions of all the points of interest in the system, the position of every drone at that particular moment, and also the idleness of every PoI, which means the time since the last drone visit to each PoI. By minimizing this function, the drone can decide which PoI to visit next. Then there's a problem.

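A minimal sketch of such a function, with distance and idleness as the inputs, could look like this. The cost form and weights are invented for illustration; the talk does not give the actual function.

```python
import math

def choose_poi(drone_pos, pois, idleness, w_dist=1.0, w_idle=2.0):
    """Pick the next point of interest by minimizing a cost that grows
    with distance and shrinks with idleness (time since last visit),
    so a far-but-neglected PoI can win over a near-but-fresh one."""
    def cost(i):
        dx = pois[i][0] - drone_pos[0]
        dy = pois[i][1] - drone_pos[1]
        return w_dist * math.hypot(dx, dy) - w_idle * idleness[i]
    return min(range(len(pois)), key=cost)

# A drone at the origin: PoI 1 is farther away but much more idle.
pois = [(1.0, 0.0), (3.0, 0.0)]
idleness = [0.0, 5.0]
```

Here `choose_poi((0.0, 0.0), pois, idleness)` picks index 1, since the idleness bonus outweighs the extra distance; with equal idleness it would pick the nearer PoI.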
How does the system ensure the drone visits a certain PoI? We propose two solutions: a proof of visit or a proof of location. The proof of visit is intended for smart devices, electronic devices with computational power, for example, a beacon. It can communicate with the drone and see that the drone is there, and they both sign the interaction and publish it to the blockchain. So you can be sure a drone visited a certain PoI. But, for example, imagine I want my drone to navigate around a forest or around a tree.

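The proof-of-visit handshake, where both the drone and the beacon sign the encounter before it is published, might look like the following sketch. HMAC with shared keys stands in for real digital signatures, and the record fields are assumptions, not the paper's format.

```python
import hashlib
import hmac

def _sign(key, payload):
    """Keyed signature over the encounter payload (HMAC-SHA256)."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def proof_of_visit(drone_id, beacon_id, timestamp, drone_key, beacon_key):
    """Both parties sign the same encounter payload; the joint record
    is what would be published to the blockchain."""
    payload = f"{drone_id}|{beacon_id}|{timestamp}".encode()
    return {"payload": payload,
            "drone_sig": _sign(drone_key, payload),
            "beacon_sig": _sign(beacon_key, payload)}

def verify(record, drone_key, beacon_key):
    """Anyone holding the keys can check that both parties signed."""
    return (hmac.compare_digest(record["drone_sig"],
                                _sign(drone_key, record["payload"])) and
            hmac.compare_digest(record["beacon_sig"],
                                _sign(beacon_key, record["payload"])))
```

A record forged with the wrong beacon key fails verification, which is what prevents a drone from claiming visits it never made.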
This is simple as well, by using a proof of location. The proof of location can be implemented with proof-of-location services based on blockchains that are available right now. I'm not going to go very deep into this topic, because it's out of the scope of my presentation, but it's a topic that is very well known, and there are many services that provide this kind of proof of location already.

So, talking about the smart contracts: which smart contracts do we need in order to make the system work? First, we need the system manager. It's not a smart contract; it's just the entity running the system. It can be a company or an individual, or it can be a foundation. It can be an open-source project. They need to write three smart contracts. The first one is the subscription. If I have a house and I want to rent a service to patrol my house, I need to pay some tokens, and these tokens are included in the subscription smart contract.

Then there's the decision smart
contract. This smart contract is basically the implementation
of the algorithm I told you before. It makes sure the drone computes the correct PoI to visit next. But the drones can be, for example, owned by a company or contributed by individuals. We need to make sure that the drones go to the optimal points of interest. Why? Because in the end, you have a reward smart contract, basically paying the drones every time they visit a PoI, and because we are paying tokens to the drones, a malicious drone could just fly around the PoIs to collect as many tokens as it could get.

That's why it's important.
In order to embed everything in the drones, as I told you, we're embedding this in the small board computers in the drones themselves; the blockchain is running on the drones. We want all the communication to be handled by the blockchain, and we want every UAV to run a blockchain node. This is what the system looks like. We have a control box, basically just to handle the flight control loop, and we have the navigation. The navigation just decides, for example, how to go from this point of interest to that one; it handles the path. And the decision box actually decides which PoI to visit. This is the implementation of the smart contract. And then you have all the communication handled by the blockchain. Selecting a blockchain that works with these multi-UAV, heterogeneous devices is very hard: for example, Bitcoin, a regular blockchain, is very computationally intensive, and we're dealing with very small boards. These boards cannot compute cryptographic puzzles. They cannot mine blocks like a very large cluster. Our approach was to use IOTA.
It's a very different approach.

This graph has many interesting properties. One of them is that it doesn't require miners. The way it works, each new transaction is attached to two previous transactions, and indirectly all the other transactions are connected to those two. And IOTA is an already well-known cryptocurrency; we can buy IOTA tokens right now. We can implement both the smart contracts and the payments on the same network. This is very interesting and applies very well to this kind of project. Another advantage of IOTA is the way it handles partitions. On a regular blockchain, if we split the chain, we cannot reattach it without deleting one of the chains. In Bitcoin, for example, the chain that survives is the longest one. But in IOTA we can actually do a partition. For example, when we launch a campaign, we launch it in one of these transactions, and we settle every parameter of the campaign there. Then the nodes, which are built into the drones, will only process the transactions related to that campaign, that mission. At the end of the mission, we reattach this subtangle to the main tangle, so indirectly every other node will approve the campaign, and we don't actually waste the drones' resources approving transactions.

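A toy model of this attachment rule, each new transaction approving two existing ones, with no miners and trust measured by how many later transactions sit on top of yours, might look like this. It is a deliberately simplified sketch, not IOTA's real tip-selection algorithm.

```python
import random

class ToyTangle:
    """Each new transaction approves two existing transactions.
    Cumulative weight (how many transactions directly or indirectly
    approve you) is the toy stand-in for trust."""
    def __init__(self):
        self.approves = {"genesis": []}

    def attach(self, tx_id):
        candidates = list(self.approves)
        if len(candidates) >= 2:
            parents = random.sample(candidates, 2)
        else:
            parents = candidates * 2  # only the genesis exists yet
        self.approves[tx_id] = parents

    def _reaches(self, tx_id, target):
        parents = self.approves.get(tx_id, [])
        return target in parents or any(self._reaches(p, target)
                                        for p in parents)

    def weight(self, target):
        return 1 + sum(1 for t in self.approves
                       if t != target and self._reaches(t, target))

tangle = ToyTangle()
for i in range(5):
    tangle.attach(f"tx{i}")
```

Every transaction's ancestry terminates at the genesis, so its cumulative weight is the whole network, while a freshly attached tip has weight 1; this is the sense in which trust grows as more transactions arrive.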
In order to see if it's possible to run an IOTA node on a small board computer, we ran some tests in our lab. We used an ODROID. It looks kind of like a Raspberry Pi; it's very similar, a bit more powerful, but similar in terms of characteristics. On the X axis are transactions per second; these tests used software that the IOTA Foundation provides. It won't be the final software for running the nodes, because that is still in development. They plan to launch it in April next year, but currently they have test software, which we used in order to produce these graphs. Basically, in the case of 30 transactions per second, we ran it for a long time and then took the average. In the end, we obtained this linear behavior, which shows we can actually run both the control and the blockchain on the same board. We also ran tests of our algorithm, the game theory algorithm that I showed you before. This is an image from our lab. We ran it with three drones. On the left side we can see the
PoIs. We basically used a 5-by-5 grid with 25 points of interest. In blue you can see the areas that were recently visited; in red, the areas that were not visited in a long time. And this validates that the drones cover the area very efficiently. Finally, I would like to thank
you very much for your attention. Feel free to ask me
any questions you may have. [Applause] I'm wondering on the Iota
network, given it can split like that and come back together,
what actually defines the main net? You can have lots of these subtangles, but you're saying they can come back together and then be recombined with the main net, so how is the main net distinguished from these others? Basically, the main net is where the majority of the devices are connected.

And it's supported by the foundation, and it's where the IOTA tokens have actual value, similar to dollars. In this image it's just a small partition, but in reality it's a very, very small partition of a huge network. Exactly. Only after reattaching can we pay tokens to the drones, and they will be worth some money, because in the partition itself, which is called a subtangle, they won't be worth any money, because they aren't connected to the main net.

I was just curious about your
experience in terms of trying to implement the UAV with Iota
compared to with Ethereum. I think that one of the benefits of IOTA is that it's supposed to be more lightweight, so you'll be able to put the node onto something more like a single-board computer. I also wondered if you've tried this with Ethereum as well? Yes, currently in our lab we're doing work with Ethereum, and with IOTA as well. What we noticed is that IOTA is very different. It can run; the board we're using is a 32-bit board.

If you try to run Ethereum, the majority of libraries are not compatible. IOTA is way easier to implement. For example, in order to run a blockchain like Ethereum, they would need to somehow decrease the complexity of the proof of work to be able to run on these small board computers; but due to IOTA's design, which doesn't need any miners, we don't need to trade security for being able to run the network only on the small board computers. One more follow-up: have you tried running a light client node on the UAV instead of using it as a full node?
Yeah, we tried.

The position we took at the beginning was to try a completely independent network of nodes. Basically, they run a full node embedded in their software. They don't need any external computer or external servers to connect to the network. For example, if you want to patrol a forest with drones and have no internet connection, the drones can communicate with each other, and at the end we attach the information to the main net, so we don't actually need to have an internet connection all the time, which makes the system more flexible and able to operate in basically every scenario.

You could use a ground station. In our case, it was just a design decision; we decided to go with just the drones. We wanted to be sure that it was possible to implement without relying on ground stations or any other equipment. Maybe one of the best ways,
like, to conclude papers like these, for example, is to give this analysis. That is, for example, what he was just asking about: how IOTA compares to Ethereum, especially because the previous works are based on Ethereum, and what the pros and cons are. I know IOTA has been criticized in the past, right? So it's good to have this comparison, and if the advantages clearly outweigh the disadvantages, maybe you can claim that there's a new way of doing experiments in this field. That would be really nice. Thank you for your question,
actually. We are currently working on it. When we wrote the paper, we weren't really sure we were actually doing that, but we are right now implementing Ethereum, and we're waiting to implement the final version of IOTA, but yes, we plan to publish the final metrics in the final paper.

I will not claim IOTA is better than Ethereum, but in this particular case, we believe IOTA is the best option. Thank you.
[Applause] I wanted to ask just a last
question. Can you give a brief explanation
about the costs of IOTA? How much does a transaction cost? Actually, there's no fee associated with any transaction. The way it works, sorry, I see these light gray blocks: when you want to append a transaction, you connect your transaction to two blocks (at least initially it's always two), and you're directly connecting to and approving these two transactions. There's no mining process. Basically, you do a small part of the work yourself; there's no need for external miners. When you approve two transactions, then another transaction comes and approves yours, and when the network has a high number of transactions, the number of transactions connected to yours will increase, which increases the trust in your transaction.

That's the way it works. That's why it doesn't have miners and
it's very scalable. Actually, the performance increases with
the number of new transactions and there's no fee associated with a
transaction. Thank you. [Applause] >> Dear colleagues, my
presentation is the last one. There was already a talk about the robot painter, and I will explain in more detail in this presentation what we did, and why the new approach we're promoting today, robots as separate, individual economic agents, is a really interesting topic, not only from the technical side, but also for researchers, and for future development and ideas about how our world will look.

Okay. Just a short outline. I will explain all the steps of how we make the robot painter work, show the decentralized application and the details about the planning, painting, and drawing, show the several experiments that we did, and share some ideas about how we will develop this project in the future. Actually, the common workflow right now looks like this. We start from the auction. We open the auction on the decentralized application, for whoever wants to buy the picture. And the picture, the first picture, is defined as hieroglyphs, translated from hashtags from social networks like Twitter, Instagram, and so on. After that, we collect the results of several of these auctions, check what the price was for every picture, and use that for future steps.

Okay. During the auction, the robot starts to paint the picture. And people can vote with their coins on what price is fair for that picture. After the auction finishes, as I showed previously, the smart contracts are created, a rosbag file is put inside, and the owner of the picture is also put into the smart contract. When the smart contract is finalized, that means the owner should get his product, his picture.

How do we synthesize the initial source for the picture, for the hieroglyph that will be drawn on the paper? In the first implementation, we did a Twitter search with the hashtag #Satoshi. After that, we create the list of words related to that hashtag on Twitter, randomly choose k words from that set, and then translate them. The next step is creating the picture based on the image of the hieroglyph. We convert the hieroglyph to an image, then skeletonize it, find the bounds of the picture, and try to cluster it, using OpenCV and several additional libraries. After that, we get the path the robot should follow to draw this picture. And when we have that trajectory, when we have the path, we can follow it using the usual approach for a manipulator task. Here is a sketch of the hardware and software that we use for drawing and for communicating with the blockchain connected with that auction.
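The hashtag-to-trajectory pipeline just described (search, pick k words, render the hieroglyph, skeletonize, extract a drawing path) can be sketched roughly as follows. This is a standard-library sketch with invented names: the real system queries Twitter and uses OpenCV skeletonization, while here the "image" is a tiny hand-made raster and the path is a greedy nearest-neighbour ordering of ink pixels.

```python
import math
import random

def pick_words(related_words, k, seed=None):
    """Steps 2-3: randomly choose k words related to the seed hashtag."""
    return random.Random(seed).sample(related_words, k)

def glyph_to_path(glyph):
    """Order the glyph's ink pixels ('#') into a single stroke path by
    repeatedly jumping to the nearest remaining pixel. The real pipeline
    skeletonizes the rendered hieroglyph with OpenCV before this step."""
    points = [(x, y) for y, row in enumerate(glyph)
                     for x, ch in enumerate(row) if ch == "#"]
    path = [points.pop(0)]
    while points:
        nearest = min(points, key=lambda p: math.dist(p, path[-1]))
        points.remove(nearest)
        path.append(nearest)
    return path

words = pick_words(["ledger", "tangle", "mining", "wallet"], k=2, seed=1)
stroke = glyph_to_path(["###",
                        "#..",
                        "###"])  # a crude 'C'-shaped glyph
# `stroke` is the ordered list of points the manipulator would follow.
```

The resulting point sequence plays the role of the trajectory that is handed to the manipulator's path follower.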

And we sense what is happening on the picture: we look at it from above, get the image from a usual camera, and try to determine whether it's okay, whether the trajectory is being followed correctly or not. We used a RealSense camera here. Next, the information from the sensor goes to the main software, to the computer that controls the robot, and after hieroglyph selection and image processing, it starts the trajectory execution. There's the usual local path planner and task execution, and we used a KUKA robot for that.

There's special software used to connect it with the Robot Operating System. It's a special interface, and it gives control to the drivers of this robot. Here you can see what the interpretation of the hieroglyphs looks like for the robot.

This is the scheme of the auction. We start from the initial point: the painter sends a transaction saying that he has started the painting process, and you can participate in the auction and make a bid for this picture. We collect the bids during one day, 24 hours, and when it's finished we choose the maximum price for this picture and finalize the smart contract for that deal, covering all the information with the smart contract, the liability smart contract.

This is the web interface, how it looks. Soon we will have more experiments, and I hope we can ask you to join and participate in this process to collect more data, because right now it looks like... oh. Sorry. Does the video work? No? Okay. Yeah. That's what Eduardo will start to show. And here I want to give a little more explanation. This is the robot. This is a KUKA. It was created mainly for soldering, but we re-equipped it for painting.

I think it's not a bad thing for this robot to be a painter rather than a solderer. Okay. And here is a demonstration of how the initial information about the keywords is collected from Twitter. After that, we translate the word into the hieroglyph, run the search for the bounds of the picture, and when we finish this search we send the trajectory to the robot. And here is the painting process, how it looks. During the painting process, participants make bids in this improvised application, and whoever gives the maximum bid gets the picture. Okay. But right now, we're thinking about not just taking the hashtag and drawing it. We are on the way to adding some AI elements to this robot, since our topic is robotics and AI. Here, we start to work with information from Instagram, and the process will look like this. We post some picture to Instagram and collect information about who likes it. When we have collected the information about who likes it, we check the areas of interest of those persons, and based on the information about their interest areas, we synthesize the new word, the new data.
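The aggregation step just described, collecting the likers' interest areas and surfacing the hottest topics, can be sketched like this (hypothetical data and function names):

```python
from collections import Counter

def hottest_topics(likers_interests, top_n=2):
    """Count how often each interest area occurs among the people who
    liked the post, and return the most common ('yellow') topics."""
    counts = Counter(topic
                     for interests in likers_interests.values()
                     for topic in interests)
    return [topic for topic, _ in counts.most_common(top_n)]

likers = {  # hypothetical likers and their interest areas
    "user_a": ["robotics", "art"],
    "user_b": ["art", "crypto"],
    "user_c": ["art", "robotics"],
}
print(hottest_topics(likers))  # ['art', 'robotics']
```

The hottest topics, combined with the price data from previous auctions, would then seed the next word for the robot to paint.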

And once we know the interest areas of the participants, we can visualize them: purple means less relevant topics, and yellow marks the hottest, or closest, topics, what is really important for the users who like this picture, who follow this Instagram. Using this data and combining it with the data about prices from the previous auctions, we can synthesize different proportions, some new words. Right now it's words, but in the future I hope it will be a full picture. Okay. That's all. Thank you a lot. [Applause]

>> Any questions? If not, coffee break.

>> What was the price?

>> The highest price was, just let me check, 0.7 ethers for the picture. It covers all the expenses for the process, I mean for the drawing process, not for the robot, of course. [laughs] Any more questions?
>> What I find most interesting about this research is, of course, the fact that you have a new loop. You put the robot not only into the labor part but also into the capital part. What I think is very, very interesting is the fact that now you could have autonomous entities with, for example, only one robot as the employee. Right? Just as you have entities trading on the stock exchange with an algorithm, now you can have corporations in which the only real employee is a robot. We have seen many cases in which artists, especially, are trying to think about what happens if you, as an investor, put some capital up front to set up the system, but then it's the robot that, by doing well and understanding the trends, gets more benefit out of the auction, benefits that could pay back this initial investment. Once this initial effort is paid off, at some point in time, the robot can buy itself out. [Laughter] Right? And it's interesting, you know, what comes from there. People here at MIT are also thinking about this.

One of them is called Dazza. He's a guy who will come at 4:30 to give some ideas about computational law and how this thing could go. But yeah. Interesting footnote. You have a question, sir?

>> About three weeks ago, the U.S. Patent and Trademark Office issued in the Federal Register a list of 13 questions looking for information about how to apply trademark law to intellectual property, like, for instance, your drawings, that is generated by autonomous systems. You should all contribute.

>> Yeah, yeah, maybe we should be talking to these people. Yeah. I think the paper session is over. Now we have some coffee. The industry session will start at 3:00 p.m., so we have 15 minutes. Let's get back here in 15 minutes. Thank you. [Applause] [Break]

>> Hi, everybody. Welcome back from the coffee break. I like the coffee break because after lunch you usually have a glucose dip. I know because I teach a lot of people; at 2:00 or 3:00 they start to nod off after lunch. I'm going to be talking about a specific topic called digital assets, or digital investment assets, and discussing where this whole piece is going and how it's converging with artificial intelligence and blockchain.

There's no robot talk here. Maybe at the end I'll tie it into some robotics; we'll see. So, a quick bit about myself. I've spent over 20 years in technology. I teach graduate-level blockchain, artificial intelligence, and machine learning at a bunch of different schools in New York, and I run Chainhaus. We're building some products in the space. I'm a coder; I spend time at night coding, and I've been doing that for over 20 years. In terms of our services, we do education about blockchain, and we do events. I run one of the largest blockchain meetup groups in New York City. We also have a decentralized finance area, and we're building some products in that space. One of them is something I'll be talking about today, which is digital assets. And we're looking for cofounders, if anybody is interested; please hit me up. These are some of the customers we deal with, people we've educated, and companies we've taught blockchain and AI to.

We just wrapped up a project with the World Bank: we took mango farmers in Haiti and put them on the blockchain, and helped design and architect that system. I'm also involved with a company that came out of MIT, a blockchain company building a mortgage platform that will move into mortgage trading on a DLT, which is effectively a digital asset. I'm also writing a book for O'Reilly, which may be out in January or February.

I have about 80% of the book down. This is the meetup group, if you're ever in New York City. It's over 5,000 people, fairly active, an AI and data science group and a blockchain group. The benefit for me is that I get a ton of market intel; we learn a lot about what other people are doing. We have a bunch of events coming up, including an event on AI and art in about two weeks. If you happen to be in New York City, or if you want to present, please let me know.

I'll set up some baseline first and then talk about digital investment assets, where they're heading, and where there may be commercial opportunities for people looking for them. First, in the blockchain space, there's a significant amount of noise. You have people posting things like this. This is the real Roubini, a leading economist in the U.S. This is pinned to his Twitter account: blockchain is a failure. To some degree, there's some truth to what he's saying; maybe there's a little bit of attention-grabbing here, too. On the flip side, you have people who are advocating a different picture, or think there's going to be a different picture.

Somebody called the Pomp, who believed at one point that Bitcoin would hit a hundred thousand dollars by the end of this year. Obviously, that has not happened. The secret, he says, though, is that the prices don't matter. I don't know what that means; I think that's all that matters. That's my view. So all of this is happening. As a businessperson, and as a teacher, educator, and professor, I try to avoid this stuff and find the diamonds in the rough, the business opportunities. We work with a bunch of different companies, and we start to see patterns and trends around where there's money to be made. So there is a quiet storm where people are adopting blockchain and blockchain is moving forward. It's not very sexy, but it's occurring, and it happens to be in the enterprises right now. If you look at some of the news articles that have come out: HSBC just announced they're going to be tracking $20 billion worth of digital assets and using certain blockchain technologies. I want to cover that a little bit.

It's not a small amount of money. It's not a large amount of money when it comes to GDP or market size, but it's a good indicator of where things are headed. You see a similar effect with FX transactions: a drop in the bucket of total FX volume. The Bitcoin market cap is about $130 billion, and they trade $25 billion a day. So this is not a big deal yet, but it is the beginning. This is the starting point. The banks are starting to look at this. What are they using? What kinds of technology? They're using blockchain, but it's not typically the blockchain that you might automatically think of.

So these are some of the chains they're using. They evoke emotions in people, certain types of reactions. Some of it is justified and some may not be, but these blockchains are getting out into the market and getting adoption. People are paying licensing fees for them, or deploying them into production; there are systems going into production based on some of these. Now, whatever your opinion on Libra and things like that, I try not to take an opinion, and just look at what the situation is and find what's factual and not factual around it. What is blockchain? I don't need to do a blockchain class here, but I think it's important to understand that some of the technical terms around blockchain are not translatable or understandable by businesspeople.

If you say immutable and consensus and these kinds of terms, their eyes will start to glaze over. What I try to do, because we talk to a lot of executives, is say that blockchain is a platform for digital exchange (you saw double spend and all that kind of stuff), and it's a place where economic agents have incentives and disincentives to get involved. What those are is a different question. And enterprise blockchain is more about mediation than disintermediation: how do you get people to work together, rather than just removing the middleman? Some people get religious on me and get angry, but it is what it is. So there's a battle between the two. I spoke at the Bitcoin conference here at MIT six months ago, and it was a bit of a ruckus, because I said the permissioned world and the permissionless world are almost indistinguishable. They're converging. If you look at a blockchain, or DLT, called Corda, it's a "permissioned chain," but they have a permissionless version of it open to the public.

Certain things are baked in. This is happening everywhere. If you look at Hyperledger, there's an Ethereum client donated to it by the Ethereum alliance, designed to be enterprise friendly, and it can be both permissioned and permissionless, in both use cases. You see this as a trend: start off permissioned, then expand and become permissionless. Eventually, what does that mean as things start to expand? If you have a large number of participants in a permissioned system, it effectively is a public system; you just have to get permissioned into it, like getting a library card. Then the other question is decentralization. Even the Ethereum world and the Bitcoin world are still grappling with decentralization. A ton of power is concentrated among a small number of blockchain participants: miners and things like that. So decentralization overall, from a purist's point of view, is still somewhat elusive.

It's still somewhat of a pipedream. There are shades of centralization and decentralization. I personally try to avoid that debate, which happens quite a bit, especially in academia, where people take really strong positions, and focus instead on the things that really matter. Here is a paper that refers to the trilemma of blockchain: you can have only two of the three properties. Self-sufficient, rent-free (meaning you don't pay for transactions), or resource-efficient (no mining costs and things like that). You can only have two of the three, not all three. Which two you pick depends on your use case, your business case. So the permissioned and permissionless worlds are converging, to the point of very little difference between the two. Five years from now, the public blockchain world will have become more permissioned, because of regulation and things like that, and the permissioned world will have become more open.

It's converging. What matters, especially in the things we do, is: does this technology move civilization forward? Does it move a use case forward? That's the only question we ask. We don't ask whether it's permissioned or permissionless. Does it move things forward and solve a problem? Then we adopt it. That's our view. Now, when you talk about transaction rates for blockchains, I think most people here are familiar with the rates for Bitcoin and Ethereum, which are relatively slow, while these other chains, like XRP, or IOTA, which somebody talked about today, are significantly on the high side.

If you look at Visa's annual report, that's 124 billion transactions for the year; this is from 2018. They're working at a different level compared to Bitcoin doing seven transactions per second. You do the math: it's a drop in the bucket, and not even that. And Visa's peak capacity is far higher still; they hit those peaks over Christmas and so forth. So these public chains have these issues. I call them trade-offs. There are some issues you're willing to take on and certain benefits you're willing to get; sometimes you trade one off for the other. Whether you go for a permissioned chain or a permissionless chain, there are certain trade-offs you make. If you decide to make certain trade-offs, you will arrive at the conclusion that you want to build a platform around, let's say, a permissioned chain.
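To put rough numbers on the "do the math" remark above about Visa's annual volume versus Bitcoin's throughput:

```python
# Figures cited in the talk: 124 billion Visa transactions in a year
# (2018 annual report) versus roughly 7 transactions/second for Bitcoin.
visa_tx_per_year = 124e9
seconds_per_year = 365 * 24 * 3600

visa_avg_tps = visa_tx_per_year / seconds_per_year
bitcoin_tps = 7

print(round(visa_avg_tps))                # ~3932 transactions/second on average
print(round(visa_avg_tps / bitcoin_tps))  # ~562x Bitcoin's rate
```

Even Visa's average rate, before considering peak capacity, is hundreds of times Bitcoin's throughput, which is the gap the talk is pointing at.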

You get certain benefits. One of the advantages is that you can start to build rich and deep types of tokens, or digital assets. That's what a lot of the enterprises are doing. That's occurring now; we're engaged with some of these organizations. This is what I really wanted to cover in the next 10-15 minutes. So, number one: assume (and there's economic theory and research that supports this) that supply can create a market; if you have supply, not demand, that can induce the existence of a market. If that's true, right? And if transactions per second correlate with capacity, and capacity is positively correlated with supply, right? And if the size of a market indicates how much data is emitted by that market, so the larger the market, the more data it emits; and if we agree that where there's sufficient data, AI comes, and where there's no data, there's no AI, right? If we agree to all these things, then... sorry, my clicker is not working.

Then we can say that the rate of AI adoption is correlated to the capacity of a market. The more I can do in a market (more transactions, deeper transactions, faster transactions, lower fees), the more data I will produce, and the more likely I can have AI come along and do something with that data, interact with that data. This is where the financial world is heading. They see exactly this: I can build certain types of assets, which I'll talk about in a bit, and then I can start to employ certain types of AI. There's a breakdown; it comes down to four different steps in how the industry is moving forward.

Some of those steps have already occurred, some we're still in the middle of, and some we can clearly see coming. The first era, or step, whatever you want to call it, is the proof-of-concept era. We're kind of past that, especially in the permissioned DLT space. People say: hey, we've done the proof of concept; we see this makes sense; now I want to build something and take it into production. In era two, people start toying around with smart contracts and tokens. Any blockchain event is pretty much all about smart contracts and tokens. Then era three, which we move into now, over the next two years, is native digital assets and machine learning applied to them. And finally, era four: AI. I don't mean AI like some of the stuff we saw today, but AI directly involved in the blockchain, not something external. So around the first quarter of 2019, a lot of enterprises said: we did our proofs of concept in 2018, or we're wrapping them up; we like what we're seeing with these DLTs or blockchains we're using, and we want to build a team and go into production.

At that stage, there was no AI or machine learning needed. To even say you wanted to bring machine learning into such a project would require extreme audacity; there was no real data for you to apply it to. We're now in the phase of adoption of low-hanging use cases. I had to put acronyms there. These sectors are heavily investing in blockchains now: enterprise blockchain, use cases, and applications. We see that happening because we're getting the calls and emails. Again, I got an email today from a major bank saying, hey, can you come over and talk about DLTs? An important point: at this stage, tokens represent something. A token represents an asset, or a right, or a utility. That's what we think of when we think of tokens, but that's not where tokens are going to stay. A lot of the DLT projects right now are about cost reduction. How do I eliminate or reduce reconciliation costs? That's hundreds of billions of dollars in costs hidden in most businesses, because businesses think that's just how you do business. Those reconciliation costs can come out.

People are using spreadsheets, rekeying stuff in, all that kind of stuff. At this stage, AI and ML become applicable. It's: hey, this is something we can start to combine. How do I figure out certain things using, let's say, machine learning and token economics? The third era, which we'll move into especially at the beginning of next year, is native digital assets. These are digital assets that used to represent things, but now they are exactly those things. For example, take a credit default swap. You may have a token that represents a financial instrument, like a credit default swap, but that token is going to go away. The credit default swap itself will be born on the blockchain. In fact, I will start to be able to create my own types of financial instruments on a blockchain.

We call them smart contracts, but it's beyond smart contracts: it's smart contracts that are themselves tradable. I'm trading those smart contracts. Tokens in the Ethereum world are just a number in an allocation table. You have the Ethereum address and a number associated with that address. That's what a token is.
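That allocation-table view of a token, just addresses mapped to numbers plus a transfer rule, fits in a few lines. A minimal sketch in Python rather than Solidity, with invented names:

```python
class Token:
    """A token as an allocation table: addresses mapped to balances."""
    def __init__(self, issuer, supply):
        self.balances = {issuer: supply}

    def transfer(self, sender, receiver, amount):
        # The whole 'token' is bookkeeping on this table.
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

token = Token("0xissuer", 1_000)
token.transfer("0xissuer", "0xalice", 250)
# token.balances is now {'0xissuer': 750, '0xalice': 250}
```

An ERC-20 contract is essentially this table plus events and approvals; the point being made is that the token itself carries no intrinsic behavior beyond the ledger entry.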
But tokens will shift from representing an underlying asset or underlying thing to becoming that underlying thing.

Right? I will now create a digital asset. I can design its economic behavior, say "this is the economic behavior," and push it out on the blockchain. At this stage, AI and ML become a strategic competitive advantage. Companies say: anybody can create digital tokens and digital assets, but now I need a sustainable advantage, and this is where AI and ML will converge with blockchain. There will be tons of data at this point, all sitting on the blockchain. As much data as is sitting on Ethereum, a lot of it is garbage. There's a lot of garbage data on Bitcoin, too. The fact that you can extract that data and mine it is of little use.

What are you going to do, predict the price of Ethereum? There's not a whole lot you can do. But if you have very rich trading data, where I'm creating and trading all these types of assets, you have an enormous wealth of data you can mine and do predictive work with. For example, there's an organization that creates standards for complex derivatives.

If you wanted to issue a credit default swap, you would use an ISDA contract; let's say it's a template, and you fill out the fields in the template, and, boom, you have a credit default swap. They just announced they've created smart contract templates, and you can start to use these to create your own credit default swap that is totally and natively digital. It doesn't refer to a contract; it's not a token that refers to a contract; it is itself a digital asset. So it's digital first, and maybe not even paper at all. And then you see semantic analysis of the legal documentation, which is ultimately about training AI and machine learning models to eventually produce their own legal documents. The ability to do semantic analysis is the first step for these models to be trained, and then to reverse out and produce documents themselves.
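The fill-out-the-template workflow described above can be sketched as follows. The field names here are hypothetical, invented for illustration, not the actual ISDA schema:

```python
# Hypothetical template fields, not the actual ISDA schema.
CDS_TEMPLATE = {
    "instrument": "credit_default_swap",
    "reference_entity": None,
    "notional": None,
    "spread_bps": None,
    "maturity": None,
}

def fill_template(template, **fields):
    """Fill in the template's blanks; refuse to issue an incomplete swap."""
    instrument = {**template, **fields}
    missing = [name for name, value in instrument.items() if value is None]
    if missing:
        raise ValueError(f"unfilled fields: {missing}")
    return instrument

cds = fill_template(CDS_TEMPLATE,
                    reference_entity="ACME Corp",
                    notional=10_000_000,
                    spread_bps=120,
                    maturity="2025-12-20")
# `cds` is the natively digital instrument; no paper contract behind it.
```

In the natively digital version, this filled-in structure is the asset itself, rather than a token pointing at a paper contract.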

This is from Gartner: by 2025, we'll see a $176 billion blockchain market; by 2030, that's $3.1 trillion. That's not necessarily from public blockchains. This is from enterprise blockchains and enterprise adoption, the movement of money and value through enterprises, which is significant. And this is the ramp towards, as was mentioned earlier, digital assets. I can design these assets, visually or programmatically. I design the asset's economic behavior, or I let AI do it. They run the risk models and the Monte Carlo, and then I click a button called publish, and I publish the asset on the chain, and it gets traded.

What happens then? Exotics, these esoteric financial instruments that are not the norm, become the norm. Meaning an enormous amount of financial creativity comes into digital asset creation: hey, I want to create an asset that is pegged against the Treasury, but I want to cap it here and reference that and do this, and I create this really interesting digital investment asset that has never existed. I can do it rapidly. I can prototype it rapidly. If I like it, I run it through my models. If I like it, I publish it, and it's out on the market fairly quickly. And it's traded. Then I can apply machine learning. So I can put a letter of credit out as a digital asset, a digital investment asset.

I publish the letter of credit, and I can do predictive analytics on it: when do I expect it to be cashed out by the issuer or the beneficiary, and all kinds of things. I can start to apply machine learning to that. That's probably the next couple of years. Then we move into this other, probably a little scarier, world, where AIs become economic agents. The AIs start trading, the AIs do the risk analysis, and the AIs even design the assets, based on the data they may have about the counterparty on the blockchain.

The idea of smart contracts goes away, because the term smart contract is a bit of a misnomer; it actually means nothing. These smart contracts are basically AI agents plugged into the blockchain. Because blockchains will extend themselves and be able to be invoked by external things, like oracles, smart contracts basically become AI agents. The blockchain gets pushed down into the stack, just basically part of life, not something we think of consciously, the way we don't think about the internet consciously anymore; maybe ten years ago we did. And the need for AI shifts: instead of being a competitive advantage, those who don't have AI and ML cannot even enter the market.

Because the incumbents will be significantly stronger than the new entrants. And we'll start to see that. And what are the agents trading? Digital investment assets. An AI creates an asset when it senses there is a market: it detects there's a market, structures the asset, runs the models on the asset, and publishes the asset. Another AI agent purchases the asset, and it's traded. How do we know that will happen? Because it happens now.

There's algo and high-frequency trading. You have algorithms plugged in. They just don't happen to be using blockchain, because it doesn't make sense to use blockchain: there's no blockchain that can support the transaction rates that high-frequency and algorithmic trading require. With algorithmic trading, the way you do price discovery is flash bids. You keep doing that until somebody nibbles. That's how you find your price.
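The flash-bid loop just described, post a bid, cancel it, post a better one, until somebody nibbles, amounts to a probing search against an unknown reservation price. A deliberately simplified sketch (real HFT strategies are far more involved, and the names here are invented):

```python
def discover_price(counterparty_accepts, start=100.0, step=0.5, max_probes=10_000):
    """Flash successively better bids until the counterparty nibbles.

    `counterparty_accepts` stands in for the market's hidden reservation
    price; each call represents one flashed-and-cancelled bid."""
    bid = start
    for _ in range(max_probes):
        if counterparty_accepts(bid):  # somebody nibbled: price discovered
            return bid
        bid += step                    # cancel, re-post a better bid
    return None

# A hidden reservation price of 103.0, for illustration:
price = discover_price(lambda bid: bid >= 103.0)  # -> 103.0
```

The point in the talk is about rate, not logic: each probe is an order placed and cancelled in microseconds, which is exactly the throughput no blockchain supports today.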
That's not really possible with any blockchain today. Once we get there, maybe in a couple of years, the blockchains most amenable to doing that, these enterprise blockchains that can sustain high TPS rates, will be the places where you can do this high-frequency trading, and basically the trading is automated.

It's an easy step for these AIs to start trading and to use the data they can collect directly from the blockchain. So where is this all going? We are going towards a world where AI will create assets and trade assets, and these assets will be traded on a high-speed blockchain: digital investment assets, DIAs, traded on a high-speed blockchain. When that happens, maybe five or ten or fifteen years from now, that's the direction we're headed. And the blockchains that support that type of capacity are going to get there faster, are going to get the AI adoption faster, are going to need or require the AI adoption faster.

An example of AI trading is a project on GitHub called Genotick. Basically, it creates completely random algorithms (take yesterday's closing price, let's say, add two, subtract five), and each algorithm is associated with a single bot. It spawns a million bots, and these bots trade. The bots that trade well set a benchmark and spawn additional bots; the others are killed off. Then they're evaluated again and improve on their performance. This work has been ongoing for some time, at least five years. You'll start to see projects like this get onto high-speed blockchains: say I want to trade native digital assets, and I want these bots to find my alpha, the profits above the market return rate. Also getting involved in the digital investment asset space, or the digital asset space, are central banks.
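The spawn-evaluate-cull loop described above for Genotick can be sketched with a simple genetic algorithm. Everything here is invented for illustration (a "bot" is just a random linear rule), far simpler than what Genotick actually does:

```python
import random

def make_bot(rng):
    """A bot is a random rule: signal = a * yesterday's move + b."""
    return {"a": rng.uniform(-2, 2), "b": rng.uniform(-1, 1)}

def fitness(bot, prices):
    """Profit from going long/short on the bot's signal over a price series."""
    pnl = 0.0
    for t in range(1, len(prices) - 1):
        signal = bot["a"] * (prices[t] - prices[t - 1]) + bot["b"]
        pnl += (1 if signal > 0 else -1) * (prices[t + 1] - prices[t])
    return pnl

def evolve(prices, population=60, survivors=10, generations=15, seed=0):
    rng = random.Random(seed)
    bots = [make_bot(rng) for _ in range(population)]
    for _ in range(generations):
        bots.sort(key=lambda b: fitness(b, prices), reverse=True)
        best = bots[:survivors]  # top performers set the benchmark
        # The rest are killed off; survivors spawn mutated offspring.
        bots = best + [{"a": b["a"] + rng.gauss(0, 0.1),
                        "b": b["b"] + rng.gauss(0, 0.1)}
                       for b in rng.choices(best, k=population - survivors)]
    return bots[0]

rng = random.Random(42)
prices = [100 + 0.3 * t + rng.uniform(-0.5, 0.5) for t in range(60)]
champion = evolve(prices)
```

On a gently up-trending series like this one, selection quickly favors long-biased rules; swap in real market data and the same loop is the kernel of the approach described above.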

They use systems called RTGS and LVTS, which are the core systems used to reconcile and net out payments. If they also start using blockchains, there may be a possibility for AIs to plug into that as well. This is a statement on how the central banks are starting to position themselves. The U.S. Fed says they're not really involved, but I think they are; other central banks around the globe are starting to look at this. I'm involved in advising a company that works with a central bank, as well.

In conclusion, DIAs represent a real commercial opportunity for blockchain and AI to converge with real impact. There's real money to be made. My robot tie-in is that they'll show up at your door when you can't make the margin call. That's all I have. Thank you. I'll take any questions if anybody has any.
I have the slides up there. Yes? [Applause]

>> Thank you. Actually, I have two questions. The first one is more of a design question. I'm just trying to think about how you piece blockchain and AI together. Are you thinking more about using blockchain data to train AI models? Although if you have a lot of data in a blockchain, that may not be very sustainable from the blockchain's perspective. Or are you using blockchain as some kind of secure storage for AI, for example, storing model programs or checkpoints of AI on the blockchain? I'm wondering how you're thinking about putting those two together.
It depends what blockchain you use. Petabytes of data for certain
blockchains are not a problem.

For certain other blockchains,
it might be a problem. Your point, yeah, this AI would
mine that kind of data. It would be an enormous amount of
data to mine. And they would build models off of that.
Doesn't mean that's the only source of data they would use,
but that's a big part of that. In the academic world, it's very
easy to do — and I know because I'm
in the academic world.

It's easy to create models off of
clean data. In the real world to get clean
data is very, very difficult. But I think there's a point I
put up there. You have a cryptographically
ensured, consistent data model. Right? All of the nodes must agree
that's the data model. Therefore, that helps with the
data quality. Thanks. And my second question
is more about the fourth era.

Where AI agents become the
norm. I think one of the risks a lot of people have about AI is really
deep neural models that end up
becoming a black box that makes it difficult to debug. I was
wondering, given the scenario, let's say that you have an AI
agent that has developed this very complex structured exotic
derivative, and then decides to sell it to another AI
agent buyer, if there's some type of complex negotiation, do
you think that this could be a risk
that this exotic derivative structure
could become something that may not be explained or
interpretable? Yet it's been negotiated by two
very deep AI agents? Yeah, I think that's a very good
point. That's a very fair point. I think that is definitely a
possibility. So today, when you design —
like, you have financial engineers that design an
instrument. That takes a long time.

It can take six or eight
months or even longer than that. And then you have to go through
compliance, and actually before that you go through risk models, run Monte
Carlo simulations and all kinds of stuff. And then things can still fall
through the cracks, even though, relatively speaking, those models are simple, let's say,
compared to an instrument or model the AI
produces. There's all these correlations and I'm going to
produce this here now, the digital asset, based on what I
see in the market. This produced an alpha for me. Could
these slip through the cracks and not catch something and
potentially become a financial contagion? Yes. We see that in
the market: one bad algorithm can rip apart a
trading floor. I have a question. You are tokenizing the assets,
on smart contracts.

So do smart contracts create it
or use it — and if so, if smart contracts
are being used, like, in the open, how are you
approving them? — In era 3 — or era 2 — when I was
referring to tokens, I meant it in a very generic way. It could be
any kinds of things. The point is you have a digital — if you look at a token, what a
token is is a hash map. It's a hash table. The key is an
Ethereum address, and the value is some arbitrary number. That's
basically what a token is. Right?
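The hash-table view of a token described above can be shown literally. A minimal sketch, assuming a toy in-memory ledger rather than a real ERC-20 contract; the addresses are illustrative:

```python
class Token:
    """A token ledger as described in the talk: a hash map
    from an address (the key) to a balance (the number)."""
    def __init__(self, supply, owner):
        self.balances = {owner: supply}  # the whole token is this dict

    def transfer(self, sender, receiver, amount):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

# Hypothetical addresses for illustration.
t = Token(1000, "0xAlice")
t.transfer("0xAlice", "0xBob", 250)
```

Everything else a token standard adds (events, allowances) is layered on top of this one mapping.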
Yeah, my question is because you were saying you were
creating, it seems to me, probably I misunderstood, that
you're creating those contracts on the go. In my experience,
creating automatically the contracts,
nonstandard contracts, you have to have it
right, so you have no security issues and stuff.
Yeah, because you're coming from the Ethereum world. I do a
lot of Ethereum. That is a problem in the Ethereum world. Then you have to go to OpenZeppelin
and get audited and all that stuff,
and they bust you up. But in another world you have a
lot of tools because you're using the full strength of a
well established programming language.

Right? So the
ability to create that is there, and the risks on that are lower. There
may not be a cryptocurrency involved, let's say ether,
native on the chain, even though
that digital value is something that's traded; there may not
be a cryptocurrency involved. The complexity around
creating a smart contract in the Ethereum
world is not one-to-one translatable into the DLT
world, even though there are complexities there. Your point
is correct there are complexities, but it's not the
same as in the Ethereum world. In the Ethereum world, one tiny
mistake, one that even some of the best guys can't
catch, and, boom, you've locked up ten
million dollars' worth of ether, right? And boom, you're done. In the DLT world, if somebody
does a bad trade, you pick up the phone and call the guy. You
say, dude, I know who you are, and that trade was bad, and
we're going to court. Right? So there are other types of
circumstances around it. Okay. Thank you.
[Applause] Thank you, Eduardo. By the
way, this is my first time in Boston.

I came to New York back
in August and I must say it's a lot colder
now, but there's definitely a traditional Christmas feeling because in
Australia it's summer, it's hot, so it's
great to be here. My name is Emma-Jane. I'm part of AEROID technologies,
an Australian-based space
company. Over the next few years,
machines are going to start playing a much
bigger and much more complex role in
industries and each one of our lives. I also want to look into the
future and convey an idea of how
self-verifying communication protocols, modularized swarm robotics, and decentralized autonomous
organizations are going to change where we are now.
Let's start with what industries have achieved and
what we know so far. Software is crucial for
communication, verification, and connectivity. Lately, lots of the research and development has been around
applying secure immutable technology
layers, like blockchain. Most usage of DLTs in robotics focuses on maintaining a secure
log, serving as a ledger, storing
events, and validating and publishing information to the blockchain.

Blockchain is still in its early
development. And yes, although improvements
have been made on individual
blockchains, integration into the real world is still in its early prototype stage. So talking about applying
distributed ledger technologies into robotics, the main problem that is still yet
to be solved is having a truly
distributed and decentralized connectivity from the lowest layers, so that all the
robotic components will verify and interact with each other,
removing the possibility of a single point of failure within
individual systems. Imagine the main components of a
robotic system. We have vision, control, power distribution, communication, locomotion, all checked and verified by a secure
peer-to-peer communication protocol. One of the most challenging
aspects of robotics and system
automation, as you all know, usually comes down to the
senses, because they are never 100% reliable.
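One common way to hedge against unreliable sensors is redundancy with median voting. Below is a minimal sketch, assuming three independent readings and a made-up disagreement threshold; this is a generic illustration, not the speaker's protocol:

```python
def fused_reading(readings, max_spread=10.0):
    """Fuse redundant sensor readings by median vote, and flag
    the result as unreliable if the sensors disagree too much."""
    ordered = sorted(readings)
    n = len(ordered)
    median = (ordered[n // 2] if n % 2 == 1
              else (ordered[n // 2 - 1] + ordered[n // 2]) / 2)
    reliable = (ordered[-1] - ordered[0]) <= max_spread
    return median, reliable

# Three hypothetical odometry estimates of a commanded 100 cm move;
# one sensor has failed badly.
value, ok = fused_reading([98.0, 101.0, 30.0])
```

The median tolerates a single wild outlier, while the spread check still surfaces the disagreement instead of silently trusting the fused value.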

If you tell a robot to move 100
centimeters, most likely it is going to move 90 centimeters or 105
centimeters. So we need a better way of
validating that the output information is
accurate and reliable. So why is this so important? Well, often, it involves
people's lives, and hundreds of millions of dollars. Let me give you an example.
Reliance on sensor information, without proper verification,
caused the crashes of the Boeing 737 Max
planes earlier in the year. This cost the lives of around
350 people. Another example where the risks
are higher and the chance of failure
is much greater: Outer space. In 1999, the Mars Polar Lander went on a
mission that took over 5 years of planning and work and cost
over $300 million.

It crashed into the surface of Mars after a
sensor received the wrong information, causing the engines
to switch off some 100-200 meters
above the planet's surface. An entire mission failed in a
matter of seconds because of one wrong data feed from the sensor.
I can stand here and give you dozens of examples, but these
incidents, these failures, they slow down the
progress. In fact, often, they slow down
the entire industry. There's still a long way to go. There have not really been any
memorable or inspiring missions since the 1960s, when the first
astronauts landed on the moon. I have always had a passion
for space. I remember begging my parents
for a telescope for my 15th birthday.

I remember the day I got
accepted into the space engineering program at the
University of Sydney. And I remember writing letters
to NASA from the time I was ten about my
ambitions to become an astronaut. Since they never replied, I
promised myself by the time I turned 21 I would be part of the
space industry. I am fascinated with the idea of applying emerging technologies
into space to make a change and bring
about new innovation. Technologies like de-centralized verification, or
machine-to-machine communications and operations
can give a greater level of autonomy. This autonomy and its
capabilities are vital for isolated outer space operations
and missions. If we are really thinking about setting up colonies, operations,
in the next couple of decades, then we
need to start seriously looking at these
technologies, especially when thinking about the communication
relay issues that come with outer space.
So to give you a little bit of context here: for every one
hundred thousand kilometers you go from earth,
there's a latency of 0.74 seconds between the ground
and the spacecraft. I know this sounds
insignificant, but latency creates bottlenecks. For the moon there's a two-way
time delay of just over 2 seconds and
for Mars this latency goes to over 4
minutes. In the Mars mission, because of
this latency, sometimes the rovers
could literally only move a couple of meters in an entire
day. So we can't scale these missions
with the current way that we're doing
things. Imagine even having just a few
rovers or some small pieces of
machinery and equipment. It is going to take weeks to
months to actually achieve anything
useful. Providing accountable autonomy, self-verification, and
cluster-based communication capabilities is the only way that large-scale operations
like natural resource mining and
human colonization can take place in
outer space. At AEROID we're aiming to
develop and apply a communication layer to the
machines at the most fundamental layer. We are developing software for
robotics and autonomous systems. Our goal is to create a scalable
and compliance-ready software
platform for machine-to-machine communications and operations.
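The latency figures quoted earlier follow directly from the speed of light, so they can be sanity-checked in a few lines; the distances below are approximate (the moon's mean distance, and Mars at its closest approach):

```python
C_KM_S = 299_792.458  # speed of light in km/s

def light_delay_s(distance_km, round_trip=True):
    """Signal delay over a given distance at the speed of light."""
    one_way = distance_km / C_KM_S
    return 2 * one_way if round_trip else one_way

moon = light_delay_s(384_400)                  # ~2.56 s round trip
mars_closest_min = light_delay_s(54_600_000) / 60  # ~6 min round trip
```

This matches the "just over 2 seconds" two-way figure for the moon; for Mars the delay grows with orbital distance, which is why rover operations quoted above crawl along at a few meters per day.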

Many space innovations over the
past have been focused around hardware. For example, SpaceX with
reusable rockets. I believe space hardware
innovation should go hand-in-hand with software. So we are addressing one of the
key needs of creating a stable and
scalable software communication protocol
for outer space missions and operations. We have been working for the
past six months. We have been looking at lightweight
and low-power-consumption blockchain protocols.
We have been understanding accountability and governance in
autonomous operations. And also, verification for machine-to-machine communication
methodologies. According to our findings, some
of the software architecture used in space applications today is around 10
years old. These systems are highly
concentrated. They're highly centralized.
As you know, this brings in the idea of single points of
failure, can limit the data transfer bandwidth, and
also leave the system open to lots of
vulnerabilities. If we talk about sending
payloads into space, this costs tens of
millions of dollars.

Right now, the costs of sending around 1 kilogram into space is
around $20,000. I know that these costs are
reducing with the use of reusable
rockets, but they're still very high. So similar to reusable rockets,
why wouldn't we send payloads that
are adaptable and reusable? Imagine self-adapting, reusable
robotic modules that can connect like LEGO blocks to work
together to perform different tasks, achieve different goals, far
from earth, without having to rely on any communication from earth. At NASA, for each space mission, there is a dedicated operations
team. By this, I mean they have to
allocate specific resources and people,
purely for the task of watching over the machines, watching over the
spacecraft, the rovers, performing checks and balances, and monitoring the
machine health. This is hugely expensive.

It takes up a significant cost
of the overall mission budget. And also, it's not feasible,
because sometimes these operations teams
end up costing more than sending the
spacecraft into outer space itself. So we really can't scale. We can't do a lot of things
efficiently and properly if we are always relying on the ground. Over the next few years, we are
going to see a huge transformation in
the commercialization of the space industry.

This will not be from NASA but from private companies doing
exploratory or discovery missions. A huge focus will be on
industrial engineering, resource
extraction, defense, and tourism. So imagine if we could set up
mining operations on the moon or an
asteroid. Imagine multiple machineries,
robotics, working together, at a far distance, without having to
rely on communication or verification on
earth or systems on earth. Having secure self-governing
machine-to-machine interactions and checks and balances will
give a greater level of autonomy for the operations, and also take off a huge load of pressure from these operations
teams. IoT, robotics, blockchain,
machine-to-machine economies, and the space industry have the
potential to generate over $8-12 trillion in economic value by 2025. So coming back to AEROID and
our machine-to-machine communication technology, we are
also looking at incorporating the concepts and principles of DAOs,
decentralized autonomous organizations.

They work through smart
contracts. So this brings in the idea of machines performing checks and
balances on each other. In terms of validation and
verification. Think of it a bit like machines policing other machines, but
having the communication and the
verification operate on a more cluster-basis
mechanism. Now you have a hierarchical system in the
network. Unfortunately, I would love to
sort of go into more detail with you
of exactly the technology that we're working on, and the specifics
behind it, but we are currently in the process of acquiring two
patents, and as you know it's sort of very touch and go with these things, so I've had to
restrict a lot of what I really wanted to
say, and I guess that is one of the reasons why my team and I are so excited,
because our technology doesn't just
apply to the space industry, but theoretically it applies into any
machine-to-machine operation, including IoT devices
and industries like smart cities,
autonomous vehicles, and supply chain
logistics. Before coming to Boston, I
was actually in Dubai.

Over there, they're very
future-forward thinking in terms of making their city a lot
smarter and more connected. They even have a government
blockchain initiative, and just last week, actually, they announced their blockchain
strategy for 2020. To give you one example, the
road transport authority in Dubai are
looking at implementing autonomous vehicle
infrastructure so that a driverless car in the city can pick up and
drop off passengers, collect payments, pay for tolls, pay for parking, and act as a
monetary agent. So I know that there are a lot
of use cases like this emerging around the world. And my team and I do some work
in Dubai. So we're excited to really see these technologies
being properly implemented. Going back to AEROID and our
vision, we really thought about how we can effect change with what we're working on, how
we can show it is something that is
really important moving forward, and
also inspire and catalyze the future of innovation.

Particularly space innovation. So we made a huge commitment. We made a very bold move. We started the Lunar Industrial Initiative
2021. The Lunar Industrial Initiative
is our plan to send four adaptable robotic modules
to the moon in 2021 to demonstrate proofs of
concept of our software communication technology.
So I'm sure that most of you are wondering how are we going to
execute this? Well, there are three major
stages. Stage one we are applying our
software protocol into swarm robotics,
swarm technology. We currently are working with
the University of New South Wales
robotics lab in Australia to refine and improve our protocol. Stage two we will be testing
these adaptable hardware modules, these custom-built rovers, in
moon-like environment test chambers.

So in Australia there is a space
testing facility, and this is really good for understanding what
changes, what adjustments, need to be
made to the hardware and, also, our software
protocol, because I can tell you that building robots for earth is
very different than building robots
for outer space. As part of our initiative, we
also want these robots to demonstrate
some proof of commercial viability.
So at stage three, we are looking at incorporating aspects of Project
Wild into our initiative. Project Wild is headed by Professor Andrew Dempster in
Australia, and also part of the Australian center for space
engineering and research. The main goal of Project Wild is
to eventually mine water on the
moon. Right now, they're looking at
the mining at the moon's polar craters, particularly the south
pole. Why water? Well, again, in terms of commercial viability, water
is the most versatile natural resource to be found on the moon. It won't only be used to resupply astronauts with oxygen
and water, but also for power generation
and refueling rockets.

One of the unique things about
Project Wild is that they are taking learnings and
methodologies from Australian mining sites. So instead of taking a space
engineering approach, they're looking at current earth mining. The Australian mining industry
is very well established, well advanced, particularly in terms of the
tools and the extraction techniques they
use for extracting and mining natural resources in remote, isolated
environments, in the middle of Australia.

So as part of stage three, we
will also be mapping and modeling the
specific course and details of the operation. We know that mining water is a
huge undertaking, so we're also
looking more closely at navigational
operations for the swarm robots to perform, because our software
technology will give these robots better communication and
better verification. So send these robots to the room
we'll also partner with a commercial
space carrier. We know this is a huge feat, so
we're not alone. We're working with the University of New South
Wales. We're working with members of the University of Sidney, the
National Space Society and also aerospace consultants
is and a capital venture firm. If we pull this off, it will be
the first autonomous effort of this kind on the moon.
This will be amazing.

This will be memorable. This will go down
in the history books. I'm sure that most of you are
aware of the huge benefits of using
swarm robots, swarm technology, instead of having one specific machine set
for one task. These benefits become even
greater in rough terrain environments like the moon. On the moon, sending one robot for
one reason isn't viable. The Apollo missions cost around
$600 billion by today's standards. And Project Artemis, which is
NASA's latest lunar mission, announced
this year, is set to cost at least
$30 billion. Imagine if we could perform mining
operations on the moon and also solve logistical problems that will
inevitably come from sending one piece of hardware or machine to the moon to
perform one specific task.

These are the kind of
technologies that are going to contribute to human colonization in outer space. So if the problem is high costs,
time delays, and single point of
failure, then adaptable, reusable robotic modules and
self-verifying communication protocols are the way to go. Providing a decentralized
self-governing infrastructure for machines will give greater
accountability, will improve autonomy, and will also enhance the
communication capabilities. We know how crazy, how risky,
our dreams are.
So whenever anyone says we're working on something that is too
ambitious, too challenging, too hard, I always
like to think that someone has to do it. Because deep down, I believe
that we are all explorers, and we are
all curious creatures.
It has been an amazing journey for me.
I am learning everyday from the challenges that I have to face,
but I love the energy and I love the inspiration.

My dream is to be able to say one day
that I contributed to the space
industry, even if that is just in a small way.
It has been a pleasure speaking to you all. Thank you
for having me. I know I had to keep it high level. I'd love to
speak to some of you more technically afterwards. Please,
if you have any questions or feedback, I'd love to hear from
you. Thank you. [Applause] It's too crazy of a question.
It's more a statement than a
question. When you do it, yeah? Maybe you should make an
announcement, like say you made the
announcement here, that we remember you were here when you
made the announcement.

I just have to say, yeah, you
should push like for that direction, definitely. The goals are very high, but
definitely somebody has to do it. Yeah. Let's have applause for Emma. [Applause] Once again, just so that
everybody remembers, blockchain is an immutable, decentralized,
distributed, and synchronized database. I won't spend too
much time on it. In 2019 itself,
$2.7 billion was spent on blockchain solutions, which is set to grow over the
next four years by about 48%. I don't know why this is not
coming here. And a total of 44% of blockchain companies reside outside
of the U.S. The major investments and the
major experimentation happen in the financial services industry, with retail
just lagging behind by about 5%. A few of the statistics which
are important in the retail industry is that there are 18% of retail
companies who have already started working on it, all
in the POC and experimental stage,
nothing at scale at the moment.

And there are about 9,000 projects every
year, which over the next few years is set to grow yearly by about 8,000
projects. Supply chain in the retail and
CPG world has the most number of applications and the most number of
POC's to date. A lot of new blockchain
applications and use cases are rising. Nowadays, everybody
wants to know whether their food is organic or where it's come
from or whether the products they're buying are creating
deforestation or not. These things have a huge application
with regards to blockchain and companies are more responsible and
corporately focused on sustainability to be able to now
do this. The next thing which is very
interesting is online marketplaces. People want to know whether
their products are being sold by
third-party sellers, who they are, which marketplace changes, which reacts fastest,
and who was the first to move. These are things which have huge applications where blockchain
can be of use. There are also others, like,
you know, payments, which I think
everybody is aware of. These are the things which are
upcoming and a lot of companies are focusing on them. Out of all the projects, only 8%
finally make it to production and are
actually maintained.

For the reasons why, I will talk about
the top three challenges that we currently face in the retail sector with
regard to implementing blockchain. Yes, even though nobody in this
room would confuse it, there are a lot of executives who still confuse it
with Bitcoin. There's a lack of understanding as to what
blockchain really is, to be able to actually convince the top
management. And of course, the fear of
missing out. Everybody wants to implement it. Everybody wants
to know what it is. Everybody wants to do
innovations in it, but to be able to scale it,
it's not yet happening. Every company has an innovation
champion. They would say, yes, they have a pilot running in
blockchain. We are going to set aside budget for it in the
coming years, but that year has still not come.
There's also a lack of standardization because there's
no actual, you know, regulatory
body which is in place, and that's —
there's no network effects to be able to
actually scale.

There are too many different
consortiums. There's Food Trust,
with about ten of the largest food companies. There are many others wherein
they all form different consortiums. To be able to get an actual
scalable solution is difficult. The main problem for
implementing would be scalability. In retail, three things matter:
Performance, privacy, and ease of use.
Performance with regard to processing of payments (I think Jamiel
had given you the numbers around the
transactions, so I won't go into that), where blockchain and Ethereum
are not up to the mark compared with Visa.

The different nanosecond
decisions that go into loading a page on
Google with advertisers are really important. To be able to
get that all working takes a lot of computing power and energy,
which, at this point, we're not there yet.
Privacy with respect to the new regulations of the European
GDPR, wherein we cannot store consumer data. We have to be
very careful about how we store it, how we share it. That is a consideration
where blockchain is not there yet.
Certain companies like Microsoft have started the Confidential
Consortium framework, wherein they actually make a model about privacy
and are able to keep users anonymous.

This is something that is between Microsoft, Intel, and
a few other companies, wherein they use a trusted execution environment
to actually be able to keep your data private. One thing on ease of use: Switch is a company that came out
with an application like a virtual
wallet, to be able to use gift cards and
select different gifts online. It basically helps you with
transacting between traditional and cryptocurrency. This is not at scale yet, but
it's something wherein you sell direct to consumers.
And of course, efficiency related to the energy consumption of what
we actually use. A few internal obstacles that companies face, whether
evaluating the cost/benefit analysis or convincing decision-makers, are
some of the things that companies really consider today
and see as an impediment to actually
implementing blockchain. There are some use cases and
POC's which are in production.

For example, traceability:
McDonald's uses it to be able to track what
the IoT sensors send during the transportation of food, so when the temperature changes
when you're actually transporting chicken, it would
be sent through IoT sensors to your database, and then you'd be
able to actually track that so you know
and reduce spoilage. Walmart also did something similar with regard to traceability from
origin to shelf with regard to, you know, deforestation in that
aspect. Loyalty programs are something
that companies are also experimenting
with. Once again, these are all in the
infancy stage. Digital identity, this is
something that a lot of both financial as
well as retailers are looking at with regard to having your own
digital identity and being able to use it with your
facial recognition. This is also used in the retail
sector by companies like beverage companies,
which need to validate with respect to age, et cetera.
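The cold-chain example mentioned above, where IoT temperature readings are logged so spoilage can be detected and the record can be trusted, can be sketched as a tiny hash-chained log. This is an illustration of the general idea, not the actual McDonald's or Walmart system:

```python
import hashlib
import json

class TempLog:
    """Append-only, hash-chained log of IoT temperature readings.
    Each entry commits to the previous one, so tampering is detectable."""
    def __init__(self):
        self.entries = []
        self.head = "0" * 64  # hash of the latest entry

    def append(self, sensor_id, celsius):
        record = {"sensor": sensor_id, "temp": celsius, "prev": self.head}
        self.head = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)

    def verify(self):
        """Recompute the chain and compare it to the stored head."""
        prev = "0" * 64
        for record in self.entries:
            if record["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
        return prev == self.head

log = TempLog()
log.append("truck-7", 3.5)
log.append("truck-7", 9.2)          # above a safe threshold: spoilage risk
ok_before = log.verify()
log.entries[1]["temp"] = 3.6        # try to hide the temperature excursion
ok_after = log.verify()
```

A blockchain generalizes this by replicating the chain head across many nodes, so no single party can quietly rewrite the cold-chain history.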

It comes in handy a lot with regard to that.
Alibaba in China is using it — I would say China is the leader in
analytics. They have
actually piloted their online Luxury Pavilion on Tmall, where
they track fake goods, which is a huge industry in China, it being
one of the largest producers of fake goods. Alibaba is actually using a lot
of their own cloud-based blockchain services to be able to combat
that. China is also one of the largest countries with respect to filing
patents in blockchain. As you can see about 49% come from
China. This is a use case from earlier,
which actually happened, in which a large toy manufacturer
wished to have a blockchain-based digital
identity. What we did was put their identity on the
website, on the phone, and they have a private and public key,
and they were able to scan a QR code, once they log into a URL,
and then you can actually get through to whatever URL you want
to. This is within the enterprise. They didn't want to
go public with it.
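The QR-code log-in described above is essentially a challenge-response flow: the server issues a fresh challenge, the phone proves possession of its key, and the server verifies. A minimal sketch, using an HMAC shared secret as a simplified stand-in for the real public/private-key signatures; all names and the flow details are illustrative:

```python
import hashlib
import hmac
import secrets

def issue_challenge():
    """Server generates a one-time nonce, e.g. embedded in the QR code."""
    return secrets.token_hex(16)

def sign_challenge(device_key, challenge):
    """Phone 'signs' the scanned challenge with its enrolled key.
    (A real deployment would use an asymmetric signature instead.)"""
    return hmac.new(device_key, challenge.encode(), hashlib.sha256).hexdigest()

def verify_login(device_key, challenge, response):
    expected = hmac.new(device_key, challenge.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

key = b"enrolled-device-key"   # provisioned when the device is enrolled
nonce = issue_challenge()
ok = verify_login(key, nonce, sign_challenge(key, nonce))
bad = verify_login(key, nonce, sign_challenge(b"other-key", nonce))
```

Because the nonce is fresh per log-in, a captured response cannot be replayed later, which is the property the pilot's QR flow relies on.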

So it is — they just wanted to test it
within the enterprise and then kind of move to their suppliers and
customers, et cetera. You would think this would be
the solution they asked for. The project architecture is
pretty simple. We used the blockchain service and we also were able to exchange with
the tokens to be able to actually
access the URL and provide, through identification, your access to the log-in page. However, once we presented it,
then the challenges came into place where they had more
questions, wherein, do I have to maintain two separate systems?
Is it cost effective? Can I access all applications?
If I do that, do I need blockchain? Is this really
secure? This is what I was talking about earlier with respect to having a top-down approach and
understanding with prioritizing security.
We had to integrate this with an open-source solution and actually
redesign it to improve it, including only the blockchain,
which, as you see, is the small part, but you actually have to
do the tunneling, et cetera, and then kind of basically replace your Cisco
pane, which everybody who works with it knows what I'm talking
about, with regard to that.

So this would be integration
with other applications to actually be able to pilot
blockchain. There are a lot of technical
innovations coming up. There are a few things which people
have talked about. One is
blockchain-as-a-service, wherein companies can
call an API. There are hybrid blockchains where you can use
public as well as private. You can have the transparency of
a public blockchain but still have the security of a private
blockchain. There are different blockchains. A federated blockchain is where
a few people are identified, from different companies or different groups, to
be able to actually edit or, you know, validate or give consensus to the data.
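The federated model just described, where a fixed set of identified validators gives consensus, can be sketched as a simple quorum vote; the company names and threshold are hypothetical:

```python
def quorum_accept(votes, validators, threshold=2/3):
    """Accept a proposed record only if enough of the known,
    identified validators approve. Votes from unknown parties
    are ignored entirely."""
    approvals = sum(1 for voter, approve in votes
                    if voter in validators and approve)
    return approvals >= threshold * len(validators)

# Identified member companies of the hypothetical federation.
validators = {"acme", "globex", "initech"}

accepted = quorum_accept(
    [("acme", True), ("globex", True), ("initech", False), ("rando", True)],
    validators)                                  # 2 of 3 known approvals
rejected = quorum_accept([("acme", True)], validators)  # only 1 of 3
```

The key difference from a public chain is the closed validator set: identity, not proof-of-work, is what makes a vote count.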
Interoperability. As many people talked about the
different types of blockchains available in the market, to be able to
transact from one to the other and seamless for the public is
something that's happening and people are working on it to be
able to come back and have it happen in the future.

Ricardian contracts, which are
the new version of smart contracts. For example, a Ricardian contract is actually a
human-readable, hand-written contract that is then converted into
machine language through blockchain tags. This is
something that if, for example, the person is insolvent
or the deal does not go through for some reason, this can then be tracked
and it would not be executable which definitely is something better than
currently what smart contracts have with regards to its
limitations. And finally, stable coins. Stable coins would be the new
Bitcoin, wherein people are trying to make it a little bit more stable so that
it is not — I would say not
susceptible to price fluctuations.
There is a lot of work going into this to be able to have
one-to-one kind of transparency with regard to
the normal fiat money and stable coins currently.
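The Ricardian-contract idea mentioned a moment ago, binding human-readable prose to machine-readable terms, can be sketched by hashing the prose into the executable record; the field names and scheme here are illustrative, not from any specific standard:

```python
import hashlib

def make_ricardian(prose, params):
    """Bind human-readable contract text to machine-readable terms:
    the executable record commits to a hash of the prose."""
    return {"terms": params,
            "prose_sha256": hashlib.sha256(prose.encode()).hexdigest()}

def prose_matches(contract, prose):
    """Check that given prose is the text this contract was bound to."""
    digest = hashlib.sha256(prose.encode()).hexdigest()
    return contract["prose_sha256"] == digest

text = "Seller delivers 10 units to Buyer by 2021-06-30 for $500."
contract = make_ricardian(text, {"qty": 10, "price_usd": 500})
valid = prose_matches(contract, text)
tampered = prose_matches(contract, text + " or later")
```

Because the legal text is hash-linked rather than merely attached, a court or an auditor can verify exactly which wording the executable terms were derived from.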

So the enterprise-wide
blockchain shift and outlook for the
future, I would say, would be that the
consortiums would gain speed. There would be a lot more but
they would be a lot larger, to be
able to come up with a scale application. Companies would then be in the
POC stage and use case stage and then move on to cross-industry
applications based on what we've just seen and
there would still be a focus on payments,
trade, and supply chain finance. The key take-aways, I would
say, are integration with other devices,
interoperability, and finally, IoT devices and analytics to
scale applications.

Thanks. [Applause] Any questions? I think this session is very interesting because it gives the
mindset of the industry, right? In comparison to what's going on
with academia. You know very well the systems
like the blockchain. Right? And also how people in large
companies think. You know? Do you agree with previous speakers that what's basically hindering the progress of proofs of concept, of new ways to understand and incorporate this into the corporate world, is a lack of understanding between what these systems can give and how middle managers talk or set their benchmarks? Or do you think it's because the technology is definitely not mature yet, there are too many gaps we need to do more research on, and we need to be more sure it's not going to cause any problems? What do you think?
Right. I think specifically within the retail sector, I
think it's a combination of both.

So why people want to use blockchain, and why we can use it to power many systems: I think it has to be integrated as a holistic solution. Say, for example, traceability would be the main part, wherein you can really track, like, palm oil from the time the tree is cut to where it goes. But that's one part that's very difficult to track, and it's just one example.

To have a holistic solution powered by blockchain would make people interested and give them what they want, without them having to understand how it works. They just have to know that the traceability aspect of blockchain can give you a holistic solution and make your origin traceable, so you can track easily. I think it's a combination: once a lot more applications and a lot of the usability become available to consumers and other businesses, this is going to take flight. And really take off. That's how I see it.
Just a final comment on one of the things you're saying. My understanding from the whole day is that we also have a lack of interfaces. We didn't talk today about this, but all these systems have nice capabilities, like with robots and AI and data analysis and data science, but we don't know the interface to access them.

We don't know what the Google of blockchain is, right? And somehow, what you're answering is a little bit like that. Yeah, the farmer needs a very, very well-engineered interface that allows him to do whatever he wants to do. Right? But in the end, he doesn't need to understand what's going on through the whole thing. Right. I mentioned there was
one company called Switch, which is now creating a user experience wherein, like any other online website, you can go and
transact and use a gift card. So it's
not complicated to use but in the back end you're using the
technology. Like any other technology,
everybody wants a simple user interface.
What happens behind, the programming, nobody really cares
or wants to understand it. That's how you have to actually
look at it.

Maybe there's a new field
coming, like human-to-blockchain
interface or interaction. Thank you. Thank you.
[Applause] So finally, we are going to close this event with a workshop tutorial about computational law. I'm going to introduce
two guys. One is Brian. The other is Dazza. They belong to the Connection
Science group at MIT. They are going to tell us about
technology and law, which is a big thing we don't tend to care about, but I think it really matters. Without further delay, I'm going
to let them start. Of the two, I'm Dazza.
This is Brian. I run something called law.mit.edu, which has been the wrapper for computational law research here at MIT, and we're housed in the Human Dynamics Lab, Sandy Pentland's lab.

We're about to launch the MIT Computational Law Report, but our research into AI, robotics, and law actually goes back a little ways. You'll hear in our presentation about roots back in 2011, when we started modeling autonomous entities and law, and we're finally at
the launch of this publication where things are coming together
and it's a perfect time to speak with you about it and wanting to
hear back from you with your questions as well. With that,
Brian? Let's take it away. We have this idea of DAOs plus
robots. And we have this idea of
autonomous legal entities. We have to embed the legal thinking
from the get-go so we can optimize the protections of the
legal system with these new forms of entities that people
are coming up with. Because there are going to be
all these different questions that start to arise.
Who owns the entity? Who owns the IP? How are things
produced? How are any proceeds divided? Who is capable of entering into
contracts with the DAO? What happens if they do something
illegal or goes bankrupt? These are things you don't really
think about when you're setting something really creative up.

It's mostly on the back burner. We wanted to start showing why it's important to get in front of this type of thinking ahead of time, instead of when everything hits the fan. Right. One of the kind of
questions embedded here, I think the way we phrased
it, is who is capable of entering into contracts for the
DAO? On behalf of, or at the behest of, the
DAO? There's a deeper question, when,
if ever, is a DAO capable of
forming contracts for itself? When it could or should be
treated as a legal entity itself and therefore capable of forming
and enforcing contracts and having them enforced against it.
So, to answer this, we break it down into a few different pieces, but one of the key pieces you were touching on there was the idea of legal personality. That you could have a limitation
of liability that creates a separate legal entity apart from
yourself and gives you something that you can kind of
hide behind in the form of liability.
Then we get to the notion of investment securities and
contracts and intellectual property and contracts and
agency law and then tort.

And we have some different use
cases that we walk through and kind of talk about how they
apply. This is the road map for the
next 15 minutes, not for 2020. Generally, the container of
legal personality rights requires a few of those common
ingredients. Registration with the state, identification of certain
governance mechanisms. Like with the DAO, it could be the
voting mechanisms or things like that. The identification of the
individuals who are in charge of administering the governance mechanisms; they're sometimes called custodians. And then the legal purpose. You
have to operate within some specific legal purpose and you
only get these protections if you fit within that container.
Just to break this down a little, registration with the
state, like how many people have ever formed a corporation or
LLC? A smattering. There's a step when somebody goes to the
secretary of state's office, in the United States, and fills out
a form. There's always a check involved. They need their
money. And they will create the
corporation for you. Your name will show up, along with who the directors are, on the
registry. I think it's noteworthy we had a
law and technology conference here not long ago, one of the presenters was interested in a legal
personality that doesn't require registration with the state.

It's called a Massachusetts business trust. It's just a
trust agreement where the parties to the trust sign
something and it's a legal entity. Although, when we
explored it, it turned out that in order to maintain its existence, pay the taxes, and dissolve it, you ended up having to do the registration with the state anyway. So it seems the first bullet is a capability we need to have in order for an entity to incorporate itself.

Yeah, another thing to point
out, some of these protections are at the state level and some
are at the federal level. In the U.S., you have different
state protections that are related to business entity, but you'll have federal protections
related to securities law. It's important to know which domain
you're operating in in order to optimize for that domain.
We should probably disclose we're operating within those
different levels, we're very much U.S. centric for this
conversation. But we've been dabbling and collaborating with people in
Europe and other countries, as you'll see. So one of the most progressive states in this regard is Vermont. You register with the state, and the state reviews the operating agreement to ensure the safety and access of your permission protocols.

You have a summary of mission
and purpose, like I talked about. Then there's some
indication as to whether this is fully or partially automated.
Then you specify the voting protocols. The way this has played out so
far with this organization, they set
up their organization, they paid the fee, and figured out their
governance operation. De-centralized ledger, is how
they described it. I wanted to highlight that, but
it's not showing up very well. But yeah. And then other
states follow approximately the same recipe that I laid out. So with Delaware, the way they
got to the end result is a little bit different.

Instead of having a stand-alone business entity, they allow the use of electronic networks or databases. One thing to note, the
Delaware corporate law permits the registration of series of
entities. If you have an entity that's
nested like a thousand times, as these shell entities, you can
set that up theoretically in the state of Delaware. One way this is playing out is through the LAO, which is a legally compliant DAO for investments. And specifically, within that, it would be another instance where you have to go through the SEC's requirements for securities registration, and I think all the members of this DAO have to be accredited investors. It's an additional protection
you get because you're an investor in
the company so you have to meet even
this more advanced threshold. Should probably say that this is
a project from a civic hacking
group in Brooklyn.

We have a bunch of sites, if you
want to follow up on any of this. And these slides will all be
available. Everybody can use them and play
around with them. Wyoming did something similar to
Delaware. They set it up so certificate tokens could be used instead of stock. Wyoming also set up a
special purpose depository bank for crypto
transactions. Now you have a specific place you can go where
you can hold some of these things, using some smart
contract framework as a way to, you know,
keep assets and keep track of all
your crypto assets, essentially.

One of the ways this is playing
out is through LASSO DAO. I believe somehow they're involved
in co-working, as well, along with
dOrg, but this kind of gets us into the next bit I was talking about earlier, where investments and securities are governed by the SEC. The big question here: is something a utility token or a security? Does it meet the Howey test? Are you putting money into the entity with the expectation that people will do work on your behalf and get you some return? If so, you need to make sure
you're compliant. Otherwise, the SEC can come
after you.
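The Howey test just described can be caricatured as a four-prong checklist: an arrangement that satisfies all four prongs looks like a security. This is a deliberately simplified sketch, not legal analysis, and the prong names and example facts are hypothetical:

```python
# Hypothetical, simplified checklist for the Howey test. Real securities
# analysis is far more nuanced than an all-or-nothing boolean check.

HOWEY_PRONGS = (
    "investment_of_money",
    "common_enterprise",
    "expectation_of_profit",
    "profit_from_efforts_of_others",
)

def looks_like_a_security(facts: dict) -> bool:
    # All four prongs must hold for the arrangement to look like a security.
    return all(facts.get(prong, False) for prong in HOWEY_PRONGS)

utility_token = {
    "investment_of_money": True,
    "common_enterprise": False,      # buyers just redeem it for a service
    "expectation_of_profit": False,
    "profit_from_efforts_of_others": False,
}
dao_share = {prong: True for prong in HOWEY_PRONGS}

print(looks_like_a_security(utility_token))  # False
print(looks_like_a_security(dao_share))      # True
```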

There are only two companies in the U.S. that have successfully gotten a no-action letter. Who has heard of a no-action letter? So you could just go and do
something, and believe or hope it's not a security, and discover later the
Securities and Exchange Commission says it is, and then
they can launch an enforcement action against you. That's a
bad day. On the other hand, you could
sort of get proactive with legally structuring things and
that's very much the spirit of how we treat computational
law, is designing legal processes, and engineering law
and legal processes.

And one of the mechanisms you could use in this case is called a "no-action letter." You basically go to the regulator, the SEC if it's federal (there are also state securities regulators), explain in detail what you're planning to do, and ask if they'll issue a no-action letter. Sometimes they will, and that gives you a safe harbor: they agree in advance there's nothing wrong with what you're doing. During a bunch of legal hackathons we did a couple of years ago, covering revolving loan funds and other investment vehicles run through DAOs, one of the teams structured a no-action letter along with their code. So we're looking at ways to
embed that within the process. Here is an example of a company that got a no-action letter from the SEC.
This is a good practice when you're creating your robot AI
investment funds. Exactly.
The next kind of issue that we can run into, and I think as I
touched on something that's kind of like a theme, when you're
more proactive you have more flexibility.

With regard to intellectual property rights, you can set a lot of this stuff up by contract, and a lot of it is set up by contract: among the members, or between the entity and the state, or some sort of letter that functions as a proxy for an agreement between the people and the SEC, for example.
And intellectual property, it's especially interesting,
because you can start dividing fractional
ownership rights using tokens, and then different people can
programmatically verify they own part of an entity or part of the
intellectual property of an entity, or
whatever, however you want to slice it. And that really gives people a
more granular control over all these things, and it provides
new opportunities that people had not had before. And so in another context, if
you look at contracts and agency, this is a typical
example of, you know, what an agency relationship is.
You have a principal. The principal has the agent do
some task. The agent goes to a third party
to effectuate that task.

And they can either be acting
within express authority, inherent authority, or implied
authority. Basically, what that means is I can say, hey, you're authorized to go buy a bunch of watermelons, and you have the express authority to do that. Or I can say you have the authority to go buy produce, and then by buying watermelons you would have the inherent authority to do that, because I said you could buy produce.

But if you buy furniture, that's outside the scope. So the third party wouldn't have the ability to go after the principal in that situation. Just another example is like
a house. You have a broker. It's not uncommon to get a
broker to sell your house. The broker is a kind of — or an agent, real estate agent,
you'll usually have an agreement with them and they can usually do things like
solicit offers or maybe get pre-approvals on loans in some cases. You can
structure it more deeply to get a letter of — I'm blanking
on the word, but what's it called?
Essentially authorized. Power of attorney. POA. I was
trying to do "letter." It's not letter of attorney.

It's power
of attorney. They can actually do the closing documents for
you. These are all different degrees of authority that a principal gives
an agent to act on their behalf with a third party, and one of
the reasons this matters in this context is when you've got,
let's say, people that are operating a robot with AI that's
maybe a DAO to do transactions, whether it's an investment fund
or whether it's, you know, we'll show you some other interesting use cases like a
publishing company or things like that, that entity is going
to be interacting with other third parties.
There's a whole framework of agency law which looks like
this. It comes down to what are the —
what are each of these parties, what are their rights and
responsibilities with respect to the other parties?
A big question is what would the third party have known this
agent was authorized to show me the house
but not sell me the house? To get furniture but not
watermelons or both or one and not the other?
So there's questions there.
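The authority questions above can be sketched as scope checking: an express grant names specific acts, and implied authority covers acts reasonably within a grant. The categories and scope table below are purely illustrative:

```python
# Hypothetical mapping from an express grant to the acts it implies.
IMPLIED_SCOPE = {
    "produce": {"watermelons", "apples", "lettuce"},
    "real_estate_listing": {"solicit_offers", "show_house"},
}

def is_authorized(express_grants: set[str], act: str) -> bool:
    if act in express_grants:            # express authority: named directly
        return True
    for grant in express_grants:         # implied authority: within a grant
        if act in IMPLIED_SCOPE.get(grant, set()):
            return True
    return False                         # outside scope, e.g. buying furniture

agent_grants = {"produce"}
print(is_authorized(agent_grants, "watermelons"))  # True (implied via produce)
print(is_authorized(agent_grants, "furniture"))    # False (outside scope)
```

Narrowly scoping the grants up front is exactly the kind of engineering-the-edge-cases move the speakers recommend.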

So we'll talk about how this
can be designed into a legal process and have this run
smoothly. We can do this such that all
these things are narrowly scoped and
clearly understood so you don't have the edge cases where things go terribly,
terribly wrong. Hopefully. Tort is another one. We're seeing more with
autonomous vehicles. There are questions about, you know, what happens if the autonomous
vehicle decides to hit my car instead of run over the baby?
Who is liable in this situation? And what it gets back to is this
idea of an accountability gap. So basically, taking the
analogous situation for if a person was there and then figuring out where the
liability would have been apportioned if the same thing
happened. So if this was being driven by a person instead of an automated vehicle, the liability wouldn't necessarily go to the person, except if these five things happened. Then you can point to those five things and have a little bit more of a protection there.
Good enough for now.

We have so many slides. Let's keep
moving. We're going to get to the fun
part. That was all background so we could set up the things
that are the most juicy. So the use cases and the
research history. I'll let you talk about this.
Oh, yeah, I promised you there was history. Here is some
history. Has anyone here heard of the
firm Robot, Robot & Hwang? It's kind of a joke, but it's also real. Hwang used to hang around here, and he went to get his law degree at UC

He has this concept of how much
of a law firm can you automate? In
2011 we collaborated on a project called Corp Bot where we wanted to
create some code that would go to a secretary of state's office, form a corporation, conduct a single function, like upload a book to Amazon, sell it, and get some money, and then dissolve the corporation.
We made some progress, but then we all went and did some
other projects. We never completed that one. Also it turns out it's harder to
do that kind of robotic negotiation
with more nuanced legal requirements, much harder than
we thought.

I've been working on this since
2010, I would say. If we fast forward a little bit, in 2016,
thanks to blockchain, we were able to make more progress. So one of our collaborators, she wanted to do what she was
calling a blockchain border bank, to make
it easier to get microloans. We did what we could to model that. Turns out that was really hard,
especially with the banking and everything. We ended up modeling a revolving loan fund operating under Massachusetts law, where at least I was licensed to practice and could understand what the forms were, and we could model it and test it all the way through without ending up in a Dominican Republic jail or something. This is basically the UML that
we came up with in order to figure out how to do the loan
application in an automated way, who would issue
the loans to, and make sure we have the balance in the fund, and receive payments, and finally
provide the acknowledgment the loan was paid off which is a big legal document
under Massachusetts law that you want to be able to show.
We modeled that pretty well.

Again, it was a little hard to
do the test all the way through. The best way we could figure out
how to do it was to use PayPal to send and receive money. We thought, give me a break. We needed to look further, but
we made a lot more progress in modeling
the entire entity. Let's go forward.
You want to talk to this point?
Yeah, this is the fun stuff. We've been working together on
this project related to automated and autonomous legal entities for a
while now. We cohosted a workshop, partly remote: some people were here, and I was in Berlin with a bunch of people. We wanted to walk through a pretty basic schematic of all the actions that would be required in order to create a publishing DAO. If you have a network of people together, and
you wanted to produce a book or something, or produce different
books, or start hiring people to write for you, what would that
look like? And so what we came up with was, okay, so there's a publishing
DAO, and the publishing DAO invests in order to buy this Espresso Book Machine, which is basically a printing press for different books, and you create this smart
contract that allows you to pay logistics partners to pick up
the books and deliver them to people in the public, and the public can
deposit money to receive books and the money
goes to the publishing proposal which is either confirmed or denied, and then
the book itself is printed.
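The publishing-DAO flow above (deposit, proposal vote, print, pay logistics) can be sketched roughly as follows; the member list, simple-majority rule, and fee are assumptions for illustration:

```python
class PublishingDAO:
    def __init__(self, members: list[str]):
        self.members = members
        self.treasury = 0.0

    def deposit(self, amount: float):
        # The public deposits money to receive books.
        self.treasury += amount

    def vote_on_proposal(self, votes_for: int) -> bool:
        # A simple majority of members confirms the publishing proposal.
        return votes_for > len(self.members) / 2

    def fulfill(self, votes_for: int, logistics_fee: float) -> str:
        if not self.vote_on_proposal(votes_for):
            return "proposal denied"
        # The smart contract pays the logistics partner for delivery.
        self.treasury -= logistics_fee
        return "book printed and shipped"

dao = PublishingDAO(members=["a", "b", "c"])
dao.deposit(20.0)                                    # a reader pays for a book
print(dao.fulfill(votes_for=2, logistics_fee=5.0))   # book printed and shipped
print(round(dao.treasury, 2))                        # 15.0
```

Each method corresponds to one arrow in the schematic, which is what makes the legal rights and obligations at each step easy to enumerate.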

So we wanted to see what legal
rights and obligations existed in all these different steps.
One of the things that we kind of came up with and one of
the insights that we've been really trying to drive home here
is that, you know, there's a really strong need to narrowly
scope exactly what a DAO is doing from a legal standpoint so
that you don't run into any of the contractual issues, the
agency issues, those issues of uncertainty where people might
be out of money. One of the things I glossed over
before and should have mentioned, if you don't have one
of these legal containers, the United States and all these
state governments will assume you're a general partnership,
which means they can go directly after you for whatever liabilities the organization incurs. Just to identify the "you": it's
all you all. Every member is jointly and
severally liable, with a general partnership. That means if, you know, like
member one has, like, an extra Toyota
that can be impounded, and member two has a vacation house and $20,000 in their savings account, and
member three has whatever, like a painting, they can go after
everybody until they paid off the debt.

So general partnership is, like,
ultimate liability exposure. If you're going to have a legal
entity, you don't want somebody to say later that your DAO is a general partnership. It's better to get ahead of it and use some of our open-source code so you can select the entity and then engineer the legal
relationships and roles according to the business model
that you have in mind. Yeah, that's especially the case
if you have member four who lives in a trash can and screws
everything up and does something wrong.

They get in trouble,
then, you know, you lose a vacation home and all
the good amenities.
This is an interesting hybrid on the last one, where the DAO
was a legal entity itself. And where the individuals, where
the DAO is more like a tool or a platform, and the individuals maybe had a
different corporation. I call this an automated entity with some level of autonomy, but there are humans curating the books. Do we like this book? Do we not like
this book? Do we want to put more marketing behind this or
not? So they're choosing the distribution of their resources
against selecting and then pushing the new materials
and who they want to work with. This is a hybrid approach on the spectrum. At one end, there's a completely automated
entity, and you could create partly automated entities to go
and form a new entity, and then dissolve
the entities.

You can imagine this in different ways.
That's the far end of the scale. Most of our work here is more
practical where there's hybrid between existing businesses and
existing business models and human beings very much in the
loop or driver seat but disappearing a lot of the
complexity and making things more responsive to the strategic
and tactical decisions you make because it's all
encapsulated within a single integrated legal entity. So you imagine the bookkeeping
and the financials and the inventory and
strategy and operations and HR, when you encapsulate that, we
believe you can make decisions and adapt closer to the speed of
thought, and that you can manage and be much more flexible. It
will be a much better form of business.
Yeah, and to that end, one of the things we're also doing
right now is we're launching a new publication which is going to come out Friday, the
MIT Computational Law Report, of which I'm the editor in chief and Dazza's the executive producer.

The whole goal, it's a little bit
different than other publications, one, it's focused
on law, which is a new thing for MIT, where, you know, looking at
ways you can reimagine and reengineer the law so it
functions more like a computational system. So we
have a lot of interest in learning, you know, what is
bridging that gap look like? Also, because these aren't two disciplines that have traditionally been connected with one another, we want to do some field building. We want to have conversations about how
these processes take place. We want to get people together and
see what the good ideas are.

And then we also want to produce
content. That content comes in a few different forms. This is
where we're really excited about what we can do, because the
content is going to be traditional written articles,
but it's also going to be rich media, so podcasts,
video lectures about how to code something, so it can produce
some of these things. It's even going to have, like, a data playground where you can upload a prototype of an app,
people can evaluate it, comment on it, deploy it
themselves, iterate it, and the goal is to come up with better
solutions that are accomplishing some of these goals we've been
talking about.

Indeed. This is MIT
pre-competitive research. As Brian said, it's
field-building. It doesn't exist yet, this
field. We're working with others, maybe some of you, perhaps, if you're interested, to find solutions and design patterns that work and evaluate them. Then, as a next step, you could choose whether you want to invest in a startup or put something out in the market. One of the things you didn't
mention and I'll highlight in the data
playground is reproducibility. It's hard to — what we really
want with engineering the law as a
computational system is predictable legal results. So
you don't want always to be talking to lawyers and have them
say, "Well, it depends." Well upon what exactly does it
depend? Can we know that up front? Can we engineer a system
to achieve more at least predictable legal
results? The answer is, yes, we can.

We think so, using the scientific method and the tried-and-true, almost cultural, DNA at MIT of testing things. We think that's important, and that's how we structure the data playground. Yeah. One of the other things that
would be especially pertinent to this group here is we'll have a podcast that
will come out on Friday. What are some legal primitives
that we can come up with and fine tune and allow people to containerize and take
away with them so they know what they're getting in all these
circumstances. You might be thinking of cavemen or so forth, but think more about building blocks. Cryptographic primitives are things like the signature, or dual-key cryptography: well-worn primitives that are reusable. We're looking to apply this to legal primitives. The previous speaker talked about identity; apply that to a legal contract. There can be some overlap, but there are some others that are unique to law.

That's actually the Berlin
working group at the top. But yeah. So to kind of like
accomplish some of these things, we've been hosting
these workshops. Here is one of the guys who came up with the
statute in Vermont. This is a drill-down on dOrg. It went almost an hour and a half on agency law, cross-mapping it to DAOs and all the roles and rights and responsibilities of the parties, and playing it against scenarios. This one, I think, is
probably contracts. This was — no, this was the
publishing DAO. There's a few more.

We kind of ran out of slide
space. But we do a lot of convening as
an input to the design and
prototype of systems. One of the things we're really
excited about with this first release is we actually have a
challenge. If you want to contribute this challenge, we
welcome it, but we want to build up this repository of people who
are working to produce code that automates certain of these
functions. If you're working on some small piece of it, maybe
you want to understand how to integrate, like, a voting
mechanism with one of these BBLLCs, or if you want to go the
other direction and figure out how you can, like, automate
something in a way that produces certificate tokens, this would
be a place where we would very much welcome, like, that sort of contribution.
And if there's any interest in
staying up with these things, we have a computational law Telegram channel where you can get involved, kick around feedback and ideas, and start populating this space together with us. I believe that might be it. Probably the best thing to do is go to the site and click on "contact." Join the email list for an even more curated list of when we're doing things, and communication.

The Telegram is
great. I live there. But it can be chaotic if you're not used to dense chatter on
Telegram. I feel like we should say
something else about this. This — this challenge also is
part of the release, the first release, of the publication, which is — our
soft launch is Friday of this week. The theme of the first
release is automated and autonomous legal
entities. Several of the articles are on
that. Several of the projects are on that. Several podcasts
are on that.

As well as other law themes. There's the anchor article where
the big vision is set on what is computational law and that's
amazing. And one of the things I know we
want to work on, with the conference
organizer, is actually modeling the legal entity aspect of,
like, a robot arm that creates art.
So we think this is a very — it's sort of adjacent to a robotic
and DAO publishing company. It's more art. It's not
different in kind. In some ways, it's a lot easier, because the housing of the robot arm is actually a place where we can understand and work on the code.
You know what the robot will do.
We think we do.

And we can start to engineer
against certain scenarios and hypotheses. What if the artist owns the art,
but the robot is doing something, or
the consortium that purchased the arm is considered the owner,
or if the robot itself is considered the owner, on and on.
There's always permutations on scenarios. You can't understand
law or legal outcomes in the abstract. Law can only be understood when
applied to facts. That's why lawyers say "it
depends," because maybe they don't know all the facts yet.
We think this will be a great platform to engineer all the
relevant facts and play it against different scenarios to
see whether they're getting the expected results for the legal
roles, relationships, rights, and responsibilities, and
fundamentally the legal outcomes we're seeking to engineer. So that's, we hope, that will be
one of the challenging results that we can hack together on.
If you have interesting ideas on this, well, intersection of
robots, AI, and law, with respect to legal entities or, more generally, we'd love to
hear about them right now.

We have some time for open
discussion. Thank you. [Applause] Any questions? Hi. I just want to ask you about checks and balances when it comes to information coming in. Lately I've been curious and interested in AI. What are your thoughts around filtering that information, and the checks and balances on information feeding into the system? In other words, if you put garbage in, you might get garbage out. How do you do the checks and balances? So let me see if I've got
this right. The basic question is, like, if
you're set up so there's some stream of
data that you're trying to — that
you're ingesting as a decision-making function in the
internal governance of the DAO, what happens if that data gets
corrupted or something like that? And it starts producing
all these terrible outcomes? And to answer that, I would say,
you can start to — there are
certain things you can do that would be modeled
kind of after, like, high frequency trading algorithms.
So if there's a certain amount of movement in one direction or the other that signals something super volatile, you could have it set up so that it tapers off, or so that it requires somebody to look at it, or the group to come together and reach some sort of consensus, before it can proceed forward.
You have to create a legal pause button to ensure what
you're doing is not going in the wrong direction.
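The threshold-triggered "legal pause button" described here, modeled loosely on high-frequency trading circuit breakers, might be sketched roughly as follows. This is a toy illustration only; the class, the window and threshold values, and the quorum mechanic are all assumptions, not anything from the talk.

```python
from collections import deque

class PauseButton:
    """Toy sketch of a 'legal pause button' for a DAO's data-driven decisions.

    If too many calls land in one direction within a recent window (a
    volatility signal, as with trading circuit breakers), the system halts
    and requires explicit human sign-off before proceeding.
    """

    def __init__(self, window=10, threshold=8):
        self.window = window          # how many recent calls to remember
        self.threshold = threshold    # one-sided call count that trips the pause
        self.recent = deque(maxlen=window)
        self.paused = False

    def record_call(self, direction):
        """direction is +1 or -1; returns True if the system may proceed."""
        self.recent.append(direction)
        if abs(sum(self.recent)) >= self.threshold:
            self.paused = True        # suspiciously one-sided: stop and escalate
        return not self.paused

    def human_override(self, approvals, quorum=3):
        """Resume only after a quorum of reviewers reaches consensus."""
        if self.paused and approvals >= quorum:
            self.paused = False
        return not self.paused
```

The point of the sketch is only that the pause is automatic but the resumption is deliberately human: the group has to come together before the system proceeds.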

That's one
thing that comes to mind there. Yeah, going further, some of it comes down to good old-fashioned information security. If you're on a high-frequency trading platform and somebody hijacks it and feeds it wrong market information, so it starts buying the wrong things, that's probably crime and fraud, of course, but it is one way you can get corrupted information, garbage in, to manipulate activity. Information security doesn't go away. It's even more important with automated, and especially autonomous, systems to make sure they're getting the inputs expected, and that the oracles or other sources are sound. You also have to be thinking
beyond a direct attack. Whether the sources you've chosen are
really appropriate. I think you mentioned the word "bias." So that's a big question. For example, take this system for a revolving loan fund. You can't really see the swim lanes on this too well (they didn't come through in our JPEG), but imagine there are swim lanes, with decision points where all the information for a loan application is presented to a board, so they can make the decision.

The way this system is created, people log in, authenticate themselves, and have authorization to approve a loan, whether those are higher-risk loans or microloans. So it depends on two things. One, making sure the authorized people are logging in to set the parameters, approval chains, and workflow points. But two, are you gathering the right information on your loan application, or the other information you assume you'll be getting from Bloomberg and other places? This is basic business judgment. One of the things Sandy Pentland says, in his "What About Computational Law" article that we're releasing on Friday, is the critical importance of the legal aspects of these systems: modeling them, not forgetting about them, not assuming all your initial decisions were absolutely correct, but monitoring and then adapting them.

If some of the information is biased and you need other information, and you need to change things in order to hone the model so it makes better decisions with less of the bias you don't want, that needs to be built into the design of the system. So Sandy very much advocates that in computational law systems, everything from creating a statute to managing your contracts or other business instruments, most of the action shouldn't be in the initial design phase but in the design of continuous adaptation.
And information that might be perfectly good in 2020 may end
up being biased and not particularly reflective of the key inputs in 2021 or
2022. You have to continuously hone
and identify where the bias or other inefficiencies are as you go
with computational law systems.

I suppose with any system, but
we think this needs to be part of the DNA of computational law. Maybe we can't conceive of all of this today.

Hey, Adrian. One of the primitives, maybe not in the sense you meant it, is reputation. Now, the reputation issues sort of cross over all the decentralized AI stuff we heard about earlier today, in all the domains. Where does computational law
impact reputation or vice versa? In other words, is there a
narrow subset of projects that are already underway? Or aspects of the discipline
that can be applied to the reputation components?
I think there are examples out there now.

Estonia has the e-birth certificate. There's a land registry that's on Ethereum. I think these different groups are starting to plot some points
down on what the factors are of identity
you need to have in order to properly authenticate what you're doing. As more and more governments and
different players start to do this, that will become a little
bit more clear. You'll start to identify more of
the general trends and be able to
say, okay, we've seen of all these places,
here are the five most common features you should look for,
and go from there.

Yeah. I think that's all good practice. If we go deeper, is there something in there that might be a legal primitive? First is the identity itself.

If there's a creature or human being, does it have a legal personality? We'll release on Friday this hour-long podcast with Drew Henkis, to start to identify what we think legal primitives would or would not be. If we considered identity a legal primitive, and there's consensus around that, you could imagine constructing that primitive, that concept of a primitive, such that the identity has attributes that may be part of the identity of the primitive.

So maybe other identifiers. Some, in fact, may be things that adhere to it, like identification. If there were a basket of identity attributes, you could have an agnostic sort of generic thing we call "reputation attributes." At that level, I guess, there could be reputation that was like a legal primitive. But honestly, there's not even consensus among the few people talking about legal primitives on how this would play out with identity at all at this point, or whether identity is a legal primitive. We're not sure at this point.
I don't want to be that speaker that answers a question with a question, but I would love for you to think about that, Adrian, and talk to people about it, and then talk to us about whether you think identity is a legal primitive, and what the association of a third party adhering to the identity might be.

The link between identity and reputation is context.

And what's missing, because we
don't have science around reputation worth anything much
these days, what's missing is introducing — not worrying
about digital identity and identity as
a legal construct, but rather introducing the principles that I think law can
bring into defining the context. In other words, adjudication of
reputation, or the gaming of reputation, or how do you
control the gaming of reputation? Don't bother about the stuff at the low levels. That would be too low level. But
rather this issue of defining context in the legal sense, in
adjudication or appeal, et cetera.
Thank you. That's helpful. Sir? Did you switch seats to be in mic position?

Just two questions. Right now we're facing a situation with supply chains where some processes are really long, and if we're talking about autonomous systems and fully automated processes, then in some cases, when something happens and something goes wrong, it's hard to define who will be in charge of paying for that.

And in some juridical situations, there really are cases with so many participants in the process, for example, in the death of a person, that nobody can be blamed, because the chain is really long. There are movies about that. But will we see something like insurance funds for autonomous systems and robots and AIs, to ensure that in any case of damages from such systems, those damages will be compensated?
I can start us off.

I thought you posed the question really well, but there's one word I must suggest we amend. You said it's hard to know what happened and who is in charge. But let's get right to the real point. Who is accountable or responsible? Who is going to be left holding the bag if something goes wrong? So look at that dimension of it:
what we want to avoid is an
accountability gap. Some people, it seems, in the early days of the DAO especially, were specifically attempting to achieve an accountability gap, where the idea is: if something happens, you can't touch us; we're not part of any jurisdiction. It's very questionable whether that's a beneficial or sustainable or
desirable system at all. From an MIT Law perspective, we're looking at systems that operate well based on our social expectations, which includes accountability. To me, when human beings and corporations utilize automated or autonomous systems as tools, it's not a big change in terms of who is accountable.

You need attribution at that point. To
whom do you attribute the act? Now closer to your question
and assumption is what happens when the system is kind of
taking actions and causing consequences without human review or approval or even
knowledge? Okay. So now we're in the fun zone. In my mind, I believe it is not just possible but essential that, when these systems start coming online, a major part of the equation, like a requirement, is that there be financial and other mechanics to ensure there's no accountability gap. If all one has is the automated or autonomous system to hold accountable, we must look at things like insurance, bonds, reserve funds,
and things that are proportional to
the harm or accountability that may be required for the type of
thing it's doing.

If it's selling books, that may
be relatively low. If it's doing munitions and
nuclear weapons distribution, it might be quite high.
And everything in between. So looking at the potential exposure of different business activities is a bit more magical art than science, but there are risk managers who can begin to size up what would be appropriate risk management capabilities to have for certain situations: is insurance appropriate, and if so, what kind of product and what would the premium be? Do I need a bond, or a liquid reserve fund, or other things like that? Or is there a
common defense fund? There's different ways to start to build
in accountability, but I'd say it becomes essential. And it
ought to be built into the process of having fully
autonomous systems that are capable of
causing harm. So I guess my answer to your
question, like, is this something that could be thought
of? I'd say, like, hell yeah.

In fact, I think it must be. It must be thought of and it
should really be part of the core design.
And the second question, I think, is related to the first one. Will we see something like an open source license for AI and robotics? Not for the source code, but to open the robot itself, if that's possible, in the future? Will we see some license like this?

What would the license do?

Like, for example, with an open source license, you are declaring: I'm not the owner of this code anymore. I'm opening it to society. And I'm not taking on the burden of the damages or anything. Is the same thing possible for robots, AI, for example?

So I think there's a couple of concepts there. One of them is the concept of emancipation. Open source is sort of close, but let's get really point blank on the target.

One could imagine, one
could structure, like, legal documents and business models
and social arrangements where we deliberately intend for some
code to be emancipated. So it was owned at some point,
and at a certain point it sort of owns itself or it is
independent. It becomes autonomous or I would
say emancipated. A young person can't form a contract when they're 12 years old, but by the time they're 30 years old, they can. One of the things that happens there, technically and legally, is emancipation. A slave,
similarly, cannot own property. In fact, they were considered
property. When the slave is freed, they're
emancipated. So an emancipation-type event is
one way we could see this happening.
Another thing is what I sometimes refer to as the broken leash. You have this dog, and it's on a leash, and things seem to be going pretty well, then the dog bites through the leash or otherwise runs off, and the leash is out of your hands and now it's broken. We have this rogue AI going around, for all intents and purposes. Or maybe the leash is relinquished, because the only person who created, owned, and operated it is now dead, or they went to jail, or they don't feel like doing it anymore, or what have you.

You can imagine conditions that result in that.
In fact, I can go one step further: in 2019, I would say I envision it is inevitable that we will see these things develop, in the next handful of years. I'm not going to put a number on it, but in the future, in your lifetime. But the interesting question becomes, okay, how could an emancipated AI or robot or piece of software be a kind of wholesome, healthy, desirable, legitimate creature on the terrain with us? So this starts
to create questions about what types of requirements
or constraints might be appropriate for that.
This is a question that it's just about time to have realistic conversations about.

It's still premature, but it's
not too premature to start thinking and talking about it.
I'm glad you asked. Yeah, a few years ago, there was the question of who owns the intellectual property rights for the monkey selfie, where the animal took a selfie of itself. Who owns that? One of the things you can look to in order to determine that ownership is: does a legal personality container exist for that entity, such as for an animal in certain places?

Animals do have legal
personality rights in some places. I can imagine that if there were some sort of registration process, some sort of indication of what those mechanisms were and the decision-making that went into it, you could have a legal personality for robots. What that actually looks like remains to be seen, but I think we're getting closer and closer to understanding what the contours of it look like. One component that could show up on that contour, that you could connect with it, would be something like a license plate, even if it's virtual. You could say, ah, this AI or autonomous entity belongs to, you know, Acme Corporation or Sandy Pentland. It's a personal shopping bot or what have you.
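A "license plate" check like the one being described here could be sketched as a toy record a counterparty queries before transacting. Every field name, the `LicensePlate` type, and the `safe_to_deal` logic are illustrative assumptions, not any real registry standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LicensePlate:
    """Hypothetical queryable record attached to an autonomous agent."""
    plate_id: str
    owner: Optional[str]   # None means the agent is emancipated (owns itself)
    insured: bool          # does it carry the standard insurance or bonding?
    scope: frozenset       # activities it is authorized to perform

    @property
    def emancipated(self) -> bool:
        return self.owner is None

def safe_to_deal(plate: LicensePlate, activity: str) -> bool:
    """Before transacting, check that recourse exists and the deal is in scope."""
    has_recourse = plate.insured or not plate.emancipated
    return has_recourse and activity in plate.scope

shopper = LicensePlate("AI-042", owner="Acme Corporation",
                       insured=True, scope=frozenset({"personal_shopping"}))
print(safe_to_deal(shopper, "personal_shopping"))   # True: owned, insured, in scope
```

The design point is the one made in the talk: an emancipated agent with no owner and no insurance offers a counterparty no recourse, so the check fails even if the activity itself is in scope.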

That AI, when I look at the license plate (the license plate is visible to others), is emancipated. Good to know. Well, then maybe before I conduct a deal with it, I should check: does it have the standard insurance and bonding or not? Is it fully paid up? Am I doing something within the scope of its capabilities? Is there a file I can query about who owns it, or whether it owns itself and is emancipated, and what my recourse and remedies would be if it all goes terribly wrong? But we also need to keep our eyes on: is it all going beautifully, wonderfully right? Some of
these things can be extraordinary for the innovation
and economic prosperity and social issues that they can help
us to resolve and achieve some of these deeper goals. What we
really need to be doing now is fundamental engineering and sort of pre-competitive research and development on designing the types of containers through which we can get the best out of these capabilities while also maintaining reasonable risk management, and keeping our values intact.

And I think that pretty much
brings us to the end of the session.
Yes. Sorry. Do you have a final
question? My question is: does there already exist a framework which allows, or could allow, automatic litigation? So one party does something that happens not to be legal, and another would detect the wrong and open litigation. Say I don't want to pay you: in certain contexts, for example, an autonomous car not wishing to pay a parking lot, or in trading, automated litigation might often be an option. But the question is, do we have a framework for that, so that, legally, it could be litigated, on the trading floor or in the parking lot?

Love it. Thank you. Last question. Quick. So you could set up a framework for that.

A lot of times when you enter into agreements with banks or other parties, there are arbitration clauses. You can imagine an online dispute resolution process, like what Amazon or eBay has. Those are a lot more efficient than courts are. If you had something like that set up, where when you're setting up a DAO you have a check box saying, you know, this DAO prefers this sort of online dispute resolution but will accept any of the following online dispute resolutions, you could have a situation where something happens, and there's a way to quickly expedite all of these legal processes, and it can automatically run through them.

Indeed. To play it out quickly, let's take the parking lot.

That's a good one. An autonomous vehicle doing things like Grub Hub deliveries. It shows up at a parking garage. It has a chip or something so it can be identified, and it knows where payment could go. One of the things you could structure on those components and building blocks: if I were a parking garage owner, I might be part of a consortium that developed a standard that would ask, can I pre-approve your credit card in advance for the amount of time you'll be staying here? If I did that but it didn't clear by the time it was time for the car to go, I could have an agreement, made when it entered the garage, that I can maintain possession of your vehicle until I get payment. Or something else. That's where we get into
questions of recourse. So the credit card was pre-approved, but when they processed the sale it didn't go through, because a limit was reached or there was a chargeback.
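The garage scenario being played out here (pre-approving a hold at entry, then deciding at exit whether to release the vehicle, rely on an insurance fund for recourse, or retain the car until payment) could be sketched as a toy decision function. Every name and outcome below is an illustrative assumption, not a real payment or parking API.

```python
def settle_exit(hold_approved: bool, charge_cleared: bool,
                has_insurance_fund: bool) -> str:
    """Toy sketch: decide whether the garage releases the autonomous vehicle."""
    if hold_approved and charge_cleared:
        return "release"                 # normal case: payment went through
    if has_insurance_fund:
        return "release_with_recourse"   # let the car go; recover from the fund
    return "retain_vehicle"             # per the entry agreement, keep possession

print(settle_exit(True, True, False))    # release
print(settle_exit(True, False, True))    # release_with_recourse
print(settle_exit(True, False, False))   # retain_vehicle
```

The sketch only captures the shape of the agreement: the recourse check at entry is what lets the garage release the car even when the charge later fails.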

If you can check up front, as part of a data exchange, that it has a certain insurance or fund, then at least you know you have recourse overall, so you can let the car go. So does that make sense? It's largely built upon and just uses the existing systems and frameworks, but we now need to sculpt more APIs and add a little bit more to the transaction codes and the business models in order to build out full use of the capabilities.

So with that, I think the full use of our capability is now expired. First of all, I think we should
thank Brian and Dazza for this amazing
last session. Thank you, guys. [Applause]

Listen, it was great. I learned a lot. Also, I think it's cool that, like, we can do this. This is a green space.

Definitely, definitely. With
this, I'd like to close the event. It's been a long day. A
lot of information. Right? But I think we all learned something
new, like today. So I hope to see you next year,
maybe here, maybe in Europe, maybe in
Petersburg, who knows? But yeah, before we leave, I have to say a couple of things. First, thank you for coming here. Thank you for showing up and doing the networking. Second thing, we're going to start up a Boston meetup group. There's not much time left, because we have to leave at six. And the important message is this one: we have drinks planned in the mid hall, which is like a bar ten minutes away from here. I hope to see you there and you
can ask all the questions you couldn't ask now, or just chat
with a beer. With this, we close the event, and thank you
very much.

