How Distributed SQL helps mitigate downtimes and outages - Franck Pachot from Yugabyte
Cloud Commute, March 07, 2024
00:22:42, 20.79 MB


In this episode of our podcast, host Chris Engelbert welcomes Franck Pachot from Yugabyte, a distributed SQL database company. Franck, a developer advocate with extensive experience in databases, discusses the advantages of distributed SQL databases over traditional single-node SQL databases. He explains that Yugabyte offers the robust features of PostgreSQL with the added benefits of horizontal scalability and high availability.

Franck delves into the technical aspects of distributed databases, emphasizing their ability to maintain consistency and provide ACID properties while scaling across multiple nodes. This setup ensures continuous uptime and flexibility to handle high-velocity data without downtime during upgrades or patches. He highlights that Yugabyte is designed for OLTP workloads but can handle various use cases due to its SQL compatibility.

The conversation touches on deployment strategies, with Franck recommending cloud deployment for its elasticity and cost-efficiency, though acknowledging that traditional servers or a hybrid approach can also work. He discusses the practical aspects of migrating between cloud providers seamlessly and the importance of maintaining predictable performance by using similarly sized VMs during such transitions.

For questions, you can reach Franck at:

You can learn more about Yugabyte at:

The Cloud Commute Podcast is presented by simplyblock (https://www.simplyblock.io)


01:00:00
In SQL, usually if you run the

01:00:02
same query two times on the same

01:00:05
data, you expect the

01:00:06
same results, which makes it

01:00:08
easier also to build tests and to

01:00:11
validate a query.

01:00:12
And with vectors, you may have a

01:00:16
different result.

01:00:22
You're listening to simplyblock's

01:00:24
Cloud Commute Podcast, your weekly

01:00:25
20-minute podcast

01:00:26
about cloud technologies,

01:00:28
Kubernetes, security,

01:00:29
sustainability, and more.

01:00:31
All right.

01:00:32
Hello.

01:00:33
Welcome back to our

01:00:33
podcast for simplyblock.

01:00:36
Today, I'm really happy to have a

01:00:38
good friend on the call or on the

01:00:40
podcast, Franck from

01:00:43
Yugabyte.

01:00:44
Franck, thank you for being here.

01:00:46
Thank you for inviting me.

01:00:48
Yeah, absolutely.

01:00:50
So let's, we only have 20 minutes,

01:00:52
so let's get right into it.

01:00:54
Maybe just say a few words about

01:00:56
yourself and Yugabyte.

01:00:58
What are you guys doing?

01:00:59
Yeah, quickly, I've always been

01:01:01
working with databases and a lot

01:01:03
of Oracle, Postgres,

01:01:05
and monolithic databases.

01:01:07
And I joined Yugabyte as a

01:01:09
developer

01:01:10
advocate three years ago.

01:01:13
And it's a

01:01:14
distributed SQL database.

01:01:17
Basically the idea is to provide

01:01:19
all the Postgres features, but on

01:01:21
top of a distributed storage

01:01:24
and transaction engine with the

01:01:26
same architecture,

01:01:28
based on Spanner.

01:01:31
So all nodes active, data is

01:01:33
distributed, connections are

01:01:34
distributed, SQL processing

01:01:36
is distributed, and all nodes

01:01:38
provide a logical view of the

01:01:42
global database.

01:01:44
Right.

01:01:44
So maybe just for the people that

01:01:46
are not 100% sure what a

01:01:48
distributed database is,

01:01:50
can you explain a little bit more,

01:01:51
you said a couple of different

01:01:52
nodes and data is distributed?

01:01:55
Obviously.

01:01:56
Yeah, yeah, yeah.

01:01:57
The main reason is that most of

01:02:00
the current SQL databases run on a

01:02:04
single node that can

01:02:06
take the reads and writes to be

01:02:08
able to check the consistency on

01:02:09
that single node.

01:02:11
And when there is a need to scale

01:02:12
out, then people go to NoSQL

01:02:14
where they can have multiple

01:02:17
nodes active, but then

01:02:19
missing the SQL feature.

01:02:21
So the idea of distributed SQL is

01:02:23
that today we can provide both the

01:02:26
SQL features, the

01:02:27
ACID properties, the consistency,

01:02:30
and the possibility

01:02:31
to scale horizontally.

01:02:33
And two main reasons

01:02:33
to scale horizontally.

01:02:35
High availability: if you run on

01:02:37
multiple nodes, one can be down,

01:02:39
the network can be down,

01:02:40
and everything

01:02:41
continues on the other nodes.

01:02:43
And also to scale, if you go to

01:02:47
the cloud, you want more

01:02:48
elasticity, you want to run

01:02:50
with a small amount of resources

01:02:52
and be able to add more resources.

01:02:54
With a single database,

01:02:56
you can scale up, but then you

01:02:57
have to stop it,

01:02:58
start a larger instance.

01:02:59
When you have multiple servers,

01:03:01
just add new nodes and then be

01:03:04
able to handle your workloads.

01:03:07
Right.

01:03:08
So that means what is

01:03:10
the target audience?

01:03:12
What is the main customer profile?

01:03:14
Is that companies with a lot of

01:03:17
data, with high velocity data, who

01:03:21
you would say is the

01:03:23
main customer?

01:03:24
For the amount of data, there are

01:03:27
also some users

01:03:28
with small databases.

01:03:30
I mean, 100 gigabytes is a small

01:03:31
database, but they need to be

01:03:34
always up, always available.

01:03:37
When you can scale horizontally,

01:03:38
you can also do rolling upgrades,

01:03:41
rolling patches,

01:03:42
so you don't stop the database,

01:03:44
you don't stop the application

01:03:45
when you upgrade, when

01:03:47
you patch the server,

01:03:48
when you do a key rotation.

01:03:50
So high availability matters

01:03:51
even for small databases.

01:03:53
And of course, the more data you

01:03:55
have, the harder it is to do the

01:04:00
backups, like with a simple

01:04:02
pg_dump. If you have a lot of

01:04:04
data, you have many constraints to

01:04:08
operate it, and horizontal

01:04:09
scalability makes

01:04:10
it easier.

01:04:13
Right.

01:04:15
Basically, it targets any use case

01:04:18
because a SQL database must handle

01:04:21
any use case, but

01:04:22
it's mostly optimized for OLTP for

01:04:25
two reasons, because data

01:04:27
warehouses, you don't need all

01:04:28
the transactions, so it's easier

01:04:30
to shard, without all the ACID

01:04:33
properties on data

01:04:34
warehouses.

01:04:36
And then there are some engines

01:04:37
with columnar storage.

01:04:39
So Yugabyte is really optimized

01:04:41
for OLTP and the analytics query

01:04:43
that run on OLTP applications,

01:04:46
but not to build a data warehouse.

01:04:48
Right.

01:04:48
Okay.

01:04:49
So if I get that right, the two

01:04:51
main use cases are like

01:04:53
you always have to be up.

01:04:54
So a single instance, like I think

01:04:57
it's built on Postgres or using

01:04:58
the Postgres protocol.

01:05:00
So a single Postgres instance,

01:05:01
even with like a failover or a

01:05:04
secondary could be a short

01:05:06
timeout, not

01:05:07
timeout, a short downtime.

01:05:12
Downtime, thank you.

01:05:13
That was exactly

01:05:14
what I was looking for.

01:05:15
A short downtime.

01:05:16
So that would be one use case.

01:05:17
You cannot justify any downtime or

01:05:21
downtime at all.

01:05:22
And on the other side, you have

01:05:25
these like massive data sets, but

01:05:27
you don't necessarily

01:05:28
need a whole

01:05:29
transaction around all of that.

01:05:31
Right.

01:05:32
Okay.

01:05:32
Cool.

01:05:35
So what do you think?

01:05:37
Just to mention in the case of

01:05:39
distributed SQL, you have all

01:05:41
transactional properties.

01:05:43
Even if you run multi-shard

01:05:45
transactions,

01:05:46
that's the difference.

01:05:47
I made the difference with data

01:05:48
warehouses where you may not need

01:05:50
that and you may have

01:05:52
some optimization when you don't

01:05:54
have all ACID properties.

01:05:56
But here the idea is to have all

01:05:57
ACID properties so that you can take

01:05:59
an application that runs

01:06:01
on Postgres and just run it

01:06:03
distributed maybe with more

01:06:07
latency, but higher throughput.

01:06:10
Right.

01:06:11
Okay.

01:06:11
Okay.

01:06:11
So the idea is that with Yugabyte,

01:06:14
you have the transactional

01:06:16
capabilities you would

01:06:18
lose when you're

01:06:19
going for a data warehouse?

01:06:20
Oh, that's interesting.

01:06:21
I wasn't 100% sure about that.

01:06:24
Interesting.

01:06:25
So let me see.

01:06:28
What does that mean for users?

01:06:33
You can have like all, you

01:06:35
basically can use Yugabyte as a

01:06:37
drop-in replacement for a Postgres

01:06:40
database, right?

01:06:41
Is that it?

01:06:42
In theory, yes.

01:06:44
That's the goal.

01:06:45
And we use the Postgres

01:06:46
code for the SQL layer.

01:06:48
So in theory, yes.

01:06:50
There are a few features that are

01:06:51
not yet working like in Postgres.

01:06:55
When you want to scale out the

01:06:56
DDL, for example, Postgres can do

01:06:59
transactional DDL.

01:07:01
We are implementing that, but when

01:07:03
you want it to be scalable, that's

01:07:05
different because

01:07:06
Postgres allows it, but

01:07:09
takes an exclusive lock.

01:07:10
And when you build a database that

01:07:13
must be always up with the

01:07:14
application always ongoing,

01:07:16
you have to do something different

01:07:18
than taking an exclusive lock for

01:07:20
the whole duration

01:07:20
of the DDL.
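
For readers unfamiliar with it, this is the Postgres behavior in question: DDL can run inside a transaction and roll back with it, at the cost of holding a heavyweight lock. A minimal sketch (table and column names are invented for illustration):

```sql
-- In vanilla Postgres, schema changes are transactional:
BEGIN;
CREATE TABLE invoices (id BIGINT PRIMARY KEY, total NUMERIC);
ALTER TABLE invoices ADD COLUMN paid BOOLEAN DEFAULT false;
ROLLBACK;  -- table and column both disappear, as if nothing happened

-- ALTER TABLE takes an ACCESS EXCLUSIVE lock on the table until the
-- transaction ends, which is the lock being discussed here: fine on a
-- single node, problematic for a database that must never block.
```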

01:07:22
So there are a few features that

01:07:23
are not there for the moment.

01:07:25
There are also some considerations

01:07:27
about the data model

01:07:29
because data is sharded.

01:07:31
You can make some decisions to

01:07:33
shard it on a range of values or

01:07:35
by applying a hash function.
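
The range-versus-hash choice surfaces directly in the primary key definition. A sketch in YugabyteDB's YSQL dialect (table and column names are invented for illustration):

```sql
-- Hash sharding: rows are spread evenly across tablets by a hash of
-- the leading key column; good for write scaling and point lookups.
CREATE TABLE user_events (
    user_id  BIGINT,
    event_ts TIMESTAMPTZ,
    payload  JSONB,
    PRIMARY KEY (user_id HASH, event_ts ASC)
);

-- Range sharding: rows are stored in key order; good for range scans,
-- but a monotonically increasing key can hotspot the last tablet.
CREATE TABLE readings (
    recorded_at TIMESTAMPTZ,
    value       DOUBLE PRECISION,
    PRIMARY KEY (recorded_at ASC)
);
```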

01:07:36
So there are little things you may

01:07:38
think about, but basically the

01:07:40
idea is that you don't have

01:07:41
to change the code of the

01:07:43
application from

01:07:44
Postgres to Yugabyte.

01:07:47
You may think about

01:07:48
the data modeling.

01:07:49
If you have a bad design that just

01:07:52
works on Postgres, it's always

01:07:55
worse when you add some

01:07:56
network latency.

01:07:58
So you may think a bit more about

01:07:59
the good design, same

01:08:01
recommendation,

01:08:02
same best practices,

01:08:04
but the consequence may be a bit

01:08:06
more important

01:08:07
when you distribute.

01:08:08
Yeah, that makes sense.

01:08:10
I hear you.

01:08:10
We had the same thing with a

01:08:12
different company we've worked for

01:08:13
in the past where it was

01:08:14
kind of the same thing.

01:08:15
It used the same API, but it

01:08:17
worked differently.

01:08:18
It had a network layer underneath,

01:08:21
and now suddenly everything had a

01:08:23
network operation

01:08:24
in between or a network

01:08:25
transaction in between.

01:08:27
It looks the same, but you still

01:08:31
have to think about it a little bit.

01:08:34
So when you install Yugabyte, how

01:08:38
would you recommend

01:08:40
deploying that today?

01:08:42
Would you recommend buying some

01:08:44
traditional servers, co-hosting

01:08:46
them in a data center,

01:08:47
or going into the cloud?

01:08:49
I know the answer, but--

01:08:51
Yeah, you can buy a bare metal

01:08:53
server and install it, but the

01:08:55
real value is the elasticity,

01:08:57
and then the real value

01:08:59
is going in the cloud.

01:09:02
Because the point is that when you

01:09:04
go to the cloud, if you do the

01:09:06
same kind of provisioning,

01:09:08
then it will cost a lot.

01:09:10
You have an

01:09:11
advantage going to the cloud.

01:09:13
It can be cost efficient if you

01:09:15
can have small

01:09:16
instances and add them.

01:09:18
So any Linux, VM, or container can

01:09:24
be OK for the nodes.

01:09:26
There is no strict requirement. The idea also

01:09:29
is that it can run

01:09:30
on commodity hardware.

01:09:31
You just need network between

01:09:33
them, no special

01:09:34
hardware, and you can deploy it.

01:09:38
There are some users running it on

01:09:40
Kubernetes, which is a great

01:09:43
platform when you can scale,

01:09:44
because all pods are equal, so you

01:09:46
can just scale the stateful sets.

01:09:49
Of course, I will not recommend

01:09:50
deploying a database on Kubernetes

01:09:53
if you don't know

01:09:54
Kubernetes at all.

01:09:56
If you have all the applications

01:09:57
in Kubernetes, it makes sense to

01:09:59
put the database here, but

01:10:01
if it's the first time you touch

01:10:02
Kubernetes, probably a database is

01:10:04
not the best to start,

01:10:07
because it is stateful and there

01:10:10
are some considerations.

01:10:11
But yeah, that's a good platform

01:10:13
if you have

01:10:13
everything on Kubernetes.

01:10:15
It can be also VMs, it can be also

01:10:17
hybrid, on premises and in the cloud.

01:10:21
That's also the idea.

01:10:22
It can be multi-cloud.

01:10:25
That's also an advantage when

01:10:28
you distribute, you can, for

01:10:29
example, move from

01:10:30
one cloud provider

01:10:32
to the other just by adding new

01:10:34
nodes and let the cluster

01:10:36
rebalance and then

01:10:37
removing the other nodes.

01:10:39
So, yeah, a lot of possibilities.

01:10:43
The goal is to keep it simple to

01:10:45
have everything

01:10:47
done by the database.

01:10:48
When you scale on Kubernetes, the

01:10:50
only command that you do outside

01:10:53
of the database

01:10:54
is scaling the pods

01:10:55
and then the database will detect

01:10:57
it, rebalance the data.

01:10:58
The goal is that you don't have to

01:11:00
re-shard yourself when

01:11:03
you do this kind of thing.

01:11:05
I found the migration strategy,

01:11:08
you just pointed

01:11:08
out, really interesting.

01:11:10
Basically, you create a big

01:11:12
cluster over multiple cloud

01:11:13
providers and you just

01:11:16
scale down bit by bit

01:11:18
on the one and

01:11:19
scale up on the other.

01:11:21
So, that was at

01:11:22
least my understanding.

01:11:23
That's an interesting thing.

01:11:26
From the top of my head, one

01:11:30
question that probably would come

01:11:32
up, and you probably

01:11:34
had to answer a few times,

01:11:37
how does Yugabyte

01:11:38
handle different sized VMs?

01:11:40
Because in this case, there's no

01:11:42
chance to get the same

01:11:43
setup in terms of VMs.

01:11:46
Many systems have issues with

01:11:47
differently sized things.

01:11:50
Yeah, really good question.

01:11:52
The goal for predictable

01:11:53
performance is to have nodes that

01:11:57
are kind of equal and it makes

01:11:59
also observability much easier.

01:12:01
When you have to start to think

01:12:03
about the CPU usage on different

01:12:04
instances, that's more difficult.

01:12:06
So, it's possible to run on

01:12:08
different sizes, but you should

01:12:11
consider that just

01:12:12
temporary for migration.

01:12:14
So, it's more like a temporary

01:12:17
thing and you have to

01:12:18
expect some kind of impact.

01:12:21
Yeah.

01:12:22
Meanwhile.

01:12:22
And being hybrid cloud also from a

01:12:25
cost point of view,

01:12:27
that can be very expensive.

01:12:29
If you run a distributed database,

01:12:31
there is a lot of

01:12:32
data that is exchanged.

01:12:34
So, it can cost a lot.

01:12:35
It makes more sense to move from one cloud

01:12:37
to the other than to run

01:12:39
always on two clouds.

01:12:40
There are some customers doing

01:12:42
that just because they want to be

01:12:43
sure that on Black Friday, they can

01:12:45
have enough instances

01:12:47
on two cloud providers.

01:12:50
But of course, there

01:12:50
is a cost behind that.

01:12:52
It's more about the agility of

01:12:54
changing without

01:12:56
stopping the application.

01:12:58
Yeah, I think the biggest cost in

01:13:00
that situation would be the

01:13:03
traffic between the nodes, right?

01:13:05
Because you have to pay for

01:13:06
egress, ingress, whatever.

01:13:09
Depends on the cloud provider, but

01:13:11
most of them, when you move data

01:13:17
from the cloud, you pay a lot.

01:13:20
When you move to their cloud, they

01:13:21
are happy that you

01:13:23
come with more data.

01:13:24
So, that's fine.

01:13:26
Right.

01:13:28
So, let me see.

01:13:30
What do you think is the biggest

01:13:33
trend you see right now in terms

01:13:35
of databases overall?

01:13:37
Not specifically relational or, I

01:13:40
don't know, what do you think is

01:13:42
the biggest star, the biggest new

01:13:44
thing that's coming?

01:13:46
I would say long term trend.

01:13:50
I would say simply SQL because SQL

01:13:54
was popular and then during the

01:13:56
NoSQL times, it

01:13:59
was not so popular.

01:14:00
And now the popularity

01:14:02
of SQL is growing again.

01:14:05
So, I think this is also a trend

01:14:06
considering SQL for many solutions

01:14:09
rather than thinking about

01:14:11
different databases

01:14:12
for different use cases.

01:14:14
So, that's the general trend.

01:14:16
The short term trend, of course,

01:14:19
everybody is talking about vector,

01:14:21
PgVector, storing embeddings,

01:14:24
indexing embeddings.

01:14:26
We'll see what happens.

01:14:27
It's kind of a shift in mind for

01:14:30
SQL database because we are more

01:14:32
used to precise results.

01:14:35
And this is more like fuzzy search

01:14:37
and non-deterministic results.

01:14:43
But that comes also with this

01:14:46
trend of SQL databases.

01:14:48
They are not only for the pure

01:14:50
relational data, there are other

01:14:52
use cases in them.

01:14:53
And I think vectors like PgVector

01:14:57
and Postgres and Postgres

01:14:59
compatible

01:15:00
databases will be a thing.

01:15:02
But I don't really

01:15:04
know about trends.

01:15:05
A few years ago, it was all about

01:15:06
blockchain and blockchain in all

01:15:08
databases and maybe not.

01:15:12
Right. I hear you.

01:15:14
I'm also really careful when it

01:15:16
comes to like hypes and trends.

01:15:18
They come and go.

01:15:21
Yeah.

01:15:23
So, two things on that answer were

01:15:26
really interesting.

01:15:27
First of all, you said it's all

01:15:30
going back to SQL, which kind of

01:15:31
reminds me of this like little

01:15:34
plot with the

01:15:34
evolution of NoSQL.

01:15:36
Like no SQL, we don't want SQL.

01:15:40
Then it was like all

01:15:41
NoSQL, nothing else.

01:15:43
And now it's like, no, it's SQL.

01:15:47
Right.

01:15:49
I don't remember. It was

01:15:50
like five or six stages.

01:15:52
If I can find it, I'll put it into

01:15:54
the blog post because the picture

01:15:55
or this plot is

01:15:57
just like awesome.

01:15:59
So that was the first one.

01:16:02
And the second one, right, with

01:16:04
vector databases, it was

01:16:05
interesting that you mentioned you

01:16:07
can't expect like 100%

01:16:09
real results.

01:16:12
Right. And I think that is

01:16:13
something we have to learn to

01:16:15
work with in the future,

01:16:16
especially the

01:16:17
bigger the data sets

01:16:19
get that we need to analyze,

01:16:21
the more important it is to learn

01:16:23
to work with, well, let's call

01:16:26
it estimates. How good

01:16:28
they are or not, right.

01:16:30
In SQL, usually if you run the

01:16:33
same query two times on the same

01:16:35
data, you expect the same results,

01:16:38
which makes it easier also to

01:16:39
build tests and to

01:16:41
validate a query.

01:16:43
And with vectors, you may have a

01:16:46
different result.

01:16:48
Yeah, which you can do in

01:16:51
specifically Postgres.

01:16:52
I'm not sure about some other

01:16:54
database, but you have this like

01:16:55
table and not table spaces.

01:16:59
Not table space,

01:16:59
what do you call it?

01:17:00
Like sample space where you say,

01:17:03
OK, take this like massive data

01:17:05
set and just give me like a sample

01:17:07
rate of like 10%.

01:17:09
That kind of thing happened in the

01:17:10
past and it already gave you like

01:17:12
interesting results when you

01:17:13
reloaded a Web page and the graph

01:17:15
was like slightly different.

01:17:17
But most of the time you use that

01:17:19
when you knew you just wanted to

01:17:21
have like a bare overview,

01:17:25
you wanted to have like the form

01:17:26
of the graph, not

01:17:27
the precise thing.
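
The Postgres feature being reached for here is most likely the TABLESAMPLE clause. A minimal sketch (table name invented):

```sql
-- Scan roughly 10% of the rows and aggregate over the sample.
-- Two runs can return slightly different numbers, which is exactly
-- the "slightly different graph on reload" effect described above.
SELECT avg(value) AS approx_avg
FROM measurements TABLESAMPLE BERNOULLI (10);

-- SYSTEM sampling is faster (it picks whole pages) but less uniform:
SELECT count(*) FROM measurements TABLESAMPLE SYSTEM (10);
```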

01:17:29
Right. And a vector database goes

01:17:31
into the same direction, plus

01:17:33
adding some more

01:17:35
things on top of that.

01:17:37
So Yugabyte

01:17:39
is a Postgres compatible database,

01:17:41
I think PgVector can

01:17:43
perfectly run in Yugabyte,

01:17:45
right? You can use that?

01:17:46
PgVector works.

01:17:48
The indexes

01:17:49
there is work ongoing.

01:17:52
Basically, everything that is at

01:17:54
SQL level, extensions at SQL level,

01:17:58
work on Yugabyte because it's the

01:18:00
same code. When it touches the

01:18:03
storage, then it must be a bit

01:18:05
aware of the distributed storage.

01:18:08
We don't store in

01:18:09
heap tables and B-trees.

01:18:10
We store in LSM trees and

01:18:12
then those

01:18:13
operations may be different.

01:18:14
So today you can use PgVector, but

01:18:17
not the same indexing.

01:18:20
And also because you were talking

01:18:21
about the different trends, the

01:18:23
goal is also not to build a

01:18:25
different index for each new trend

01:18:27
and to build an

01:18:29
index that can adapt.

01:18:31
Already in PgVector, there are, I

01:18:33
think, two or

01:18:33
three kind of indexes.

01:18:35
And this has changed

01:18:36
in less than one year.

01:18:39
So, yeah, but better to have

01:18:40
something that is flexible enough

01:18:42
to be adapted to the different

01:18:45
kinds of indexes that will come.
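
For reference, this is roughly what pgvector usage looks like at the SQL level (dimension and names are illustrative; as noted in the conversation, index support in Yugabyte differs from vanilla Postgres):

```sql
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE docs (
    id        BIGSERIAL PRIMARY KEY,
    embedding vector(3)          -- tiny dimension, just for illustration
);

INSERT INTO docs (embedding) VALUES ('[1,0,0]'), ('[0,1,0]');

-- Nearest neighbor by Euclidean distance (the <-> operator):
SELECT id FROM docs ORDER BY embedding <-> '[1,0.1,0]' LIMIT 1;

-- The "two or three kinds of indexes" mentioned are pgvector's ANN
-- index types, e.g.:
-- CREATE INDEX ON docs USING ivfflat (embedding vector_l2_ops);
-- CREATE INDEX ON docs USING hnsw (embedding vector_l2_ops);
```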

01:18:47
I think there is one big

01:18:49
difference because you also

01:18:49
mentioned blockchain.

01:18:50
I think there is one big

01:18:51
difference. Here

01:18:52
we have a solution to an actual

01:18:54
problem, something

01:18:55
we want to resolve.

01:18:56
Whereas I'm not going

01:18:58
to bash on blockchain.

01:19:01
No, but that's right.

01:19:05
So you said you can deploy it into

01:19:08
Kubernetes and since this podcast

01:19:11
is very cloud and Kubernetes

01:19:15
focused, what do you think is the

01:19:18
worst thing people can overlook

01:19:20
when running Yugabyte in the cloud?

01:19:23
I think storage is probably a

01:19:26
complicated thing.

01:19:28
Yeah, for sure storage is the big

01:19:31
advantage of a distributed

01:19:33
database is that you don't have to

01:19:35
share the storage because the

01:19:37
database handles that.

01:19:41
So you can have local storage, but

01:19:43
of course you need also to think

01:19:45
about durability and performance.

01:19:48
You can run with local NVMe disks

01:19:51
on each instance, which will

01:19:54
provide the best performance,

01:19:56
which may be okay for availability

01:19:58
because if you run in multiple

01:20:00
zones, you can lose one and

01:20:02
everything continues.

01:20:04
But if you lose two zones and it

01:20:06
is local storage, then

01:20:07
you may lose some data.

01:20:09
So usually, currently, customers

01:20:12
run on block storage like EBS in

01:20:17
AWS, which has the advantage that

01:20:21
storage is persistent in addition

01:20:23
to the high availability of

01:20:25
multiple zones, the

01:20:27
storage is persistent.

01:20:28
Of course, there are some

01:20:30
performance considerations and

01:20:34
sometimes the performance reminds

01:20:37
me of when we were running on

01:20:39
spinning disks a few years ago

01:20:41
because you have the performance,

01:20:45
the latency and the throughput

01:20:47
limitations of the storage itself,

01:20:49
but also each instance.

01:20:51
An EC2 instance has a limit and

01:20:54
you can reach also those limits.

01:20:56
So the storage is important

01:20:58
thinking about performance,

01:21:00
durability and the agility also.

01:21:03
It's good. You said EBS and it

01:21:05
reminds you of spinning disk, at

01:21:07
least not floppy

01:21:07
disk. That's good.

01:21:10
No, I mean, EBS

01:21:12
is a good solution.

01:21:14
It's just kind of expensive when

01:21:17
you need high performance storage.

01:21:19
I think that is the trade-off you

01:21:21
have to understand.

01:21:23
Yeah, cool. We're at the 20

01:21:26
minutes mark. Unfortunately

01:21:27
already, and there are so many more questions.

01:21:32
Thank you very much for being

01:21:33
here. I'm happy to have you back

01:21:35
on the show at some point. There's

01:21:37
so much more to talk about.

01:21:40
20 minutes is really short.

01:21:42
Yeah, it is. For people having

01:21:46
questions, happy to forward

01:21:48
questions to Franck. Apart from

01:21:50
that, what's your Twitter handle

01:21:52
or whatever you need to say?

01:21:54
My first name and last name. I'm

01:21:56
easy to find on Google.

01:21:59
All right. We'll put a couple of

01:22:01
things in the notes. Yeah, thank

01:22:06
you very much for being here and

01:22:08
hope to see you again.

01:22:10
Thank you.