Giant Robots Smashing Into Other Giant Robots

418: Aitomatic with Christopher Nguyen

April 14th, 2022

Christopher Nguyen is the CEO of Aitomatic, which provides knowledge-first AI for industrial automation.

Chad talks with Christopher about why a physical sciences background matters for this work, why we still need people if we have artificial intelligence, and working in knowledge-first AI instead of knowledge-second, knowledge-third, or no knowledge at all. Data reflects the world.


Transcript:

CHAD: This is the Giant Robots Smashing Into Other Giant Robots Podcast, where we explore the design, development, and business of great products. I'm your host, Chad Pytel. And with me today is Christopher Nguyen, CEO of Aitomatic, which provides knowledge-first AI for industrial automation. Christopher, thanks for joining me.

CHRISTOPHER: Thank you.

CHAD: So I was prepping for this interview, and I noticed something that jumped out at me that we have in common, and that is your first computer was the TI-99/4A.

CHRISTOPHER: No kidding.

CHAD: And that was also my first computer.

CHRISTOPHER: Oh, okay.

CHAD: [laughs]

CHRISTOPHER: You got no storage, correct?

CHAD: No storage; everything was off of the solid-state cartridges. And I remember I was a little late to it. My parents actually got it for me at a garage sale; I think I was 9 or 10. And so all I had was the manual and the BASIC manual that came with it. And because it had no storage, I needed to type in the programs that were in the back of that book from scratch, and there was no way to save them. So you would type them in -- [laughs]

CHRISTOPHER: Oh my God. Every single day the same code over and over again. And hopefully, you don't turn it off.

CHAD: [laughs] Exactly. There definitely were times where it would just be on in my room because I didn't want to lose what I had spent all day typing in.

CHRISTOPHER: Yeah, yeah, I remember my proudest moment was when my sister walked into the living room...and there was no monitor; you connected it directly to the TV.

CHAD: To the TV, yeah.

CHRISTOPHER: And younger people may not even know the term character graphics, which is where you define custom characters and then put them together into a graphic image. And I painstakingly, on graph paper, created a car, converted it to hex, and then poked it into these characters and put them together. And my sister walked in like, "Oh my God, you made a car."

[laughter]

CHAD: That was a good time. It was difficult back then, but I feel like I learned a lot in that environment. When I see people learning today, it's a much more complicated environment. They're much higher up the stack than we were back then. And, I don't know, I feel like I actually sort of had it easy.

CHRISTOPHER: Well, in many ways, that very abstraction...you see, people like to talk about higher software abstraction making you more productive. I think it's absolutely that powerful. And Marc Andreessen, my friend, likes to talk about how software is eating the world. But it turns out there's one perspective where people have gone up the stack a little too far, too fast, and too much. The industry I work in is still physical.

You know, our previous company was acquired by Panasonic. And I've been working on industrial AI for the last four and a half years. And it's very hard for us to find people with the right physics or electrical engineering background and the right science understanding to help automate and build some of these systems because everybody's in software now.

CHAD: Why does a physical sciences background matter for this work?

CHRISTOPHER: Let me give you a couple of examples. One of our customers is a very large global conglomerate doing marine navigation and marine sensors. And one of the products they make is fish finding, so that amateurs like you and me can hold one of these systems and shoot a sonar beam straight down into the ocean, kind of like submarines do. And hopefully, an image comes back. So to build a system that converts all of that into something other than jumbled echograms, as they're called, maybe into a fish image, you have to build a lot of machine intelligence, AI, machine learning, and so on.

But just to understand the data and make the right decisions about how to do that, you need to understand the physics of sound wave echoes in the ocean. If you can't do that and you have to work with another engineer who tells you how, it really slows things down a lot. So knowing the equations, but also having a physical intuition for how it all works, can make or break the success of an engineer working on something like that.
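The arithmetic underneath echo sounding is worth seeing, even though real echogram processing is far more involved. Here is a minimal sketch in Python, using the textbook figure of roughly 1,500 m/s for the speed of sound in seawater; the numbers and names are illustrative, not Furuno's actual processing:

    # Depth from a sonar echo: the pulse travels down and back,
    # so the one-way distance is (speed x round-trip time) / 2.
    SPEED_OF_SOUND_SEAWATER_M_S = 1500.0  # textbook value; varies with temperature and salinity

    def echo_depth_m(round_trip_s: float) -> float:
        """Estimate target depth in meters from a round-trip echo time in seconds."""
        return SPEED_OF_SOUND_SEAWATER_M_S * round_trip_s / 2.0

    # An echo returning after 0.2 seconds implies a target about 150 meters down.
    print(echo_depth_m(0.2))  # 150.0

Knowing where that 1,500 comes from, and when it drifts, is exactly the kind of physical intuition he is describing.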

Another example is avionics; we worked on that too. Don't blame me for this, but if you've had a poor experience with Wi-Fi on a plane, we may be involved in one way or another: Panasonic Avionics.

CHAD: [laughs]

CHRISTOPHER: But the antenna array that sits on top of the plane receives the satellite signal and sends a signal back, so you can expect there's some kind of optimization involved. It's not just line of sight. If a cloud comes nearby, there's some distortion, and some optimization needs to take place. Again, an understanding of college physics, antenna radiation patterns and so on, at least remembered if not at an expert level, helps a data scientist or an engineer working on that problem tremendously, whereas somebody who's a pure computer scientist would struggle a lot and probably give up on that problem.
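For a flavor of the physics he means: even the simplest textbook antenna has a strongly directional pattern. A minimal sketch, using the ideal short dipole's normalized power pattern (a standard result, and purely illustrative; real phased-array avionics antennas are far more complex):

    import math

    def short_dipole_power(theta_rad: float) -> float:
        """Normalized radiated power of an ideal short dipole at polar angle theta.
        Power peaks broadside to the antenna and vanishes along its axis."""
        return math.sin(theta_rad) ** 2

    # Power falls off sharply away from broadside (90 degrees).
    for deg in (0, 30, 60, 90):
        print(deg, round(short_dipole_power(math.radians(deg)), 2))
    # 0 0.0 / 30 0.25 / 60 0.75 / 90 1.0

An engineer who can picture that pattern immediately sees why pointing and signal optimization matter; one who cannot is, as he says, likely to struggle.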

CHAD: Yeah, this may be a little bit of a facetious question or leading question; I'm not sure which, but if we have artificial intelligence, why do we need people to do this stuff?

CHRISTOPHER: [laughs] Well, I've thought about that a lot. And I'll answer it in the broad sense, but I think you can specialize it. The problem with machine learning, at least today, and I really think for a very long time, the rest of the century at least, is that it is trained on data. And data is past examples. And when I say past, I include the present. In other words, whatever it is that our algorithms learn, they learn the world as it is.

Now, we're always trying to change the world in some way. We're always trying to change the world to what we wish it to be, not what it is. And so it's the humans that express that aspiration. I want my machine to behave better in some way. Or I want my algorithms not to have this built-in bias when it makes a decision that affects someone's life.

If it's pure machine learning and data, it will indeed reflect all the decisions that have ever been made, and it'll have all those built-in biases. So there's a big topic there to unpack and who's responsible for doing what. But I think coming back to your question, we'll always need humans to express what it is that is the world that you want in the next minute, the next day, the next week, or the next 50 years.

CHAD: So let's talk more about the ethics or the biases that can be baked into AI. How do you prevent that at Aitomatic?

CHRISTOPHER: As I said, this is a big topic. But let me begin by saying that actually, most of us don't know what we mean when we say bias, or to put it more broadly, we don't agree on the meaning. On the one hand, the word bias in colloquial conversation always comes with a negative connotation. On the other hand, in machine learning, bias is inherent. You cannot make machine learning work without bias. So clearly, those two words must mean something slightly different even though they reflect the same thing, the same underlying physics, if you will.

Before people get into what they think is a very well-informed debate, they must first agree on a framework for the terms they're using. Now, of course, I can accommodate and say, okay, I think I know what you mean by that term. And so, let's take the colloquial meaning of bias. When we say bias, we usually mean some built-in prejudice, implicit or explicit, that causes a human or machine to make a decision that discriminates against someone.

And here's the thing, we've got to think about intent versus impact. Is it okay for the effect to be quote, unquote, "biased" if I didn't intend it, or does it not matter what my intent was, and it's only the impact that matters? That's another dimension that people have to agree on, or even agree to disagree on, before they start going into these circular arguments. But let's say, for now, that it's the impact that matters. It doesn't matter what the intent is, particularly because with machines, at present, there is no intent.

So, for example, when the Uber vehicle a number of years ago hit and killed a bicyclist, there was no traceable intent, certainly not in the system design, to cause that to happen. And yet it happened, and the person did die. So coming back to your question, I know that I've deferred the question because I'm unpacking a lot of things without which an answer would make no sense, or it would not carry the sense I intend.

So coming back, how do we prevent bias as an effect from happening in our system? An answer that I would propose is to stop thinking about it in terms of point answers; in other words, it's not one single thing. People say, well, I myself even said earlier, it's in the data. Well, if it's in the data, does that absolve the people who build the algorithms? And if it's in the algorithms, does that absolve the people who use it? I had a conversation with some friends from Europe, and they said, "In America, you guys are so obsessed with blaming the user." Guns don't kill people; people kill people.

But I think to answer your question in a very thoughtful manner, we must first accept the responsibility throughout the entire chain and agree on the outcome that we want to have, or at least the effect. And then the responsibility falls on all parts of the chain. One day, it may be, hey, you've got to tune the algorithm a certain way. Another day, it may be, hey, collect this kind of data.

And another day, it might be, make sure that when you finally help with the decision, you tweak it a certain way to effect the outcome that you want. I think what I've described is the most intellectually honest statement. And somebody listening to this is going to have a perspective that disagrees vehemently with one of the things I just said because they don't want that responsibility.

CHAD: I like it, though, because it recognizes that we're creating it. It may be a tool, and tools can be used for anything. But as the creators of that tool, we do have responsibility for...well, I think we have responsibility for what that is going to do, and if not us, then who?

CHRISTOPHER: That's right. Yeah. But if you follow the debate, you will find that there are absolutists who say, "That's not my problem. That's the user, or the decision-maker, or the data provider. But my algorithms I have to optimize in this way, and it's going to output exactly what the data told it to. The rest is your problem."

CHAD: So it strikes me, in hearing you describe what's involved, especially at the state that machine learning is at now, that what you're going to do specifically probably varies based on what you're trying to achieve. And maybe even the industry that it's in; what you need to do in avionics may be different than in energy.

CHRISTOPHER: Yep, or more broadly, physical industries versus digital ones. If the plane falls out of the air, or a car hits somebody, somebody actually dies. If you get a particular algorithm wrong at Google, maybe you click on the wrong ad. So I really advocate thinking about the impact and not just the basic algorithms.

CHAD: Yeah, so tell me more about the actual product or services that Aitomatic provides and also who the customers are.

CHRISTOPHER: I think what we discussed is quite relevant to that; it leads directly into it. We do what's called knowledge-first AI. And that's knowledge-first as opposed to knowledge-second, knowledge-third, or no knowledge at all. There are very strong schools of thought that say, "With sufficient data, we can create AI to do everything." But data reflects the world as it is. As I mentioned, it's the past and the present as they are, not what we want them to be.

When you apply it to some of the concrete things that we do, let's take a use case like predictive maintenance of equipment. You want to be able to save costs and even save lives. You want to replace things, service things, before they actually fail. Failure is very costly, far more costly than the equipment itself. Today, the state of the art is preventive maintenance, not predictive. Preventive means, let's say, every six months or every year we replace all the lights because it's too costly to replace them one by one as they fail.

Lots of industries today still do what's called reactive maintenance, you know, fix it when it fails. So predictive maintenance is the goal. The challenge is how do you get data and train enough machine intelligence to essentially predict? And prediction precisely means the following: can you tell me, with some probability, that this compressor for this HVAC system, this air conditioning system, may fail within the next month? And it turns out machine learning cannot do that.

CHAD: Oh, that's the twist.

CHRISTOPHER: Exactly. [laughter] And I know a lot of people listening are going to sit up and say, "Christopher doesn't know what the hell he's talking about."

CHAD: [laughs]

CHRISTOPHER: But I really know. I really know what the hell I'm talking about because we've been part of an industrial giant. I'll tell you what machine learning can do and what it cannot do. What it can do is limited by the data that's available. The main punch line, the main reason here, is that there are not enough past examples of actual failures of certain types.

There's a lot of data. We're swimming in data, but we're not actually swimming in cleanly recorded failures that are well classified. And machine learning is about learning from past examples, except today's algorithms need a lot of past examples, tens of thousands, hundreds of thousands, or even millions, in order to discover those repeating patterns.

So we have a lot of data at places like Panasonic, Samsung, Intel, GE, all the physical industries, but it's mostly sensor data recording normal operation. Failures, hopefully, are rare, and when they happen, they're very specific. So it turns out that what's called the labeled data is insufficient for machine learning.

So what machine learning can do is what's called anomaly detection. And that is: look at all the normal patterns, and then, when something abnormal appears on the horizon, say, "Hey, something is weird. I haven't seen this before." But it cannot identify what it is, which is only half of predictive maintenance, because you have to identify what the problem is so you can replace that compressor or that filter. And it turns out humans are very good at that. Human experts are very good at that second part.
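As a rough illustration of the anomaly-detection half he describes, here is a minimal sketch using scikit-learn's IsolationForest on made-up sensor readings; the data and thresholds are invented for illustration and have nothing to do with Aitomatic's actual system:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Sensor data from normal operation: (pressure, temperature) pairs.
    rng = np.random.default_rng(0)
    normal_readings = rng.normal(loc=[100.0, 21.0], scale=[2.0, 0.5], size=(500, 2))

    # Train on normal operation only; labeled failures are too rare to learn from.
    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_readings)

    # predict() returns 1 for normal-looking points and -1 for anomalies.
    print(detector.predict([[100.5, 21.2], [130.0, 28.0]]))  # expect [ 1 -1]

Note what the detector does not tell you: that the second reading means a failing compressor. That identification is the half that still needs a human expert or codified knowledge.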

The first improvement might be to say, let's get machine learning to detect anomalies, and then let's get human experts to actually do fault prediction. And after you do this for a while, which is what we did at Panasonic over the last three, four years across the global AI units, we said, "Well, wait a minute. Why are we making these very expensive human experts do this if we can somehow codify their domain expertise?" And so that's what Aitomatic is. We have developed a bunch of techniques, algorithms, and systems that run as SaaS software to help people codify their domain expertise, combine it with machine learning, and then deploy the whole thing as a system.

CHAD: The codified expertise, there's a word for that, right?

CHRISTOPHER: Probably you're referring to expert systems.

CHAD: Yes. Yes.

CHRISTOPHER: Yeah. Expert systems are one way to codify domain expertise. At the very basic level, you and I wrote actual BASIC programs before. You can think of that as codifying your human knowledge: you're telling the computer exactly what to do. So the expert systems of the past are one way to do so. But what I'm referring to is a more evolved and more advanced perspective on that, which is how do you codify it in such a way that you can seamlessly combine it with machine learning?

Expert systems and machine learning act like two islands that don't meet. But how do you codify human knowledge in such a way that you also benefit as more data comes in, asymptotically moving toward this world where data tells you everything? Which it never will. The naive way, as I mentioned, is simply to write it down as a bunch of rules. And the problem is rules conflict with each other.

We humans work on heuristics. You could be an expert, and you start teaching me, and you say, "Okay, so here are the rules." And then once I learn the rules, you say, "Well, and there are some exceptions." [laughs] And then, can you tell me all the exceptions? No, you can't. You have to use judgment. Okay, well, what is that? So the way we codify it, you can think of as that evolution. I'll give you one concrete example from the machine learning perspective so people who are machine learning experts can see how we do things differently. There's something in the machine learning process called the loss function. Have you heard of that term?

CHAD: No. Yeah.

CHRISTOPHER: So it's very simple. Training, which I'm sure everybody has heard of, is really about: how do I tweak the parameters inside the algorithm so that, eventually, it gives the correct answer? This process is repeated hundreds of thousands or millions of times. But let's say the first time, it gives you a random answer, and you know what the right answer should be. These are training examples.

So you compute an error. If you output a five and the answer is actually six, I say, "Oh, you're off by one, positive one," and so on. So there's a loss function, and in this case, it's simply a subtraction. And then that signal, that off-by-one, is fed back into the training system, which says, "Well, you were close, but you're off by one." And the next time, maybe you're off by 0.5; the next time, maybe you're off by -2, and so on and so forth. That value is computed by what's called a loss function. That's machine learning, because you have all these examples.
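In code, the error signal he describes is just a function of the prediction and the known answer. A minimal sketch, using squared error (the most common textbook choice) rather than the raw subtraction in his example:

    def squared_error_loss(predicted: float, actual: float) -> float:
        """Zero when the prediction is exactly right; grows with the size of the miss."""
        return (predicted - actual) ** 2

    print(squared_error_loss(5.0, 6.0))  # off by one -> loss 1.0
    print(squared_error_loss(5.5, 6.0))  # closer -> loss 0.25

Training tweaks the model's parameters, over many examples, to push this number down.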

Well, human knowledge can be applied as a loss function too. A simple example is that you don't have all the data examples, but you have a physical equation. If you throw a ball in the air, it follows a parabolic trajectory, and we can model that exactly with a parabolic equation, assuming no air resistance. That is a way to produce the correct answer. And so, we can apply that equation back as a loss function to encode that human knowledge.

Of course, things are not always as simple as a parabolic equation. But a human expert can say, "The temperature on this can never exceed 23. If it exceeds 23, life is going to end as we know it because you're going to have a disaster." You can put into the loss function an equation that says: if your prediction is greater than 23, give it a very high loss. Give it a very strong signal that this cannot be. And so your machine learning model being trained gets that signal coming back and adjusts its parameters appropriately. So that's just one example of how we codify human knowledge in a way that is more than just expert systems.
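A rough sketch of how such a rule could be folded into a loss function: the expert's "never exceed 23" becomes a heavily weighted penalty term added to the ordinary data loss. The function names and the weight are hypothetical, a sketch of the general technique rather than Aitomatic's actual code:

    def data_loss(predicted: float, actual: float) -> float:
        # Ordinary supervised learning signal: squared error against a labeled example.
        return (predicted - actual) ** 2

    def knowledge_penalty(predicted: float, limit: float = 23.0, weight: float = 1000.0) -> float:
        # Expert rule: the temperature can never exceed the limit.
        # Predictions past the limit incur a large, rapidly growing loss.
        overshoot = max(0.0, predicted - limit)
        return weight * overshoot ** 2

    def total_loss(predicted: float, actual: float) -> float:
        # Training minimizes data error while respecting the expert constraint.
        return data_loss(predicted, actual) + knowledge_penalty(predicted)

    print(total_loss(22.0, 21.5))  # within the limit: small loss (0.25)
    print(total_loss(25.0, 21.5))  # violates the rule: dominated by the penalty (4012.25)

Because the penalty feeds back through the same training loop as the data loss, the model learns the constraint even where labeled examples are missing, which is the point of combining knowledge with machine learning as he describes it.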

CHAD: That's really cool. Now, is there a way, once you have the system up and running and it is making decisions, to then feedback into that cycle and improve the model itself?

CHRISTOPHER: Oh, absolutely, yeah. There's a parallel between what I described during training and what happens while it's in production, both in real time, meaning one example at a time, as well as in batch after you've done a bunch of these. In fact, the first successful predictive maintenance system we deployed when we were part of Panasonic employs a human being in the feedback loop.

So our system would try to learn as much as it can and then try to predict the probability of failure of some piece of equipment. And the human being at the other end would say, "Okay, yeah, that looks reasonable." But a lot of times, they would say, "Clearly wrong. Look at this sensor over here. The pressure is high, and you didn't take that into account." So that's a process we use both to improve the output itself and, through the feedback, to improve our predictive AI.
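A simplified sketch of that human-in-the-loop pattern: every expert verdict, confirmation or correction, is banked as a new labeled example for the next training round. The structure and names here are hypothetical, not the deployed Panasonic system:

    # Labeled examples accumulate as experts review the model's predictions.
    labeled_examples = []

    def expert_review(sensor_reading: dict, predicted_fault: str, expert_fault: str) -> bool:
        """Record the expert's verdict as training data; report whether the model agreed."""
        labeled_examples.append((sensor_reading, expert_fault))
        return predicted_fault == expert_fault

    # The expert overrides the model: high pressure points to the compressor.
    agreed = expert_review({"pressure": 130.0, "temp": 22.0},
                           predicted_fault="filter", expert_fault="compressor")
    print(agreed, len(labeled_examples))  # False 1

Over time, this is one way the scarce failure labels he mentioned earlier get collected at all.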

Mid-Roll Ad

I wanted to tell you all about something I've been working on quietly for the past year or so, and that's AgencyU. AgencyU is a membership-based program where I work one-on-one with a small group of agency founders and leaders toward their business goals.

We do one-on-one coaching sessions and also monthly group meetings. We start with goal setting, advice, and problem-solving based on my experiences over the last 18 years of running thoughtbot. As we progress as a group, we all get to know each other more. And many of the AgencyU members are now working on client projects together and even referring work to each other.

Whether you're struggling to grow an agency, taking it to the next level and having growing pains, or a solo founder who just needs someone to talk to, in my 18 years of leading and growing thoughtbot, I've seen and learned from a lot of different situations, and I'd be happy to work with you. Learn more and sign up today at thoughtbot.com/agencyu. That's A-G-E-N-C-Y, the letter U.

CHAD: So on the customer side, whether you can share specific customers or not, what kinds of companies are your customers?

CHRISTOPHER: I've mentioned a number of them in passing; Panasonic is one of our customers. When I say Panasonic, Panasonic is a global giant, so it's run as individual companies, for example, avionics, automotive, cold chain. Take how a fish gets from the ocean to your table: Panasonic has a big market share in making sure that everywhere in that chain the fish is refrigerated. It's called the cold supply chain, or cold chain. Supermarkets' refrigeration systems keep our food fresh, and if those go down in an unplanned manner, they lose entire days or weeks of sales.

I mentioned the example of Furuno, F-U-R-U-N-O. If you go to some marina, say Half Moon Bay, California, you'll see that most of the navigation equipment on the masts is Furuno, the white and blue logo. So we help them with those systems and with fish-finding systems, as well as, off the coast of Japan, a practice called fixed-net fishing. What that is is miles and miles of netting. Large schools of fish swim through different gates, from A into B. And once they get to B, it's set up in such a way that they cannot go back to A. But it's so large that they feel like they're still swimming in the open ocean, and eventually they end up in trap C.

And so Furuno is working on techniques to both detect what kind of fish are flowing through and actually count or estimate their number so the fishermen can determine exactly when to go and collect their catch. I could go on. There are lots of these really interesting physics-related and physical use cases.

CHAD: So is Aitomatic actually spun off from Panasonic?

CHRISTOPHER: Spin-off, I think legally speaking, that is not the correct term because we're independent. Panasonic does not own shares. But in terms of our working relationship as customer and vendor, it's as good as it ever was.

CHAD: What went into that decision-making process to do that?

CHRISTOPHER: To do the so-called spin-off?

CHAD: Yeah.

CHRISTOPHER: Lots of things.

CHAD: I'm sure it was a complicated decision. [laughs]

CHRISTOPHER: Like we used to say at Google about deciding where to put a data center, lots of things have to intersect just the right way, including the alignment of the stars. In our case, it's a number of things. Number one, the business model: at a very high level, it makes a lot of sense for us to be an independent company. If we're a small unit inside a parent company, the business incentives are very different from a startup's; that's one. And the change is positive for both sides.

Number two, venture capital: as you know, today, once you're an independent company, you can access capital at a scale that even a global giant doesn't have the same model to fund. Number three, certainly, the scope of the business we want to build: everything that I've talked about here is actually an open-source project.

We have something called human-first AI, not just knowledge-first, and being able to put it out into open source and have other people contribute to it is much easier as an independent startup than as a business unit. And then finally, of course, aspirations: myself and the rest of the team, we can move a lot faster. People are more passionate about the ownership of what they do. It's a much better setup as an independent company.

CHAD: Were there things from Panasonic, either in culture or the way that the business works, that even though you had the opportunity to be independent, you said, "Hey, that was pretty good. Let's keep that going"?

CHRISTOPHER: Well, I can comment on the culture of Panasonic itself. It's something that I was surprised by. The company is 100 years old; the anniversary was in 2018, and I gave a talk in Tokyo. A 100-year-old Japanese conglomerate might seem very stodgy, and, sorry to say, in many ways, it is. But I was very impressed. And I say this as a headline in cocktail conversations: the culture of engineering at Panasonic is far more like the Google that I knew than it is different; in other words, very little empire-building. People are very engineering-driven. There are a lot of cordial discussions and so on when people go into a meeting. I was very impressed by this.

The Japanese engineers at Panasonic were always really well prepared. By the time they got to the meeting, even though they were, in this context, our customers, they would come with a slide deck, like 30 slides, talking through the entire use case. And they had thought about this, thought about that. And so I'm sitting there just absorbing it, just learning the whole thing. I really enjoyed that part of being part of Panasonic. And many of those folks are now lifelong friends of mine.

CHAD: And so that's something that you've tried to maintain, that engineering-focused culture and great place to work.

CHRISTOPHER: Well, when we were acquired by Panasonic, both Tsuga-san, the CEO, and Miyabe-san, the CTO, said the following: "We want you to infect Panasonic, not the other way around." [laughs] From their perspective, we had this Silicon Valley setup. And they wanted this innovation, a fresh startup, not just the algorithms but also the culture. And they were true to their word. We kept our own unit and its office in downtown Mountain View. And folks were sent in to pick up our ways and means. What I enjoyed, the part that I just shared with you, is what I didn't expect to learn but did learn, in retrospect.

CHAD: As you set out on everything you want to achieve, what are you worried about? What do you think the biggest hurdles are going to be that you need to overcome to make a successful business, successful product?

CHRISTOPHER: Well, I've done this multiple times. So people like to say, "You've seen this movie before," but of course, every movie is told differently, and the scenes are different, the actors are different, and so on. Of course, the times are different. So concretely, our immediate next hurdle: you have to have proof points along the way. We've got good revenue already. As a startup less than one year old, we have unusually good revenue, mainly because of our deep relationships in this particular industry.

The next concrete proof point is a series of metrics that says we have good product-market fit. And, of course, product-market fit means more than just a great product idea. It's a great product idea executed in a way that the market wants in the next quarter, not ten years from now. So product-market fit is that iteration, and we're quite fortunate to already have customers, what we call design partners, that we work with. Hearing from that diverse set gives pretty good confidence that if they want it, then other people will want it as well.

And then after that, after in timing but already in the doing now, is scaling our sales efforts, our sales volume, beyond just the founder-led sales that we currently have: building the sales team and so on. These are things that I will say are generally understood, but they still have to be done; you've just got to sweat it. It doesn't happen automatically. I think the much bigger challenge that I see, and maybe it's an opportunity depending on how you think about it, is what I'll call a cultural barrier. Silicon Valley, in particular the academic side of us...and you may know I used to be a professor, so when I say academic, I'm talking about myself as well. So any criticism is self-directed.

CHAD: [laughs]

CHRISTOPHER: We tend to be purists. The purism of today, if I can use that term, is data. And so, whenever I talk about knowledge-first AI, it offends the sensibilities of some people. They say, "You mean you're going back to expert systems. You mean you're not going to be extolling the virtues of machine learning," and so on. And I have to explain: data is nice if you have it, but 90% of the world doesn't have the data. And you do need to come up with these new techniques to combine human knowledge with machine learning.

We look forward to being the vanguard of that revolution, if you will. Some say maybe it's a step backward; I think of it as a step forward: really harmoniously combining human knowledge and machine data to build what we call AI systems, these powerful systems that we're purporting to build. And that's almost directly at odds with the school of thought where people say, "Eventually, we'll have all the data." [laughs] And maybe, as you suggested at the beginning, we don't need humans anymore. I will fight that battle.

CHAD: The customers that you talked about, a lot of them seem to be pretty big enterprises. So as you talk about scaling sales beyond the founder-led sales that you're doing now, are you continuing to sell to enterprises? Or do you ultimately envision the product being accessible to any company?

CHRISTOPHER: Well, I would say both. But I say that in a very careful sense because it's very important to build businesses with focus. So let me break down what I mean by both, not just as some ambitious "A and B." We will focus on enterprise as a matter of business. The reason for that is, A, that's where the money is, but B, more importantly, it's also where the readiness is. It's amazing; it's been a decade since that first New York Times piece, what I call the cats paper, about the Google Brain project.

We've gone through a decade of the hype and everything, but this vast physical industry, the industrials of the world, is ready. When I say ready, it means that people are now sophisticated. They don't look at it with wide eyes and say, "Please sprinkle a little bit of AI on my system." They have teams, and they can benefit from what we do at the scale of what I've just described. But the reason I say both is because, quite happily, it is an open-source project.

Our roadmap is designed with our design partners, but once it's out there, the system can be contributed to by others. The nature of open source is such that people tend to use it more than contribute. That's fine. So I think a lot of the smaller companies and smaller teams, once they overcome this cultural barrier of applying knowledge as opposed to pure data, I think they can really take advantage of our technology.

CHAD: I'm glad you segued there because I was going to bring us there, too. That open source that you've made available: was it ever a question whether you could build a business while also open-sourcing the software behind it?

CHRISTOPHER: It was absolutely a question 10 years ago. The industry has evolved. Now, you and I talked about the TI-99/4A; back then, I was already writing what was called public domain software, before the term open source existed. Ten years ago, CIOs would say, "Why would I do away with the relationship with a big company like a Microsoft or an Oracle in favor of this unreliable, unknown open source?" It turns out, as we now look back, it had nothing to do with the business model; it was the immaturity of open source.

Today, it is the opposite. Today, people don't worry about lock-in with a vendor whose source code they don't have. But I think equally important, source code is no longer a competitive advantage. Let me say that again. Source code is no longer the intellectual property. CIOs today want the peace of mind that if some company locks them out or becomes defunct, their engineers still have access to that source code so that they can build it themselves. But that is not the real value.

Amazon with AWS, Microsoft with Azure, and Google with GCP have proven that people are very willing to pay for experts to run these systems operationally so that they can concentrate on what they do best. So every month, we're sending checks to AWS. They're running something that my team could easily run, but probably at a much higher cost. And even at cost parity, I would rather my team members focus on knowledge-first AI than on the running of an email system or the running of some compute.

So likewise, the value that our customers get from us is not the source code. They're very willing for us to run this big industrial AI system so that they can focus on the actual work of codifying their expert knowledge. And by the way, I probably gave too long an answer to that. Another way is simply to look at the public market; there are very well-rewarded companies that are entirely open source.

CHAD: Yeah. Well, thank you. That was great. Thank you for stopping by and sharing with me. I really appreciate it. If folks want to find out more about Aitomatic or get in touch with you or follow along, where are all the places that they can do that?

CHRISTOPHER: I think the website, aitomatic.com. It's just like automatic, except that it starts with A-I. So I think the website is a great place to start and to contact us.

CHAD: Wonderful. Thank you again.

CHRISTOPHER: Awesome. Thank you.

CHAD: You can subscribe to the show and find notes for this episode at giantrobots.fm. If you have questions or comments, email us at hosts@giantrobots.fm. And you can find me on Twitter @cpytel.

This podcast is brought to you by thoughtbot and produced and edited by Mandy Moore. Thanks for listening, and see you next time.

ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
