One-Person Enterprise
24 Jan 2025 09:15h - 10:00h
Session at a Glance
Summary
This World Economic Forum panel discussion focused on the emergence of “one-person enterprises” enabled by AI and technology. Experts explored how entrepreneurs can now leverage AI to build nimble, high-performing companies with minimal human resources.
Panelists discussed the potential for AI agents to automate tasks across industries, from drug development to software engineering. While acknowledging AI’s transformative potential, they debated its impact on jobs and society.
Some viewed AI as primarily augmenting human capabilities and creating new opportunities, while others warned of significant job displacement, especially for entry-level positions. The panel emphasized the need for upskilling and rethinking education to prepare for an AI-driven future.
They also stressed the importance of responsible AI development and deployment, with transparency and accountability in how AI agents are integrated into organizations. Differing views emerged on the timeline for truly autonomous one-person enterprises, with estimates ranging from already possible to decades away.
The discussion highlighted utopian visions of AI freeing humans to focus on meaningful work and relationships, as well as concerns about social disruption. Ultimately, the panel agreed that while AI will dramatically reshape business and entrepreneurship, human judgment and relationships will remain crucial in many areas.
Keypoints
Major discussion points:
- The impact of AI and automation on entrepreneurship and business structures
- The potential for “one-person enterprises” leveraging AI to scale rapidly
- Concerns about job displacement and the need for reskilling as AI advances
- The importance of responsible AI development and deployment
- Differing views on the timeline and extent of AI’s transformative impact
Overall purpose:
The goal of this discussion was to explore how AI and automation are changing entrepreneurship, business models, and the workforce, as well as to consider both the opportunities and challenges presented by these technological advancements.
Tone:
The tone was primarily optimistic and excited about the potential of AI, but became more cautious and nuanced as panelists discussed potential downsides like job displacement. There was a mix of utopian visions of AI-enabled efficiency and more measured perspectives on the pace and impact of change. By the end, there was general agreement that while AI presents immense opportunities, responsible development and deployment will be crucial.
Speakers
– Dan Murphy: CNBC journalist, moderator of the session
– Richard Socher: CEO of You.com, founder of AIX Ventures, former chief scientist of Salesforce
– Sarah Franklin: CEO of Lattice, a people management platform
– Mitchell Green: Founder and managing partner of Lead Edge Capital
– Kanjun Qiu: CEO and co-founder of Imbue
– Benjamine Liu: CEO and co-founder of Formation Bio
Additional speakers:
– Audience member: Dan Vedat, founder and CEO of Huma (healthcare AI company)
Full session report
The World Economic Forum panel discussion on “one-person enterprises” enabled by AI and technology brought together experts to explore the transformative potential of AI in entrepreneurship and business. The panel, moderated by CNBC journalist Dan Murphy, featured insights from industry leaders including Richard Socher (CEO of You.com), Sarah Franklin (CEO of Lattice), Mitchell Green (Lead Edge Capital), Kanjun Qiu (CEO of Imbue), and Benjamine Liu (CEO of Formation Bio). Notably, Mitchell Green’s background as a former nationally ranked alpine ski racer was mentioned in the introduction.
Impact of AI on Entrepreneurship and Business Models
The panelists unanimously agreed that AI fundamentally changes how businesses operate, allowing for more efficient scaling with fewer human resources. Kanjun Qiu conceptualized AI agents as customizable software interfaces that empower users to interact with computers in novel ways. She provided examples of junior software developers learning from AI, shifting the discussion towards viewing AI as a tool for empowerment rather than replacement.
Benjamine Liu introduced the concept of “AI native” companies that scale using AI rather than human resources. He provided a concrete example from drug development, where an AI system with human oversight now accomplishes in hours what previously took teams two months. This dramatic efficiency gain illustrated both AI’s potential and the continued importance of human involvement.
Mitchell Green highlighted how AI allows companies to operate with minimal physical infrastructure. While his fund has not backed any AI-specific companies, he noted that the companies in his portfolio are deeply involved in AI. Sarah Franklin described a new age of human-agent collaboration, while Richard Socher suggested that every employee, not just CEOs, will become a manager of AIs. Socher also shared an example of Mimecast realizing the potential of AI for various departments after a workshop.
Job Displacement and Societal Impact
The discussion acknowledged the significant potential for job displacement due to AI, particularly for entry-level positions. Benjamine Liu emphasized this concern, especially for young people, while Richard Socher argued that AI would create new types of jobs we cannot yet anticipate. This difference in perspective highlighted the uncertainty surrounding AI’s long-term impact on employment.
Sarah Franklin stressed the crucial need for reskilling and upskilling as AI advances, noting that the pace of change is “exponentially fast” and that current reskilling efforts may not keep up. She shared a personal example about her concerns for her daughter’s future, drawing a parallel with previous technological revolutions and emphasizing the need for proactive measures to address potential negative consequences.
The panel explored the possibility of AI enabling shorter work weeks and more leisure time, though opinions varied on the likelihood and timeline of this scenario. An audience member, Dan Vedat, cautioned that AI might be overhyped and that adoption could be slower than expected, providing a counterpoint to the generally optimistic outlook.
Responsible Development and Use of AI
The importance of responsible AI development and deployment was a recurring theme. Sarah Franklin emphasized the need for companies to be transparent about AI use and hold themselves accountable. She also took a strong stance against using AI for employee performance evaluations, highlighting an area where human judgment remains crucial. Franklin further elaborated on her perspective regarding AI in talent management and recruitment.
Kanjun Qiu introduced the concept of “resistibility” in software and AI, emphasizing the importance of user control and customization. She argued for the need to make software not just creatable but also editable and remixable, allowing users more control over their digital environment.
The panel agreed that human oversight remains necessary for important decisions and that relationship-building should remain a human activity. This consensus underscored the view that AI should augment rather than replace human capabilities in many areas.
Future of One-Person Enterprises
Views on the timeline for truly autonomous one-person enterprises varied widely among the panelists. Kanjun Qiu suggested that one-person billion-dollar companies are already emerging in some sectors, while Mitchell Green was more skeptical, stating that public one-person companies are unlikely in the near future.
Despite the enthusiasm for AI-enabled one-person enterprises, there was an unexpected consensus that human co-founders and larger teams will still be valuable or necessary, especially for public companies. This nuanced view highlighted the complexity of AI’s impact on business structures.
Sarah Franklin speculated on the possibility of future AI-run companies or even nations, while Benjamine Liu maintained that co-founders will likely remain important despite AI capabilities. The varying timelines proposed by the panelists, ranging from already possible to decades away, reflected the uncertainty surrounding the pace of AI advancement and adoption.
Conclusion
The discussion highlighted utopian visions of AI freeing humans to focus on meaningful work and relationships, as well as concerns about social disruption. While the panel agreed that AI will dramatically reshape business and entrepreneurship, they emphasized that human judgment and relationships will remain crucial in many areas.
The conversation evolved from initial excitement about AI capabilities to a more nuanced exploration of its societal implications. Key unresolved issues included balancing AI adoption with potential job losses, the long-term implications for work hours and leisure time, and how to effectively regulate AI use in businesses and society.
As the first generation of leaders managing both people and AI, the panelists emphasized the need for new skills, adaptive organizational structures, and a focus on empowering users rather than just discussing AI capabilities. The discussion ultimately underscored the transformative potential of AI while highlighting the need for careful consideration of its implementation to ensure it empowers rather than displaces humans.
Session Transcript
Dan Murphy: Hello and welcome to this World Economic Forum special event. My name is Dan Murphy from CNBC and I'm thrilled to be leading this Friday morning session. Welcome to everyone joining us online and welcome to all of you in the room. Thank you so much for being here. The next session is called the One Person Enterprise. It's fair to say that technology has now evolved to the point where businesses no longer need to rely on traditional structures of large teams and physical infrastructure to scale. Instead, entrepreneurs today are leveraging technology to build nimble, high-performing companies, sometimes with just one person at the helm. So what does this mean for the future of entrepreneurship, talent strategy, capital raising, and the very notion of what it means to be an entrepreneur? Well, joining us is an expert panel of guests, including leaders from the forum's innovator community. Please join me in welcoming Kanjun Qiu, the CEO and co-founder of Imbue, which is pushing the boundaries of what AI can do to help humans make better business decisions. Benjamine Liu is the CEO and co-founder of Formation Bio, which is doing exciting things to speed up and streamline biotechnology and healthcare innovation. Mitchell Green is the founder and managing partner of Lead Edge Capital. Mitchell has backed some of the biggest tech success stories of the last decade. You might have seen him on CNBC or on the slopes as a former nationally ranked alpine ski racer. And Sarah Franklin is the CEO of Lattice. It is a people management platform which is changing how companies approach talent management and performance. We're also joined by Richard Socher. He's the CEO of You.com, which is a personalized, customizable search engine. He's also the founder of AIX Ventures and the former chief scientist of Salesforce. Ladies and gentlemen, thank you again for being here. Let's dive straight into the conversation. First of all, technology has evolved so rapidly now that entrepreneurs can essentially leverage it to build entire businesses from the ground up with minimal human resources. So how do you see this trend impacting the role of the entrepreneur?
Kanjun Qiu: Please begin. Yeah, so I think people often think that technology trends will manifest much faster than they do. And so what we see, we work on AI agents. And most people, they think of agents as, oh, you deploy an agent, it’s going to replace a bunch of work. But what we see is that it’s actually very hard to take an agent off the shelf and make it work well, especially in the enterprise. And so we’ve actually reconceptualized agents. It’s hard because of a few reasons. One, it doesn’t necessarily know and fit all of your workflows. Two, delegating to an agent is actually a very hard thing to do. And so as a human, I struggle to delegate to my team. And so we’ve actually reconceptualized agents as, what is an agent? It’s this interface. It’s a piece of software on your computer. It talks to you, talks to your computer, talks to your computer in code. So what is it? It lets me write code on my computer. And we find that that actually is a much more powerful view of agents because everyone implementing agents, they have to custom code stuff anyway. And so yeah, that’s what’s happening. And I think for the new businesses coming up, there’s going to be some really interesting stuff. But yeah, as the old businesses, you can only get so much efficiency.
Dan Murphy: Benjamin, what’s your view? Can I get your take as well?
Benjamine Liu: So I think we’re living in one of the most exciting eras to be building companies. We have PhD level intelligence in your pocket. And we’re beginning to see AI systems do the work of entire teams. And I think in that world, AI native companies have a pretty significant advantage. I’ll kind of give you an example. It used to take our teams, many teams, about two months to do a patient recruitment campaign in the process of drug development. You had teams that had to research a patient population, segment the different indications, put together IRB compliance, regulatory compliant, patient brochures, ads, things that kind of touch the patients. And now it’s one AI system with a human in the loop. And I think that has kind of profound impacts for all companies, but specifically really challenging implications for incumbents. It’s easy to adopt software or AI tools when it allows one person to do 10x more work. It’s really challenging when the system replaces entire sets of teams. And the common kind of refrain you’ll get, you know, is the systems aren’t ready and, you know, maybe just to agree with what you just stated, they’re not ready unless they’re trained and integrated in the right workflows kind of over time. But if you’re able to do that, you know, there’s profound kind of upside. But you’re asking, you know, a human to basically train something that, you know, all goes well, not only eliminates their job, but potentially kind of broader teams. And I think that’s an enduring theme that will only get, I think, more consequential as these AI tools and systems get better and better over time.
Dan Murphy: Consequential and perhaps dystopian as well. So we’re going to unpack that a little bit further. Mitch, give me your hot take.
Mitchell Green: No, so we, a global growth equity fund, we have not backed any AI-specific companies, although you might argue ByteDance, you know, is an AI company that's been using this stuff for 10 years to build it. That being said, every one of our companies in our portfolio is deeply involved in all the different aspects of using it. Now, a lot of it is trying and, you know, sometimes it doesn't work as well as they say it does, but it's a constant process of tweaking. And while we, you know, we do think, we tend to think we're living in an AI bubble, having invested in '98, '99. That being said, there will be giant companies that are created out of this. And like, we tend to think that people usually get overexcited in the near term about technological change, but they massively underestimate it long term. I think we're living through what will be the biggest, you know, tech revolution in the next, you know, 20 to 30 years.
Dan Murphy: The biggest tech revolution. Okay, really interesting. Sarah, what’s your take?
Sarah Franklin: So, as you hear from everyone, this is a very exciting time and AI is everywhere. Agents are everywhere. And this is a great new age of collaboration that we're moving into where humans are collaborating with agents. In July of last year at Lattice, we got ahead of this, because this also has very big societal implications for how we work together with AI agents in a way that puts the success of people as the primary. And so, as agents enter the workforce, as digital labor enters the workforce, the questions come about: how do we manage our agents? How do we interact with them? How do we keep ourselves transparent and accountable to the success of them? You may have an agent as a customer service representative. It could deflect a lot of cases. But are your customers still happy? We need to make sure that that happens. And we need to help humans work well together with agents. And this is what's most important, we think, an opportunity for leaders of people, because this is their north star: the success of people. And so this is very important for us with agents, to also think about how we prioritize the success of people with them in our workforce. Exactly, and where agents fit into the organizational structure as well, which is something that we're going to unpack.
Dan Murphy: While we get your microphone fixed up, let’s bring it over to Richard as well for his hot take. What’s your perspective on the question and how are you thinking about this issue?
Richard Socher: So we shipped agents at You.com about a year and a half ago. And our customers have created over 50,000 different agents already on the platform. And they told us that they've seen somewhere between 5% and 80% automation for some tasks. And people reacted to this very differently. Sometimes we have folks that sign up with their company email address. We reach out and say, hey, should your whole company maybe get on this platform? And some say, I just love this as my own superpower. I don't want everyone to know that I can create this much text and write these beautiful emails and reports so quickly. Some people say, I actually just work half the time. I don't want people to know that I am doing the same work in half the time. And then there's some executives who are like, yes, all my team should have this. And so it's a really interesting time, right? We as CEOs are going to be the first generation that manages people and AI. So we have learned to manage. But I think the most interesting change here is actually that every individual contributor, every employee, is going to become a manager of AIs. And in that sense, they're all, like everyone is gonna become kind of an entrepreneur. If you care about the output of things, then you're just happy that AI will allow you to have more output. The impact on jobs, of course, will be a different one, right, because if it's in sales, and you have twice the sales because you're twice as productive with your agents, you're just happy to make twice the sales. If in service, you have twice the productivity, or you have agents deflecting a bunch of cases, you might not need as much of your service staff. And so it's a really complex story of the whole impact.
Dan Murphy: So do you think we’re also barreling towards a future where founder is basically going to mean glorified AI wrangler?
Richard Socher: I think, yeah, we’re all going to be managers of AI. Every individual contributor will be. And that is one interpretation of the one company, solo entrepreneur. The other interpretation is, of course, you will have one person who will really run an entire unicorn company with just AI agents. And I think that will be a little bit further out. You’re going to have to have expertise. And I think as we all have to upskill our own employees and everyone else in the world has to upskill, we have to learn more and more discernment and evaluation of outputs of AI, understanding how it can quickly verify that the processes are working well with the AI versus creating the original work product myself.
Dan Murphy: Really interesting. I’m back over to Kanjun because you mentioned that you’re actually building these AI agents. So give me a real world example of how your technology helps entrepreneurs and businesses to make more informed decisions. And what do you see as the most disruptive force that will ultimately impact how founders operate in the future?
Kanjun Qiu: That's a great question. I think people talk about AI agents as if it's something new. And it kind of is, but really it's just fancy software. And it's fancy software that makes decisions a little bit more than previously and can deal with data that's a little bit more ambiguous. That's how we see it. We let people make fancy software. That's what our product does. It's in private alpha right now, and so I won't be able to talk about the impact yet, but maybe next year. Okay, very interesting, but we don't know where the technology is going to be next year as well, I think that's also fair to say. Yeah, I think there's actually some pretty good sense of where the technology will be next year, and I think in a lot of ways the software just gets fancier and fancier. Something where, you know, you were talking about how enterprises are using You.com to automate, you know, much of their workflows and putting their workflows into software. That's really what's happening, is that all of us are putting our workflows into software. Now the software can run our workflows, and that's where you get more and more automation and a single person being able to do more and more.
Dan Murphy: I wanted to go over to Benjamin as well because Formation Bio describes itself as an AI native pharma company. Just unpack that for me, and the reason I ask is because this AI native phrase is something that we hear more and more often now. It's something that a lot of startups actually claim. So how is it different to, say, Sergey Brin and Larry Page's experience building Google in a garage back in the 90s?
Benjamine Liu: Yeah, it's a great question, and maybe to start with: what is a tech company, and, you know, what is an AI native company? So I've always thought, kind of, a tech company is a company that scales nonlinearly in productivity and output relative to, kind of, humans, right. And an AI native company is a subset of that, where instead of scaling nonlinearly with humans, it actually just scales with AI, right, agents. Agents, models, systems with, kind of, a human in the loop until, frankly, you know, the human is out of the loop. And, you know, I think that's one of the most kind of profound shifts, and how this, you know, my opinion is a little bit different from, from kind of software, is that in the long run every company will look like, you know, a founder or, you know, some set of really important kind of decision makers with agents, models, infrastructure and the human in the loop. And almost kind of everyone's job is almost to train the systems that increasingly will, you know, frankly, displace a lot of current kind of human roles. The irony here, and there's a lot of dichotomies here, you know, in the short term, human talent is going to be extremely important. And specifically, the senior level talent and the folks with kind of judgment, right, and they understand what great looks like, what a great decision looks like. In our case, it's experienced drug developers who understand, you know, what is a great drug, how do you pick kind of the right drug, all these kind of things. In pharma, so much of what we do is knowledge work, right? So I kind of gave that patient recruitment example, but if, let's say we're thinking about a new obesity drug, you know, a big kind of task is researching all the current obesity drugs, where's the white space, how do you put together a drug development plan to go and develop a drug to tackle, you know, where there's still unmet need? That's something that these AI agents and systems are doing a really, really great job at, but with a very experienced human kind of expert in the loop. And so, you know, one of the things that we anticipate will happen is, you know, the kind of entry analyst kind of roles, you're kind of seeing this even in a Google and in a Microsoft, those are the roles getting displaced, because the companies are training the AI agents, right, versus actually the entry level employees. Increasingly, there's a premium on those folks that know how to really make those kind of higher order decisions and that experience that a kind of a human has. And then, long run, you know, these systems will garner increasingly more of those kind of judgment calls. And so you're looking for, as an AI native company, for a few things. You know, employees who can rapidly adapt, have the ability to update their mental models quickly, because the reasoning models every quarter get that much better. So something that you thought the agents and the models couldn't do a good job at, they tend to kind of improve really, really quickly. You look for folks that pride themselves not in the mastery of one specific narrow task, but in building the engine that can do the task kind of for you in the future. And I think, like, these kind of characteristic traits are kind of quite different than your standard enterprise, right, that might look for someone who's great at medical writing or just does kind of, you know, SOPs or drafts documents or reads legal contracts very narrowly. You want the lawyer that can build the system.
Dan Murphy: Really interesting. And I also wanted to talk about the financing of all of this as well. And Mitch, this is where you come in. You mentioned before that you haven’t written a check yet for an AI company. Maybe you can just expand on that for us. And when I speak with capital allocators and investors on CNBC, one thing they often say to me is that they choose to allocate capital based on the personality, whether or not they actually like the person or they believe in the person that they’re speaking to. What happens when that person is ultimately replaced with AI? Would you put your money behind an AI founder?
Mitchell Green: Well, look, the quant hedge funds have been around for 10, 15, 20 years in some cases, like D.E. Shaw, Renaissance. And so this stuff has been going on for a long time. I think you're going to need somebody to still run them. One thing, and I'm actually very curious to hear Richard's view on this, given he was the chief scientist at Salesforce, is, like, incumbent versus new. It is our belief, and by the way, it is an amazing time to be an entrepreneur right now because there is, you know, Matt Kohler from Benchmark is a very good friend of mine, and he says, look, you know, you want to be an investor when, like, capital is tight, because then you, like, you get better deals out of the entrepreneur. The reverse is true when there's tons of capital. So it is an amazing time to be an entrepreneur right now and start these companies, because there is so much money flowing into them. And look, it is. It's all about the people. It's about the people, the CEO. We tend to invest in businesses that are slightly larger, 10, 20, 50, 100 million revenue, 50, 60 percent of our businesses that we invest in are profitable companies. But at the early stages, it's all about the founder. Like, if I was an early stage investor, I don't even know Richard, but given he was the chief scientist of Salesforce, like that's an amazing background. Like those are the types of people that you want to back. The one thing that we struggle with in a lot of these companies, not all of them, some of these companies have incredible metrics, is we focus a lot on gross dollar retention. And what gross dollar retention is: if you end the year with $100 of sales, what did you, you know, end the next year at from those same exact customers, not including upsells? And what we're seeing in a lot of these companies is, you know, like, good gross dollar retention is 90-plus percent. Great gross dollar retention is like 95-plus percent. If you look at, like, the enterprise, I obviously don't know what Salesforce's gross dollar retention on the enterprise side is, it's probably like 98 or 99 percent. Because once you install Salesforce and spend millions or tens of millions of dollars installing the product, it's effectively impossible to get rid of it. And so what we're seeing is, with a lot of these companies, lots of people are trying them. If you look at every Fortune 500 company or every Fortune 1,000 company, if you listen to their earnings calls, the word AI would probably be mentioned in every one of them. So everybody is trying this stuff. Some of these companies are delivering on incredible promises and some of them are like, yeah, it doesn't really work or, you know, I can't get rid of all these people. But, it's definitely making me more productive though. And so that's been our biggest challenge so far.
Dan Murphy: How do you see this being disrupted though and what do you think it means for the role of a traditional VC in the funding ecosystem?
Mitchell Green: So what do I think it means? There's too much money chasing, like, too few great entrepreneurs. That's just the reality of it. I do think, though, the whole COVID thing, I mean, you're seeing entrepreneurs, you don't have to be in Silicon Valley. Where 10, 15 years ago, Sequoia or Benchmark would not, I mean, there are exceptional cases, but really you needed to be in San Francisco to get some of these great venture firms to fund you. I think now, you know, we have a company in our portfolio, Grafana Labs, that's nearly 400 million of ARR and growing 70% a year in the DevOps space, and like they don't have, they don't have offices, it's all remote. And I think, I suspect the percentage of new companies being started remote now is exponentially higher. So it's great, entrepreneurs can work anywhere.
Dan Murphy: Wow, it's so interesting. Sarah, I want to bring it back over to you because we are five years post-pandemic now, and there is still a conversation taking place, an argument if you will, about the future of remote work. But you started a really interesting conversation last year when Lattice rolled out a digital workers feature that basically put AI agents into an org chart and then assigned them a responsible manager. So I want you to speak to this. People said it put bots on the same level as humans. So when you made that decision, were you thinking, okay, AI should be treated as an employee and a tool, or were you thinking, this is where AI is moving in the future, so these AI agents need to be part of the org chart on a full-time permanent basis?
Sarah Franklin: what we were thinking with that, with that innovation, was that we want to prioritize the success of people as the primary. And when you’re working with AI agents, it’s important to understand what they’re assigned to be doing, and for other people in the organization to understand what they’re doing. And also to have transparency within the workplace. CEOs right now, everyone’s looking at how to reshape their company in this new age. You have also, I want to add, I think this is an incredible time for entrepreneurs and venture to go outside of what is the traditional entrepreneur and look to incredible people that have ideas that might not have come from a technical background. And this could be a great creative age for us where so many more ideas are able to be, you know, unleashed and built and created. But this world requires us to look at this reshaping and hold ourselves accountable. Companies spend today about 70% on average of their total capital on headcount. And as we shift that around, we need to be very, very responsible and transparent for the decisions that we’re making with headcount dollars, if we’re going to replace them with AI agent dollars, and hold ourselves accountable to that success. And so that’s really where it comes from, is that there’s a balance of excitement for all of the innovation and opportunity that awaits us, with our obligation to be responsible in how we deploy this. And that was really the thoughts behind it. Not saying that AI is human, more saying we need to clearly identify where AI is. And as AI speaks on behalf of brands and people, makes decisions on behalf of brands and people, and integrates with other systems, we need to be able to track that. And just like we track any interaction I may have in my CRM, or my JIRA, or my source code repository, we need to know where it came from, so that we can hold ourselves accountable to the outcomes.
Mitchell Green: Sarah, when do I get to write my reviews using AI in Lattice? We're customers. Are you guys working towards it? It's amazing. I just wrote a recommendation. I had a bunch of recs that I had to write in the last month for business schools, for people that work for us. It is incredible how easy this stuff is. I wrote a couple of them manually, and then went and used some of the systems to do it. How far away do you think you are from making the review process at the end of the year just so much easier for people?
Sarah Franklin: We have that today. You do have to turn it on. We do believe also that it’s a choice of whether you want to bring AI into your system, so you need to have your administrator toggle it on. But that is there.
Mitchell Green: Is it a well-used feature so far, or is it still pretty limited?
Sarah Franklin: Yes, about two-thirds of our customers are using it. Also, AI helping to create custom growth plans and career coaching, democratizing this for all employees. So, within systems like Lattice, it is so powerful what we can do to make things more personalized. All of your onboarding is no longer, you know, one size fits all. It’s bespoke and real time. And this is an incredible age. But, yes, today in Lattice, you can do that. And thank you for being a wonderful customer.
Richard Socher: We might get into a place where you say, take these three bullet points and write a really long, nice letter of recommendation. And the other side will say, summarize me this long letter into the three main bullet points after.
Mitchell Green: Make me a great review. It’s already happening. Young, trustworthy.
Sarah Franklin: The one thing, though, that is an interesting topic is we do believe at Lattice that AI should not do the stack rank and then decide, like, these are the performers that you might need to have conversations with. Because you do need to have the human decision. What's incredible is that, within Lattice, we also integrate to your other systems of record. Again, even if you have an agent working for you within a source code repository or a CRM, being able to pull that information in, the human brain cannot understand all of the information and the nuance to it. So, I think that it's going to help us have better understanding. And also go into this great age of better collaboration. Because language is no longer a barrier. Understanding is no longer a barrier. We can have information distilled for us in a way that helps us be better together as humans.
Dan Murphy: To play devil’s advocate here, sorry, just to jump in, I also think that while we’re seeing this rapid disruption take place within organizations and in industry with technologies like yours, at the same time, there’s a genuine threat that all of this is going to lead to widespread job losses and perhaps even social collapse. Can you speak to that?
Sarah Franklin: This is why reskilling and staying ahead of this is paramount. We need to define: what are the roles of the future? What are the things that people need to be doing as they work together with AI? How can we redeploy our talent? And again, I just go back to holding ourselves accountable to the decisions. We can go to the moon right now, but we can't sustain human life at scale on the moon, so we're not sending people at scale to the moon right now. So maybe a weird analogy, but we need to hold ourselves accountable as we bring AI in. Let's do it responsibly so that the humans can also adjust as we're working together with AI.
Mitchell Green: I think you're also going to have a lot of job creation, too, by the way. If you think about when the iPhone came out in 06, 07, whatever it was, Uber is a $100 billion company. Airbnb is a $100 billion company. Uber could not have existed before this. We think you're going to see a whole new class. I'm sure there are ten people going after Sarah's legacy business, where you're going to use AI to build a big business. Your business already existed, but you probably have ten AI-first ones coming after it. A lot of those companies will just disappear. Where are the opportunities going to be? The companies we're not even thinking about yet that are going to be these next giant businesses, and that's what's going to lead to jobs.
Kanjun Qiu: There's also something really important that you said about empowering people and upskilling them. I think the narratives that we tell about agents actually matter a lot. Right now we talk about agents as if they're autonomous bots that are supposed to replace tasks and supposed to replace people. But kind of why I've said they're just software is I think it's actually really important to think of them as tools for people and to think about how do we use those tools and train people to use these tools. Because what we see is there's a huge difference between someone who's really good at using AI systems, like prompting them, et cetera, versus someone who's not. They get so much more value out of it. And so thinking of an AI as a tool that is supposed to be custom built for people is really important. And then secondly, training them in order to use those tools. And what we find in, we work with software developers. In software development, AI is actually teaching junior software developers how to be better. So a junior software developer will ask a system to generate code, and then to explain that code, and then to explain why there's a bug. And we see them learning and upskilling really fast. And we also see people who are non-technical on our team starting to learn to code and starting to learn to make software. I think the most empowering view of AI agents is actually as one that allows everyone in the world to create their own software on their computer and to mold their computer to what they want. And when you think about a one-person enterprise, that's a situation where you've got this system that's molded to you and helping you do everything that you want to do in this company or that you want to deploy.
Benjamine Liu: I'd like to offer a bit of a different perspective because I do think there's a lot of talk about retraining, upskilling. But there's something quite unique about the pace of developments and how quickly these models are getting better. And specifically, where we're seeing these AI systems do the work of entire teams again. And I think the secular trend is actually that there's going to be huge amounts of job displacement, unless, to Sarah's point, we as a society make a decision that there are certain sets of jobs that we don't want to displace this quickly. We've thought deeply at Formation Bio around this issue because, to your point, it could feel very dystopian, very unsettling. And the way we think about it is, today there are a lot of amazing drugs that are discovered, and with AI, the discovery is only getting more efficient. A lot of drugs can't get developed because of the high cost and time of drug development. And so we kind of did this exercise, like kind of deep, like first principles, moral kind of exercise, and said, well, if every job in the pharmaceutical industry goes away, but we're able to do drug development drastically more efficiently, and we can get more medicines out to patients, cheaper, lower cost, because drug development is now not 2.6 billion for every new drug, it's a fraction, because now you're scaling with AI versus humans. That's net good for the world. I'm not sure that's the case for all of this kind of job displacement. Because, you know, we do something pretty sophisticated, and if we kind of zoom out and think about how this, you know, kind of evolves over time, you know, reasoning models went from, you know, the top 100,000 coder to the top 150 in Elo scores just with those three kind of models. Probably there won't be a human that's better than the AI kind of coder very shortly. Even when we think about something that's as complicated as drug development, you know, our chief development officer is using this kind of tool now, you know, from a reasoning perspective, to think about his kind of chain of thought and training the AI systems to say, well, you know, why did we think this one drug had a higher probability of success than this other drug, right? And the consequence of that over time, as we hire the great kind of human experts as we discussed, is the AI systems will be able to kind of think through a lot of these multidimensional things probably better than we can in the long run. And so I think the whole theme of this is we're talking about a one-person unicorn. We're talking about Uber, we're talking about NVIDIA, a $3.5 trillion company that's hiring orders of magnitude less people, right? Think about the market cap per human ratio. What is a tech company? What is an AI native company? They're companies that scale super nonlinearly by definition. So they're eating a lot of productivity and they're hiring less people. So I think that's something that we just have to be open about, because I think everything that I've seen in the long run is that if we don't take a proactive approach, and I totally agree with Sarah, and think about, you know, how to do this responsibly, the natural kind of course is that this displacement is going to first actually affect our young people, right? Because these are the analyst jobs that we talked about, where Google, Microsoft, all these big companies are hiring less entry-level software engineers.
And we kind of all know the challenges when you have really young talented people that feel disenfranchised.
Richard Socher: So I think medicine is a great example where everyone agrees that it should be mostly outcomes and output-based. No one wants more jobs in medicine. They want more drugs developed that work well for people. They want healthier people, faster, cheaper, not more jobs. So it's an interesting one. But I think it helps us a little bit to zoom out, think about previous disruptions. 150 years ago, over 90% of people worked in agriculture. If you told those people there's going to be these agents called tractors that will take most of your jobs, they're like, oh, what else are we going to do? And now they want to be social media influencers. There's so many new roles that zero people were able to anticipate 150 years ago, right? So Google has led to more jobs, not less jobs. Exactly. And so I think what will obviously happen is that we'll all work at much higher levels of abstraction, right? Instead of doing very small, menial, repetitive tasks, all of those will go away, and we have to execute. I think maybe to talk to Kanjun's point very concretely, when she talks about software, how you can just say, I want the new app. I want an app where it shows me the things where I'm on a panel at WEF, and this app should also triangulate where I get the best food option that doesn't suck in Davos. And at the same time, I don't want to miss these two politicians. And then you have a custom app on your phone that you just built. Very few people are thinking, wow, and now if that app takes off, and other people are like, should I prompt this myself? Or can I just pay for that because I don't have the time right now, even the 10 hours to prompt this thing properly? That changes the entire economy. And I think what that ultimately means for a lot of CEOs is that we have responsibilities to help people upskill, but also it's a government question of social safety nets and continuing adult education.
Sarah Franklin: I don't disagree. I think an X factor in this is the pace. And you look at banking, when the ATMs came out, and the amount of time it took for tellers to go into other jobs that were being created, the timeframe was long. The pace is not long here. The pace is exponentially fast. And this is a curve where the re-skilling is not going to keep up. And so this is why, if I rewind, you mentioned social media and the iPhone, those were innovations in the early 2000s. It wasn't until over a decade plus later that we had things like GDPR to regulate data privacy, or the iPhone came out with their own way for us to self-regulate our addiction to mobile phones with screen time. But fast forward a decade and we're living in an anxious generation and we're addicted to our devices. And so I just want us to agree that hope cannot be our strategy. And this is why we feel that we need to have a clear way to manage AI in the digital workforce, and that we need to go into this eyes wide open, very clear eyed, so that we can make the right decisions for society, so that we can steer us to a more utopian than dystopian outcome, and so that we can be proud. My own daughter, my oldest daughter is 20 and in college, and I worry for her future where I see entry level jobs evaporating. And I tell her, this is why you need to not just be AI literate, you need to be AI proficient, and this is what we need to be teaching. Even ourselves, myself, I turned 50 this year, I'm an old dog and I teach myself new tricks every day, and I go in and make sure I keep myself relevant. This is the responsibility for us and all of us here at the forum, so that we go into a utopian society with this incredible technology.
Dan Murphy: Absolutely fascinating. I’ll jump in.
Kanjun Qiu: The last thing I want to say is I also think the levers and tools that we give people to be able to modify and control their agents is really important. And we think of this in terms of, there's a term that we use when it comes to software that we call resistibility. Today we live in a software environment where actually a lot of the software that we're surrounded by is not that resistible. The notifications we get, the way our social media feed is structured. Our social media feed is actually the first example of an agent that is really impacting our lives, that is making decisions on our behalf. And so I think going into the future, to your point about people being able to make their own app, I'm actually gonna take that one step further and say it's actually really important for most software not just to be creatable, but also editable, remixable, so that I can have a lot more control over my digital environment and my future. And I think that's gonna create a world where it's a little bit more empowering. And at the same time, we do really need to think about job loss and re-skilling and think about, I think the narrative here is really important, because if we think about agents as replacing people, that's not great, but if we think about how do we build tools, more tools so that people have more levers, more choice, more empowerment at work, that's important. And one final thing on this is if we say, okay, we're gonna build an AI agent that automates jobs, then I as CEO will think, okay, how do I find swaths of jobs that I can automate? But if we say, we're gonna build AI tools that help all of our people develop software, or we're gonna build AI tools so that each person can build their own agents, then I'm gonna think, oh, how do I train my team so that each person knows how to make agents so that they can automate their own workflow? And that's a really different set of actions, a really different philosophy. And so I think we need to be talking about this technology quite differently.
Dan Murphy: It's a fair point. We have a really good-looking audience surrounding us in the room, so I think it's their opportunity to ask some questions as well, if you would like. If you have any questions for our panel, please raise your hand. Now is your time to speak. We'd love to hear from you. And if there isn't anyone who has an immediate question, I'm just looking around. No one's curious to ask a question. I've asked a lot of questions already. Okay, we'll wait a while. I have one more question. While you maybe do some idea generation in the room to think about some questions, I want to know from our panel: what is one thing that an AI agent can automate for you today that you are willing to allow it to automate, and why? And what are you not willing to allow it to automate, and why?
Richard Socher: I'll just give a concrete example from one of our customers. There's actually a company called Mimecast, and they came to us and said, we compared you to ChatGPT, everyone, you're better, we'll buy 200 seat licenses. We're like, great. And then we did a workshop still with them, and then they bought 500 more seat licenses after the workshop, and that's when I realized, even a tech forward company that understands all this impact, it helps to realize, wow, engineering, HR, recruiting, sales, service, marketing, everyone can benefit from this technology. And one example that I hadn't thought of was a marketer. A marketer basically, they said, every month or so, I get a big PDF file of a bunch of new features, and I'm told to basically then write two email marketing campaigns and then write three tweets and maybe five LinkedIn messages about these new features. And then I go out publicly and see, well, are they actually novel? Does the competition have them? What kinds of features do they have? How do they differentiate? And then basically we said, well, just describe exactly what you just said to your agent, describe those end steps, and then drag and drop that PDF into You.com, and then it'll just go through those steps. And they're like, whoa, that's basically 50% every four weeks. That's 50% of my week or more, and it just did it. They told us a few days after, and they're like, now I understand it. And then that's how that company realized, wow, even marketing can benefit from an agent.
Sarah Franklin: Something that I find exciting right now, just a little bit of a different lens on the job piece, is that agents are able to help us find talent, people talent in other places. Like Nancy’s company, MoonHub, is an incredible one where it helps you recruit. And if you want to hire people, you can hire them from all over the world and find people that you might not traditionally be looking for. And so that’s something where for us to look at, how can we put an agent to help us recruit better people that are from anywhere, from different walks of life, but have the skills that we need, I think this is an incredible agent for us to be using right now to bring people into the workforce from all over the world. What are you unwilling, unwilling to… Oh, you asked not willing. Yeah, what are you unwilling to allow this thing to do, these things to do? So I mentioned earlier, but not willing to do the performance assessment to give somebody a score. It still needs to have the human that can take the information, synthesize, understood. But when you’re evaluating people, that still needs to be done by a person.
Benjamine Liu: Our kind of philosophy is there's really nothing. There are certain judgment calls that you still want a human in the loop for in the long run. I think that's where the world is going to go. And maybe to your earlier question, what are some hallmarks of AI native companies? One is the talent that you hire in. And so we really struggle to hire anyone who isn't proficient in the tools today. We're two, three years into the GPT moment, right? And if you haven't started using the tools today, it kind of not only means you're not an early adopter, but you're not using some of the best practices. And having that inherent curiosity, I think, is a really important kind of thing. You mentioned kind of Google, you know, John Doerr and Mike Moritz are investors in both our company and kind of Google collectively. And they said what made Larry and Sergey special was they always used what were kind of the best-practice tools. AI, clearly, you know, is one of these kind of things. And then we look for folks who also are excited about a world where their org is actually completely transformed by AI. And so instead of kind of that normal friction, you know, that maybe a bigger enterprise might face, you know, you have a really aligned group that is excited and actively always looking to kind of update their tools. There is one thing, though, that we, and I personally, don't replace with AI. And that's any kind of relationship building. If we're going to have dinner, or I want to create a really kind of meaningful relationship where I want to be a caregiver and deliver care to people I really care about and spend time with, those are things that I think are deeply human. I have a utopian version of AI, and I think it's relevant to what you said, Richard, earlier. If we get this right, we might have the ability to have actually 20-hour workweeks where your AI systems are doing the consequential job, and there's certain kind of responsibility and important decisions that you always want the humans to have some sort of oversight over. And then everyone gets to spend their time on things that truly matter, right? Time with loved ones, living longer and healthier, going and really understanding what it means to be deeply human, people creating art, people having time to really hone their hobby, versus really needing to do these tasks that we've historically maybe found mission and purpose in. If we kind of zoom out and say, is this actually where you want to be spending the majority of your time, which historically has been work? I think if we get some of the broader regulations and how we want to deploy these things right, there's a really kind of compelling future ahead of us.
Dan Murphy: Can we just pause for a second? Does anyone have any questions they’d like to ask just in the moments that we have left? No? Or maybe one in the front? Right here behind you. Yes, sir. You can introduce yourself.
Audience: Should I stand? Okay. Yeah. Dan Vedat, founder and CEO of Huma. Huma is one of the leading healthcare AI companies. So I'm a big believer in AI myself and we use it within the healthcare space. I guess the question I have is, in human history, when you look at the past, you know, thousands of years, nothing has been achieved easily. Every, you know, iteration, every evolution we've gone through, it took a long time. And when we listen to everybody in the past two years, it feels like AI is taking over really easily, too easily, which, kind of with the whole perspective of NRC and all these things, doesn't add up in my mind. And then it brings the other question: are we, all of us, and I'm a beneficiary of the AI momentum, are we hyping it? Because we all benefit from it. More funding comes to it, and it creates more momentum, maybe it benefits us, maybe it accelerates, but by hyping it, we create some other consequences that we may not be thinking about correctly at the moment. And also then lots of resources get wasted, and some of the paths that we could have explored, we may miss. I was just wondering what the panel thinks about that.
Kanjun Qiu: I think you’re not wrong about the fact that things are being adopted really easily, and it feels much easier than it should be. And I think you’re hearing some success stories here around adoption, but there are a lot of failure stories where the success stories, how real are the success stories?
Audience: We all can say we do this, we do that. If I come and like dig in, I’m like one of the biggest healthcare AI companies in the world, in our category. And we use AI, and I know lots of companies. But when you dig in, it’s not as much yet. Now maybe one year from now, magic happens. I’m not saying it’s not possible. But I feel there’s a sense of hyping, too much.
Dan Murphy: You don’t think? Yeah, there’s definitely.
Richard Socher: Yeah, so like the future's already here, it's just not equally distributed. Exactly. And so we've had agents for a year and a half. Some in the world didn't notice, we didn't market it enough. Other people now have it, now they talk about forces of agents, and now the people get it, right? So that same thing will be true. Yesterday, the president or chairman of the International Monetary Fund said that she expects a massive 1% improvement to the world economy based on AI. And, you know, in Silicon Valley we all talk about thousand percents and things like that, right? Like, but the truth is, yeah, like, there are some stretches of Germany and the US that don't even have broadband internet yet, right? Like, Biden's tried to spend billions of dollars, like, very little new broadband internet coverage was created. So, when you don't have internet, you're not going to have agents, right? So, there, yes, the future is going to be here. Agents will be there. There will be companies like yours that are full on in, right? So, there will be a gap more and more. But doubling down on your very positive and utopian things, I do think whenever technology gets into the field and into an area of products and services at scale, we eventually have a phone where a middle-class teenager and a billionaire use the same iPhone. And if you think about what are the products and services that current billionaires have that a middle-class teenager doesn't, and you realize, like, the potential of AI, there is a lot of good reason to be very positive. We will all have personal assistants. There aren't enough people, and logically, not every person can have a personal assistant, because then, you know, personal assistants would have to have a personal assistant. But we'll have that. We'll have personal healthcare teams for us, personal doctors. We'll have personal tutors for our kids. Those are all goods and services that, like, only the wealthy can afford right now. And 100 years from now, it will be obvious that, of course, you're going to ask your assistant to book that trip. Of course, you're going to ask them. You're going to ask your agents, and people say, oh, yeah, I'll just ask this agent to book my trip, London, hotel, flight. Oh, and it's done. And I'm like, no way.
Mitchell Green: I just want Microsoft Outlook to success. If I put something on my calendar, it’s like, only you should buy it.
Benjamine Liu: Operator just came out yesterday. I thought it was pretty compelling.
Dan Murphy: So those kinds of booking things are beginning to happen. Okay, team, we are in overtime now, but I just wanted to finish up with one final thought from each of you on this question, and it's about where we go from here. When do you think we'll see the first one-person enterprise become a billion-dollar company, and in what industry or sector will it be?
Mitchell Green: Revenue or valuation? Yeah, let's talk about valuation. Oh, valuation can happen very quickly.
Kanjun Qiu: I think actually we've already seen it in a way. WhatsApp is an example of a company that was valued at a huge amount and had 17 or 30 people, I think, when they were acquired for 17 billion dollars. Midjourney is another good example: a very small team with a huge amount of revenue, and we're seeing that more and more with AI companies. I think the places where it'll happen easiest and first are bottom-up consumer or prosumer products that don't require large go-to-market teams. Go-to-market is actually one of the places where, as you said about relationships, it's going to be difficult to automate all of these relationships with other people. Enterprises buy our products not because our products necessarily perform better but because they trust us more, and that human-to-human trust is still very necessary and very important. And when it comes to agents, there's this issue around trust. You're asking, what do you not let your agent do? Decision-making is something we should be cautious about letting our agents do too much of. So when it comes to decisions and relationships that are really important or high value, there are some companies that naturally require a lot more of those and others that require less.
Dan Murphy: Mitch?
Mitchell Green: I think it'll be a long time until you see a one-person public company, but there are already quant hedge funds that run billions of dollars with a couple of people; those quant hedge funds have been around for a while. But again, a two-person quant hedge fund that's making, you know, 50 million dollars a year is not going to be a public company. They're great businesses, but they already exist.
Sarah Franklin: I agree. I think it'll be a while before we see a one-person company, but another interesting question is when we'll have AI running a company, or even running a nation. That might be part of the future as well.
Dan Murphy: Ben?
Benjamine Liu: I think it depends how you define it. NVIDIA is at about $100 million per employee right now, a $3.5 trillion market cap company. I think we have the potential to get there. My view is that it's going to take a long time, because being an entrepreneur is a lonely journey and you want a co-founder; I'm very grateful to have one. So you might want two people. The potential to do it, I think, is earlier than people think. The desire to do it is another matter, because companies are still started by humans, and I think you'll want some people to share the journey with.
Dan Murphy: Rich, you get the final word. Bring it home for us.
Richard Socher: You all said it very well. Co-founders are amazing. It's lonely at the top, and if you're the only one and there are only agents around you, that's going to get kind of boring too, to some degree. I think we heard it: it's somewhere between minus five years and plus 50 years, depending on how we define it.
Dan Murphy: Okay, I'll let you off. That was a pretty easy one, but I appreciate the conversation. Ladies and gentlemen, please thank my panel, Kanjun and Benjamine, Mitchell, Sarah and Richard, and on behalf of the World Economic Forum, thank you so much for watching and thanks for tuning in.
Dan Murphy
Speech speed
181 words per minute
Speech length
1271 words
Speech time
419 seconds
AI enables one-person enterprises to scale
Explanation
Dan Murphy introduces the concept that technology, particularly AI, has evolved to allow businesses to scale without relying on traditional large teams and physical infrastructure. This enables entrepreneurs to build high-performing companies with minimal human resources.
Evidence
Entrepreneurs today are leveraging technology to build nimble, high-performing companies, sometimes with just one person at the helm.
Major Discussion Point
Impact of AI on entrepreneurship and business
Agreed with
Agreed on
AI is transforming business models and enabling one-person enterprises
Kanjun Qiu
Speech speed
201 words per minute
Speech length
1436 words
Speech time
426 seconds
AI agents are fancy software that can be customized
Explanation
Kanjun Qiu argues that AI agents should be viewed as advanced, customizable software rather than autonomous entities. This perspective emphasizes the importance of training people to use AI tools effectively.
Evidence
We see them learning and upskilling really fast. And we also see people who are non-technical on our team starting to learn to code and starting to learn to make software.
Major Discussion Point
Impact of AI on entrepreneurship and business
Benjamine Liu
Speech speed
189 words per minute
Speech length
2156 words
Speech time
680 seconds
AI native companies scale with AI rather than humans
Explanation
Benjamine Liu defines AI native companies as those that scale their productivity and output using AI agents and models rather than human employees. This approach allows for exponential growth with minimal human intervention.
Evidence
An AI native company is a subset of that: instead of scaling, kind of, nonlinearly, it actually just scales with AI. Agents, models, systems with kind of a human in the loop until, frankly, the human is out of the loop.
Major Discussion Point
Impact of AI on entrepreneurship and business
Agreed with
Agreed on
AI is transforming business models and enabling one-person enterprises
Differed with
Differed on
Pace and extent of AI adoption
AI will lead to significant job displacement, especially for entry-level roles
Explanation
Benjamine Liu argues that AI will cause widespread job displacement, particularly affecting entry-level positions. He suggests that companies are training AI systems to perform tasks traditionally done by junior employees.
Evidence
The entry-level analyst kind of roles, you're seeing this even at a Google and a Microsoft, those are the roles that are getting displaced, because the companies are training the AI agents rather than the entry-level employees.
Major Discussion Point
Job displacement and societal impact of AI
Agreed with
Agreed on
AI will lead to significant job displacement and require reskilling
Differed with
Differed on
Job displacement due to AI
AI could enable shorter work weeks and more leisure time
Explanation
Benjamine Liu presents a utopian vision of AI where it could lead to significantly reduced work hours. This would allow people to focus more on personal relationships and pursuits outside of work.
Evidence
If we get this right, we might have the ability to have actually 20-hour workweeks where your AI systems are doing the consequential work, and there are certain responsibilities and important decisions that you always want the humans to have some sort of oversight over.
Major Discussion Point
Job displacement and societal impact of AI
Human oversight is still needed for important decisions
Explanation
Benjamine Liu argues that while AI can automate many tasks, human oversight is still necessary for critical decision-making. This ensures that important judgments are not left solely to AI systems.
Evidence
There are certain judgment calls where you still want a human in the loop in the long run.
Major Discussion Point
Responsible development and use of AI
Agreed with
Agreed on
Human oversight and decision-making remain important in AI implementation
Mitchell Green
Speech speed
202 words per minute
Speech length
1238 words
Speech time
365 seconds
AI allows companies to operate with minimal physical infrastructure
Explanation
Mitchell Green suggests that AI enables companies to function efficiently without traditional physical infrastructure. This shift allows for more flexible and distributed work arrangements.
Evidence
Grafana Labs, that's nearly 400 million of ARR, growing 70% a year in the DevOps space, and they don't have offices, it's all remote.
Major Discussion Point
Impact of AI on entrepreneurship and business
Agreed with
Agreed on
AI is transforming business models and enabling one-person enterprises
Relationship-building should remain a human activity
Explanation
Mitchell Green emphasizes the importance of human interaction in building relationships, particularly in business contexts. He suggests that this aspect of work is not easily replaceable by AI.
Major Discussion Point
Responsible development and use of AI
Agreed with
Agreed on
Human oversight and decision-making remain important in AI implementation
Sarah Franklin
Speech speed
180 words per minute
Speech length
1574 words
Speech time
524 seconds
AI is creating a new age of human-agent collaboration
Explanation
Sarah Franklin describes the current era as one where humans are increasingly collaborating with AI agents. This collaboration is changing how work is done and how companies approach talent management.
Evidence
This is a great new age of collaboration that we’re moving into where humans are collaborating with agents.
Major Discussion Point
Impact of AI on entrepreneurship and business
Agreed with
Agreed on
AI is transforming business models and enabling one-person enterprises
Reskilling and upskilling are crucial as AI advances
Explanation
Sarah Franklin emphasizes the importance of continuous learning and skill development in the face of rapid AI advancements. She argues that this is essential for maintaining relevance in the workforce.
Evidence
This is why reskilling and staying ahead of this is paramount. We need to define what are the roles of the future? What are the things that people need to be doing as they work together with AI?
Major Discussion Point
Job displacement and societal impact of AI
Agreed with
Agreed on
AI will lead to significant job displacement and require reskilling
Companies need to be transparent about AI use and hold themselves accountable
Explanation
Sarah Franklin argues for transparency and accountability in the deployment of AI within organizations. She emphasizes the need for clear identification of AI use and its impact on human roles.
Evidence
We need to be able to track that. And just like we track any interaction I may have in my CRM, or my JIRA, or my source code repository, we need to know where it came from, so that we can hold ourselves accountable to the outcomes.
Major Discussion Point
Responsible development and use of AI
AI should not be used for performance evaluations of employees
Explanation
Sarah Franklin argues against using AI for employee performance evaluations. She believes that human judgment is still necessary for assessing employee performance accurately.
Evidence
AI should not do the stack rank and then decide, like, these are the performers that you might need to have conversations with. Because you do need to have the human decision.
Major Discussion Point
Responsible development and use of AI
Agreed with
Agreed on
Human oversight and decision-making remain important in AI implementation
Richard Socher
Speech speed
199 words per minute
Speech length
1572 words
Speech time
473 seconds
AI enables individuals to become managers of AI
Explanation
Richard Socher suggests that AI will transform every employee into a manager of AI systems. This shift will require new skills in directing and overseeing AI tools to maximize productivity.
Evidence
We as CEOs are going to be the first generation that manages both people and AI. So we have learned to manage. But I think the most interesting change here is actually that every individual contributor, every employee, is going to become a manager of AIs.
Major Discussion Point
Impact of AI on entrepreneurship and business
Agreed with
Agreed on
AI is transforming business models and enabling one-person enterprises
AI will create new types of jobs we can’t yet anticipate
Explanation
Richard Socher argues that AI will lead to the creation of entirely new job categories that we currently cannot predict. He draws parallels to historical technological revolutions that led to unforeseen job creation.
Evidence
150 years ago, over 90% of people worked in agriculture. If you told those people there’s going to be these agents called tractors that will take most of your jobs, they’re like, oh, what else are we going to do? And now they want to be social media influencers.
Major Discussion Point
Job displacement and societal impact of AI
Agreed with
Agreed on
AI will lead to significant job displacement and require reskilling
Differed with
Differed on
Job displacement due to AI
Audience
Speech speed
155 words per minute
Speech length
294 words
Speech time
113 seconds
AI may be overhyped and adoption may be slower than expected
Explanation
An audience member suggests that the rapid adoption and impact of AI might be overstated. They question whether the industry is hyping AI capabilities and adoption rates beyond reality.
Evidence
When we listen to everybody in the past two years, it feels like AI is taking over really easily, too easily, which, from that kind of perspective, doesn't add up in my mind.
Major Discussion Point
Job displacement and societal impact of AI
Differed with
Differed on
Pace and extent of AI adoption
Unknown speaker
Speech speed
0 words per minute
Speech length
0 words
Speech time
1 second
One-person billion-dollar companies are already emerging in some sectors
Explanation
This argument suggests that one-person enterprises valued at billions of dollars are already a reality in certain industries. It implies that AI and technology are enabling individuals to create highly valuable businesses with minimal human resources.
Evidence
WhatsApp is an example of a company that was valued at a huge amount and had 17 or 30 people, I think, when they were acquired for 17 billion dollars. Midjourney is another good example: a very small team with a huge amount of revenue, and we're seeing that more and more with AI companies.
Major Discussion Point
Future of one-person enterprises
Public one-person companies are unlikely in the near future
Explanation
This argument suggests that while small teams can create highly valuable private companies, it’s unlikely to see publicly traded companies run by a single person in the near term. It acknowledges the current limitations of AI in completely replacing human teams for public companies.
Major Discussion Point
Future of one-person enterprises
AI-run companies or nations may be possible in the future
Explanation
This argument speculates on the possibility of AI systems managing entire companies or even nations in the future. It suggests a potential future where AI could take on high-level decision-making roles traditionally held by humans.
Major Discussion Point
Future of one-person enterprises
Co-founders will likely remain important despite AI capabilities
Explanation
This argument suggests that despite advancements in AI, having human co-founders will remain valuable for entrepreneurs. It emphasizes the social and emotional aspects of entrepreneurship that AI cannot fully replace.
Major Discussion Point
Future of one-person enterprises
Timeline for true one-person enterprises varies widely
Explanation
This argument acknowledges the uncertainty in predicting when true one-person enterprises will become commonplace. It suggests that opinions on the timeline for this development vary significantly among experts.
Evidence
It's somewhere between minus five years and plus 50 years, depending on how we define that.
Major Discussion Point
Future of one-person enterprises
Agreements
Agreement Points
AI is transforming business models and enabling one-person enterprises
AI enables one-person enterprises to scale
AI native companies scale with AI rather than humans
AI allows companies to operate with minimal physical infrastructure
AI is creating a new age of human-agent collaboration
AI enables individuals to become managers of AI
All speakers agree that AI is fundamentally changing how businesses operate, allowing for more efficient scaling with fewer human resources and enabling new business models.
AI will lead to significant job displacement and require reskilling
AI will lead to significant job displacement, especially for entry-level roles
Reskilling and upskilling are crucial as AI advances
AI will create new types of jobs we can’t yet anticipate
Multiple speakers acknowledge that AI will displace many jobs, particularly entry-level positions, but also emphasize the need for reskilling and the potential for new job creation.
Human oversight and decision-making remain important in AI implementation
Human oversight is still needed for important decisions
AI should not be used for performance evaluations of employees
Relationship-building should remain a human activity
Several speakers agree that while AI can automate many tasks, human judgment and interaction remain crucial for certain aspects of business, particularly in decision-making and relationship-building.
Similar Viewpoints
Both speakers emphasize the importance of viewing AI as a tool that should be customized and managed responsibly, with transparency in its implementation.
AI agents are fancy software that can be customized
Companies need to be transparent about AI use and hold themselves accountable
Both speakers present optimistic views of AI’s potential to transform work life, either by reducing work hours or creating entirely new job categories.
AI could enable shorter work weeks and more leisure time
AI will create new types of jobs we can’t yet anticipate
Unexpected Consensus
Limitations of one-person enterprises
Co-founders will likely remain important despite AI capabilities
Public one-person companies are unlikely in the near future
Despite the overall enthusiasm for AI-enabled one-person enterprises, there was unexpected consensus that human co-founders and larger teams will still be valuable or necessary, especially for public companies.
Overall Assessment
Summary
The speakers generally agree on AI’s transformative impact on business models, the need for reskilling in the face of job displacement, and the continued importance of human oversight in certain areas. There is also consensus on the potential for AI to enable more efficient, scalable businesses, including one-person enterprises.
Consensus level
The level of consensus among the speakers is relatively high on the broad implications of AI for business and work. However, there are nuanced differences in their perspectives on the timeline and extent of these changes. This high level of agreement suggests a shared understanding of AI’s potential and challenges among industry leaders, which could influence policy-making and business strategies in AI adoption and workforce development.
Differences
Different Viewpoints
Pace and extent of AI adoption
AI native companies scale with AI rather than humans
AI may be overhyped and adoption may be slower than expected
Benjamine Liu argues for rapid AI adoption and scaling, while an audience member suggests AI might be overhyped and adoption could be slower than expected.
Job displacement due to AI
AI will lead to significant job displacement, especially for entry-level roles
AI will create new types of jobs we can’t yet anticipate
Benjamine Liu emphasizes significant job displacement due to AI, particularly for entry-level positions, while Richard Socher argues that AI will create entirely new job categories we can’t yet predict.
Unexpected Differences
Use of AI in performance evaluations
AI should not be used for performance evaluations of employees
Human oversight is still needed for important decisions
While both speakers agree on the need for human involvement in decision-making, Sarah Franklin unexpectedly takes a stronger stance against using AI for employee performance evaluations, whereas Benjamine Liu’s position is more general about human oversight for important decisions.
Overall Assessment
Summary
The main areas of disagreement revolve around the pace and extent of AI adoption, the impact on jobs and work structures, and the appropriate level of human involvement in AI-driven processes.
Difference level
The level of disagreement among the speakers is moderate. While there is general agreement on the transformative potential of AI, there are significant differences in perspectives on its immediate impact, adoption rate, and the best approaches to manage its integration into the workforce. These differences highlight the complexity of the AI revolution and the need for nuanced, multifaceted approaches to harness its potential while mitigating risks.
Partial Agreements
Both speakers agree on the need for workforce adaptation to AI, but differ in their vision of the outcome. Sarah Franklin emphasizes reskilling for continued relevance in the workforce, while Benjamine Liu envisions AI enabling shorter work weeks and more leisure time.
Reskilling and upskilling are crucial as AI advances
AI could enable shorter work weeks and more leisure time
Both speakers emphasize the importance of human control over AI, but approach it differently. Kanjun Qiu focuses on customization and training people to use AI tools effectively, while Sarah Franklin stresses transparency and accountability in AI deployment within organizations.
AI agents are fancy software that can be customized
Companies need to be transparent about AI use and hold themselves accountable
Takeaways
Key Takeaways
AI is enabling one-person enterprises to scale and operate with minimal infrastructure
AI native companies scale with AI rather than humans, potentially leading to significant job displacement
Reskilling and upskilling are crucial as AI advances to mitigate negative societal impacts
AI should be viewed as a tool to empower people rather than replace them entirely
Human oversight is still needed for important decisions and relationship-building
The timeline for true one-person billion-dollar enterprises varies widely, from already emerging to decades away
Resolutions and Action Items
Companies need to be transparent about AI use and hold themselves accountable
AI should not be used for performance evaluations of employees
Businesses should focus on training employees to use AI tools effectively
Unresolved Issues
How to balance AI adoption with potential job losses and societal impacts
Whether AI is being overhyped and if adoption will be slower than expected
The long-term implications of AI on work hours and leisure time
How to regulate AI use in businesses and society
Suggested Compromises
Using AI to augment human capabilities rather than fully replace workers
Maintaining human involvement in key decision-making processes while leveraging AI for other tasks
Balancing AI adoption with reskilling efforts to minimize job displacement
Thought Provoking Comments
We've actually reconceptualized agents. What is an agent? It's this interface. It's a piece of software on your computer. It talks to you, and it talks to your computer in code. So what is it? It lets me write code on my computer.
speaker
Kanjun Qiu
reason
This reframes AI agents not as autonomous entities, but as interfaces that empower users to interact with their computers in new ways. It challenges the common perception of AI agents as replacements for human workers.
impact
This comment shifted the discussion towards viewing AI as a tool for empowerment rather than replacement, influencing later comments about upskilling and human-AI collaboration.
It used to take our teams, many teams, about two months to do a patient recruitment campaign in the process of drug development. You had teams that had to research a patient population, segment the different indications, and put together IRB-compliant, regulatory-compliant patient brochures, ads, things that touch the patients. And now it's one AI system with a human in the loop.
speaker
Benjamine Liu
reason
This concrete example illustrates the dramatic efficiency gains possible with AI, while also highlighting the continued importance of human oversight.
impact
This comment grounded the discussion in real-world applications and sparked further conversation about job displacement and the changing nature of work.
We as CEOs are going to be the first generation that manages both people and AI. So we have learned to manage. But I think the most interesting change here is actually that every individual contributor, every employee, is going to become a manager of AIs.
speaker
Richard Socher
reason
This insight reframes the impact of AI not just at the leadership level, but as a fundamental shift in how all employees will work.
impact
This comment broadened the discussion from focusing on leadership and entrepreneurship to considering the wider implications for the entire workforce.
The pace is not slow here; the pace is exponentially fast. And this is a curve where the re-skilling is not going to keep up. And so this is why, to rewind, you mentioned social media and the iPhone; those were innovations of the early 2000s. It wasn't until over a decade later that we had things like GDPR to regulate data privacy, or that the iPhone came out with its own way for us to self-regulate our addiction to mobile phones with Screen Time.
speaker
Sarah Franklin
reason
This comment highlights the unprecedented speed of AI advancement and the potential societal challenges it poses, drawing a contrast with previous technological revolutions.
impact
This shifted the conversation towards a more cautious and nuanced view of AI’s impact, emphasizing the need for proactive measures to address potential negative consequences.
Today we live in a software environment where a lot of the software that surrounds us is not that resistible: the notifications we get, the way our social media feed is structured. Our social media feed is actually the first example of an agent that is really impacting our lives, that is making decisions on our behalf. And so going into the future, to your point about people being able to make your own app, I'm actually going to take that one step further and say it's really important for most software not just to be creatable, but also editable, remixable, so that I can have a lot more control over my digital environment and my future.
speaker
Kanjun Qiu
reason
This comment introduces the important concept of ‘resistibility’ in software and AI, emphasizing user control and customization as key factors in the future of technology.
impact
This perspective added depth to the discussion about the relationship between humans and AI, shifting focus towards empowering users rather than just discussing AI capabilities.
Overall Assessment
These key comments shaped the discussion by moving it from initial excitement about AI capabilities to a more nuanced exploration of its societal implications. The conversation evolved from discussing AI as a tool for efficiency to considering its impact on the nature of work, the need for new skills, and the importance of human agency in an AI-driven world. The discussion highlighted both the transformative potential of AI and the need for careful consideration of its implementation to ensure it empowers rather than displaces humans.
Follow-up Questions
How can we responsibly deploy AI while mitigating job displacement?
speaker
Sarah Franklin
explanation
This is important to address the potential social and economic impacts of rapid AI adoption.
How can we reshape companies and organizational structures to effectively integrate AI agents?
speaker
Sarah Franklin
explanation
This is crucial for companies to adapt to the changing technological landscape and maintain competitiveness.
How can we make AI tools more ‘resistible’ and give users more control over their digital environment?
speaker
Kanjun Qiu
explanation
This is important to empower users and prevent potential negative impacts of AI on personal autonomy.
How can we improve AI literacy and proficiency across different age groups and professions?
speaker
Sarah Franklin
explanation
This is crucial for workforce adaptation and ensuring people can effectively use AI tools.
What new job roles and industries might emerge as a result of widespread AI adoption?
speaker
Mitchell Green
explanation
This is important to understand the future job market and potential economic opportunities.
How can we balance the rapid development of AI with responsible implementation and regulation?
speaker
Sarah Franklin
explanation
This is crucial to prevent potential negative societal impacts while fostering innovation.
How can we create AI tools that augment human capabilities rather than replace human workers?
speaker
Kanjun Qiu
explanation
This is important to guide AI development towards empowering humans rather than displacing them.
Are we overhyping AI and its current capabilities?
speaker
Audience member (Dan Vedat)
explanation
This is important to maintain a realistic perspective on AI’s current state and potential.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.