What does the future of AI-generated video look like? In this episode, we talk to Aaron Jones, co-founder of Yepic.ai, a startup developing an AI toolkit for video. We learn how they went from using AI to create images for e-commerce brands, to generating video avatars for training videos, to now enabling some innovative video editing use cases.
Episode Links
Connect with Aaron on LinkedIn
Episode Transcript
Matthew Todd
Hi. My name is Matthew Todd, and welcome to Inside the ScaleUp. This is the podcast for founders and executives in tech looking to make an impact and learn from their peers within the tech business. We lift the lid on tech businesses, interviewing leaders and following their journey from startup to scale-up and beyond, covering everything from developing product-market fit, funding and fundraising models, to value proposition structure and growth marketing.
We learn from their journey so that you can understand how these businesses really work, the failures, the successes, and the lessons along the way, so that you can take their learnings and apply them within your own startup or scale-up and join the ever-growing list of high-growth UK SaaS businesses. Welcome back to the podcast. Really pleased today to be joined by Aaron Jones, co-founder of Yepic.ai. Great to have you here this morning.
Aaron Jones
Thank you very much. It’s a pleasure to be here. Thank you, Matthew.
Matthew Todd
No worries, I’m looking forward to finding out more about the journey of Yepic and the path that you’ve taken so far. But to kick things off, can you just give us a little bit of background about yourself, and a little bit about Yepic as well, please?
Aaron Jones
My name is Aaron Jones. I’m the co-founder and CEO of Yepic.ai. At Yepic.ai, we’re helping people to create, dub and personalize video at scale, mainly via API.
It’s been a really, really exciting journey of going from image generation through to video creation to date. I’m really, really excited about where the space is moving. Generative AI is moving super, super fast.
Personally, I'm interested in basketball. I'm trying to learn to fly. I really, really like Barry's Bootcamp.
Matthew Todd
Nice. It looks really good. I've seen a few of the product demo videos you've got on the website; it looks like impressive tech. Certainly, you know, moving AI forward to this new generative capability that seems to be emerging at the moment in a few different areas. So it's interesting to hear a little bit more about the product, as well as that journey so far. You mentioned Yepic started as a research project. But what led to that in the first place?
Aaron Jones
I was brought in as the managing director of the project. Initially, it was an image enhancement project. The idea was, let's work with Depop and all these consumer apps where people upload photos of products to sell. Let's try and use AI to stylize the images, to enhance them, to make them all look the same, so that all the eBay images look high quality. That was the initial project vision. I came in and said, that's a bit crappy. That sucks. We can do much more than that. I had this very, very clear vision of where the world was heading with generative AI, even before we sort of got there.
The very first product that we started to develop was a stable diffusion product. We were doing stable diffusion before it was even a thing, possibly because of our incredible research team. My co-founder, Dr. Yannis Kazantzidis, is a child prodigy genius of all things research and AI. He loves a challenge. So the initial iteration of the project was, let's create images for e-commerce, and not just images to show on the website, but retargeting images. So imagine Matthew has been shopping: Matthew has been looking at a pair of high heels for his wife, a jacket for himself, some toys for his kids. The idea was that we would generate an image that brought all of these products together in one. I know you're probably thinking Matthew in high heels with a jacket and some toys wouldn't be a very good ad. That's actually one of the problems we encountered. It was really hard to find coherent patterns in what people are wearing.
The project evolved and moved away from this stable diffusion-type model, where we generate images for e-commerce and for advertising, towards video. I think the fundamental reason was that it's really, really hard to prove that an image increased revenue. In e-commerce, you have the website side, which is owned by the brand; up to the conversion point, the website's in control and the brand's in control. The cost base for creating imagery is fixed: brands spend so much money each year on product imagery, and that is the amount they spend. So you either need to save them money and make the images for less, which is really hard with stable diffusion because you need lots of input images to get a high-quality output. Or you have to prove that these wonderful new crazy images increase conversion. But on the conversion side, the distribution channels, the advert channels, the retargeting channels, they're often owned by third parties. They're owned by Google, by Facebook, by Amazon. On that side, everyone's fighting for the attribution; everyone wants to say, 'it was me, I got you the conversion'. That's what those businesses are built on.
So for you to come in as an external vendor and say, 'we want to dynamically give you images and increase the conversion' was very, very, very hard. Also, the speed at which you have to serve these images is sub-400 milliseconds. So you need to have an image ready for every single person that you're cataloguing, to be shown at any time, because you can't generate the images in sub-400 milliseconds on demand and serve them via the ad network. So we moved away from that to video. Video was a much bigger market: video production is a £170 billion market, and 20 billion of that is making awful, cruddy corporate training videos that everyone hates.
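To make the constraint Aaron describes concrete, here is a minimal sketch of the design it forces: because a generative model can't respond inside an ad network's latency budget, the images have to be generated offline, one per catalogued user, and the serving path becomes a pure lookup. All names and the two-second stand-in for the model call are hypothetical, purely for illustration.

```python
import time

# Stand-in for a slow generative model call (seconds, not milliseconds).
def generate_personalised_image(user_id: str, product_ids: list[str]) -> bytes:
    time.sleep(2)  # far too slow to run inside an ad auction
    return f"image-for-{user_id}".encode()

# Precomputed store: in production this would be a CDN or in-memory cache,
# not a plain dict, but the shape of the solution is the same.
image_cache: dict[str, bytes] = {}

def precompute(users: dict[str, list[str]]) -> None:
    """Run offline, ahead of time: one image per catalogued user."""
    for user_id, products in users.items():
        image_cache[user_id] = generate_personalised_image(user_id, products)

def serve_ad_image(user_id: str) -> bytes | None:
    """Serving path: must return well inside ~400 ms, so no model call here."""
    return image_cache.get(user_id)
```

The cost of this design is exactly what Aaron points at: you pay to pre-render an image for everyone who might be shown an ad, whether or not the ad ever serves.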
That's where we began: these awful training videos that everyone hates, that nobody likes to make, and nobody likes to watch. We started developing an AI that would turn text scripts into talking avatars with different backgrounds. That really has been the bread and butter of our business since we began. More recently, we've been working in the dubbing space a lot more, trying to take people's existing content. Number one, allowing you to edit it and change the words in the video. Number two, allowing you to dub the whole video into any language, so you can take an existing training video, edit it slightly, and then localize it to different markets around the world. We're seeing some huge success here working with AstraZeneca, and also with companies like Axions.
Matthew Todd
Interesting. Going back to that e-commerce use case for the images, I can certainly see that attribution problem. All those ad platforms want to take full credit for the attribution, because they want you spending more money there. So they're doing whatever they can to reinforce that these are the metrics you should be looking at on our platform. They're our metrics; you should be putting more money in, and this is where you should be putting it. So I can certainly see how that was problematic.
Aaron Jones
The flip side to that is, if there's anyone listening out there right now that has been playing with stable diffusion and has this idea, it's a great way to quickly flip a company. If you can get this into the ad networks, deeply embedded, and get the images generated in milliseconds, there's an opportunity to be acquired quite quickly. Because Facebook, Google, they're not going to let you sit on their platform for long. If this works, they are certainly going to acquire you, because you become a vital part of proving out the ROI.
Matthew Todd
Yeah, I see. The route is probably to be embedded as part of that ecosystem rather than an external party to it.
Aaron Jones
Compared to our system years ago, you couldn't generate images in real time, in sub-400 milliseconds, whereas now you can. Our whole video generation pipeline is real time. Generating images now is quite easy. So I think there is a huge opportunity there for someone that wants to go after that market.
Matthew Todd
So were you effectively just too early from a tech perspective with that product then for it to be profitable and scalable?
Aaron Jones
Yeah, it was just a bit early. Also, there was a large amount of risk around brand reputation. When you're generating stuff in real time and there's no one checking what's being made, you open yourself up to all sorts of brand reputation risks. So generally, brands were not happy about the idea of an ad network serving images in real time, sharing images with customers that they hadn't approved yet. So there was that side of it, too. But I think the really exciting thing is that, in the process of going through this journey, we learned there was a much bigger business opportunity in video. Had you told me three or four years ago that stable diffusion would be open-sourced, and that everyone would be able to generate images instantly with very little coding knowledge, I wouldn't have suspected things would move this fast. I imagined this would come, but I hadn't imagined it would all happen as quickly as it has.
Matthew Todd
It does seem to be a very fast-paced area of development. So, having seen that opportunity for video, how did you go about validating it, from a business concept perspective, as well as from an implementation perspective, that you could actually deliver on it?
Aaron Jones
Sure. I think that when you're doing anything in deep tech, and you're developing a foundation model, it's really important that you count the wins as you go, because there'll be many, many small wins and big wins. Obviously, as a business, you have a milestone you need to hit, like, we need x number of people to pay for this. But with a deep tech project, it's not like, if we can just make it, people will buy it. Everything happens in iterations. Our whole research team has rapidly changed. We're now working with some of the world's best researchers; we have a deeply integrated partnership with CVSSP. It's changed the way we look at the space, it's changed the way we look at the problem.

Going back to the idea of small iterations: I think at the beginning I used to get very frustrated as a founder, because I'd say, 'Okay, there's a research paper here that says you can do this at 128 x 128 pixels. Now I want you to do this, but in full HD'. The research team would look at me like I was a madman. I'd be frustrated, because I'd think, well, if we can do it at a low resolution, I'm sure we can figure it out at a high resolution. But the research team would keep iterating, looking at different approaches and bringing different approaches together. We rebuilt the whole pipeline five times.

Before we actually commercialized the product, we used to generate videos on a landing page where anyone could make a free video. We'd ask people to type in their script, pick an avatar, and then give us some feedback on the video. If they gave us feedback, they got 10 free videos as a thank you. This feedback was how we measured our likelihood of commercial success, before we commercialized. We asked people: how good was the lip sync? How good was the quality of the overall video? Would you pay for it? How much would you pay per video? What would your use case be? We were tracking these five metrics across maybe eight or nine months, and about six months in, we got something like 93% willing to pay. We got super excited and thought, this is amazing.

So we very quickly launched the product, initially on Webflow, using a custom Webflow form that looked like a SaaS app, but it wasn't. I wanted to keep the development team as free from front-end development and customer stuff as possible; I wanted the research and development team focused on the algorithms and on making everything work. So I worked with a freelancer to make the product essentially in Webflow with a payment gateway. Then we said to everyone that said they wanted it, 'you can buy it, here it is', at what felt like a really good deal. The 93% of people that said yes turned into about 20% of those people actually paying. It was okay. People would log in, make a few videos, and say, 'oh, it's good, but not good enough'.

I think this was the big learning: if you're going to go in and replace a human-centric process, in this case video production, the quality has to be at a point where it's good enough to replace the human talking. I think we were in a bit of an uncanny valley. It took another three months to get to a point where we had about 40% of people willing to pay. That was really the point we started tracking and started making revenues. A year ago, our churn rate was 80%. Now our churn rate is 10%, and churn in our top MRR percentile, the top 25% of MRR, is less than 1%.
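For illustration only, here is a tiny sketch of the kind of pre-launch survey tracking Aaron describes: a handful of feedback fields per generated video, with the willing-to-pay percentage watched over time. The field names are invented, not Yepic's actual schema.

```python
from dataclasses import dataclass

@dataclass
class VideoFeedback:
    # Roughly the five questions Aaron mentions (field names invented).
    lip_sync_rating: int    # e.g. 1-5
    overall_quality: int    # e.g. 1-5
    would_pay: bool
    price_per_video: float  # what they'd pay, in GBP
    use_case: str

def willing_to_pay_rate(responses: list[VideoFeedback]) -> float:
    """Share of respondents who said they would pay, as a percentage."""
    if not responses:
        return 0.0
    return 100 * sum(r.would_pay for r in responses) / len(responses)

# e.g. the roughly 93% signal that triggered the launch:
sample = ([VideoFeedback(4, 4, True, 5.0, "training video") for _ in range(93)]
          + [VideoFeedback(2, 3, False, 0.0, "just curious") for _ in range(7)])
print(willing_to_pay_rate(sample))  # 93.0
```

As the story shows, a stated-intent number like this is an upper bound; the real conversion signal only arrived once people were asked to pay.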
So the business really has evolved, but the quality of the output is so important to us and to our customers. I think this goes for anything in AI. Take Clarifai, when they started doing classification: they had a similar story. People were all excited about the idea of automating classification, but it wasn't until their product reached the point where it was as good as a human that it became viable.
Matthew Todd
So I guess it took that iteration with real data, and real feedback as well, to get to that point, to make sure you were there for the customer base you'd managed to attract.
Aaron Jones
Today, it's a different problem we have. Today it's feature requests: everyone wants a new feature. How do we manage customers' feature expectations versus the direction the business is going in? We get people asking us to build all sorts of SaaS-type things every week. Saying no has been one of the best lessons. Sometimes you just need to say no. It's better to have a really focused cohort that want to use your product, and that can, you know, promote you to other people, than to have a million features that make the product useful to everyone. Now we focus on our API. We focus on producing really broad, general features in the API that have multiple use cases, so that we can empower people that already have a web app, that already have a video-based product, to incorporate some video into their product. I just wish I knew years ago what I know now.
Matthew Todd
Sounds like you've been through a process of your tech team focusing on that AI video generation quality. Then you started to move into the workflow of people requesting features, because they're actually trying to use it to achieve something. Now you've gone back to the API, so that you are not their workflow tool, but you fit into existing workflow tools, and potentially new tools that come to market as well.
Aaron Jones
100%. I couldn't have said it better myself; that definitely is the business goal, the business direction. We're seeing a lot of commercial success there as well: a lot of huge enterprises are using us now and having discussions with us on a daily basis. I think for us, we had to really focus on what was important. Number one was reliability: we need to be able to reliably, consistently make videos 24/7. Number two was the speed of production. We now have this whole new pipeline that allows us to work in real time at scale. We don't actually deliver the videos in real time, though, because it's very expensive; we do them in a few minutes. The third thing is the ability for anyone to create an avatar of themselves instantly. So we allow people to upload a photo, we call it talking photos, and you can instantly animate the photo. So your CEO, your head of marketing, your significant person at your company, can speak to your customers, in your tone of voice, with your voice clone, and reach people in a more personal way. This technology feeds very closely into our vid voice technology, which takes a video feed and animates just the lip area. Actually, it animates the whole frame, but we focus on the lips and on dubbing at a really high level.
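As a sketch of what an API-first, minutes-not-milliseconds pipeline like this might look like from a caller's side: submit a script and an avatar, get back a job ID, then poll until the render finishes. The base URL, endpoint paths, and response fields below are all hypothetical, not Yepic's actual API.

```python
import time
import requests

API_BASE = "https://api.example.com/v1"  # hypothetical base URL
API_KEY = "sk-..."                       # placeholder credential

def create_video(script: str, avatar_id: str) -> str:
    """Submit a render job; returns a job ID (hypothetical endpoint)."""
    resp = requests.post(
        f"{API_BASE}/videos",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"script": script, "avatar_id": avatar_id},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["job_id"]

def wait_for_video(job_id: str, poll_seconds: int = 15) -> str:
    """Poll until the job completes; returns the finished video's URL.

    Rendering takes minutes rather than milliseconds, so the caller polls
    (or, in a real integration, registers a webhook) instead of blocking
    on a single request."""
    while True:
        resp = requests.get(
            f"{API_BASE}/videos/{job_id}",
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        )
        resp.raise_for_status()
        job = resp.json()
        if job["status"] == "done":
            return job["video_url"]
        if job["status"] == "failed":
            raise RuntimeError(job.get("error", "render failed"))
        time.sleep(poll_seconds)
```

The async shape is the point: it lets the provider trade delivery latency for cost, exactly the trade-off Aaron describes.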
Matthew Todd
From a business point of view, then, that's obviously quite a different type of customer you're reaching. It could almost be B2B2B or B2B2C, or any number of B's and C's in the middle. So what was that process like, moving in that API direction, having the confidence to start selling to those kinds of customers where, I'm guessing, the acquisition of one customer is going to give you a lot more users of the technology and the platform? What was it like to start testing that, to see whether there was any merit and traction to it? How did you go about positioning it from a business perspective?
Aaron Jones
It's a good question. We knew about a year ago that we had to do something different. There's a bit of a feature race at the moment. There aren't that many companies in synthetic video. There are many more synthetic audio companies, because there are open-source repos; there aren't high-quality open-source repos for video creation or text-to-video. Well, not yet, at least. So there aren't many companies doing what we do, and all of them seem to be going after the same goal, which is to disintermediate the market: take all the existing video editing software and replace it with video editing software that also has the production side to it.
Matthew Todd
I know what you mean. I've used many tools like that, that just take a small problem. It's good, because they're good at what they do. But then it often means, 'oh great, I want to do this little bit of extracting some audio and putting it over the top of this little video snippet; now I've got to take that bit of the problem that they solve and put it into my existing workflow'. It makes it more complicated to deliver the end result in the end.
Aaron Jones
I feel like everyone is just racing now to create more features, rather than focusing on a specific use case and creating the best studio for that use case. While that is one of the best ways to extract value, with a vertical approach, it's not a race that we want to join. So about six months ago, we checked out and we said, hey, we have a whole bunch of improvements that we're doing for our studio, but everything we do is about empowering people and selling the product to other tools. We exited the SaaS race, if that makes sense. Ironically, since we exited the SaaS race, our SaaS revenues have been up 50 to 100% month on month. An unintended consequence, but we started focusing on features that weren't the same as everybody else's. They weren't the features that customers were necessarily asking for; they were features that we had planned strategically to deliver our API roadmap. I think over time, those have become features that people really need and want, which has made our studio app appeal to a certain type of user. Building an API that empowers people to do more and be more creative is definitely the goal for us, because then we empower more users in a way that you couldn't with a SaaS offering. Everyone's competing on features, and once everyone has the same features, everyone has to drop their prices. It's not a race we're gonna be part of.
Matthew Todd
It does become a race to the bottom. You do end up with a lot of very similar studio tools that don't really add a lot. But providing the platform that can power those types of tools, as well as power very specific use cases and fit into existing workflow tools, I can see how that is a very powerful thing. It's similar to the way Stripe managed to disrupt online payments. It's not like you couldn't take online payments before, but their API integrated in a way that made sense for the developers of those platforms and tools. It actually added a lot of capability and value.
Aaron Jones
You become embedded in someone's ecosystem. You can add more value when someone is using the tool to solve the problem they have than by coming in, trying to learn about the problem, then solving it and taking the customers away. It's an easier, faster way to solve problems. I think that's how you create a valuable business: you solve problems.
Matthew Todd
I guess the interesting thing is the emergence of those new web-based editing tools. Canva obviously does it for graphic design, and there are loads of other tools that do it for video and everything else as well. But for the existing players in this space, of which there are numerous, as well as for the people innovating, I can see how what you provide at an API level makes it accessible to both. It makes it accessible to the new startups that are trying to do something different and unique, as well as to those existing players that are wondering how on earth they can transition their well-established offering, because it starts to become threatened by, you know, some of these little pockets of things that pop up. In terms of driving Yepic.ai forward, how do you see that panning out? Is it just continually working on the research side? Do you have a lot of customers asking for new API features or new levels of scale? What does that look like to you?
Aaron Jones
So right now we're at a really interesting juncture, where we're about to launch vid voice in public beta. We've been working on a private beta for a while. For us, the vision here is much bigger than just being a video company. Our mission is to connect the world through shared language. I really believe that in the next stage of the internet, Internet 3.0, the metaverse, whatever you want to call it, the way we interact is going to be different.
I think Facebook has historically gotten the metaverse very, very wrong. I don't think the metaverse is going to be a place where people go; I think that's the misconception people have. The metaverse is going to be like an augmented reality layer of data on top of the lives we live. A layer of data that is controlled and owned by all the people that are there. That's really secure. That's connected by blockchain, where ownership and authenticity are the cornerstones of everything. I don't think it's going to be a world powered by crypto; there will probably be one or two de facto coins that everyone uses. I don't think cryptocurrency as we see it today will even exist.
But I do think there will be more connection, more human connection. I think walking into a room wearing a pair of glasses that reminds you of who you're connected to on LinkedIn, and of the notes you've written on people after your last interaction, would be a more powerful tool to connect people than sticking on a headset, alone in a dark bedroom, and talking to people's cartoon avatars. I think gaming is huge. I think gaming will become more immersive, gaming will become better. But a lot of people are not interested in gaming.
I think the metaverse will touch everyone's life in a way that's much more human and a way that connects us. We want to be the technology layer that connects that. So imagine you've got these glasses on now, and imagine you and I were talking and I only spoke Chinese. Imagine if the glasses could re-render the lips in real time so that you and I are able to have a conversation, and I can hear, see and understand you, and you can see, hear and understand me. These are the types of interactions that we want our technology layer to enable and empower. We also want companies to be able to communicate with people everywhere, irrespective of language. I think building tools that enable this type of mass personal communication is really, really important.
Matthew Todd
That's interesting. A very ambitious, but very worthy vision. Thinking about what you were saying about the current capabilities you've got in terms of generated video, one of the things you said was tone of voice. So it's not purely a video whereby you can take that text and it is good enough to consume; it actually carries across the tone of voice of the original person that created the avatar in the first place. I think if that is something you pull off, and pull off well, it changes the time and effort required to engage with a particular type of communication, whether that's with an employee, a customer, or whoever it may be. It allows you to scale something that could not have been scaled before.
Aaron Jones
Absolutely. I think that scale thing is really the key thing. You know, in web 3, everything's gonna be immersive, everything's gonna be personal. Generative media allows everyone to experience the same thing in a personal way that resonates with them. You see it now with generative art. I think this really is the future. It's really refreshing that, all of a sudden, a lot of people see the world I've been seeing for a while. I think everyone in the synthetic media space has felt the same way for quite some time. People were saying it's just a fad, AI won't be on par with human creativity. Now, if you see the outputs from some of these open-source models, it's like, wow, this merits attention.
Matthew Todd
Even if you look at consumer behavior, adopting different types of tech and devices, you're looking at Alexa and all those other similar devices, hoping she doesn't now start talking to me. The rate at which people now engage with voice technology, admittedly, it's very rule-based at the moment, there's not a lot of AI there, but it's a new style of communicating with devices that people now really take for granted.
Aaron Jones
I think devices will have more human interfaces in the future, and I think technology like ours is going to play a big, big role in that real-time communication. With an API-first approach and real-time generation, we can empower and enable use cases like that. Actually, we started working on a project with an airport group just as COVID kicked off. The project got shelved because they had to conserve cash, but the idea was: how can we make avatars available? How can we make information available to everyone when they're visiting the airport, through interactive screens, through avatars that could communicate with people? That was the idea. Our avatars can speak 68 languages. How do we then enable one of your customers at the airport to speak to someone and get the information they need in their own language? I think there are all sorts of exciting opportunities here, because there's nothing like a human face, right? Even though it's digital, there's something nice about speaking to a face versus talking to a box.
Matthew Todd
I can see that's a very, very different and evolved user and customer experience. I'm thinking of chatbots on websites, you know, still relatively primitive; you see a few people evolving that, but it's completely obvious what's going on. So I think the ability to take those kinds of interactions to the next level would certainly be welcome. But one thing you mentioned was trust. I'd be interested to hear your perspective on deepfakes and those kinds of things. How do you prevent this kind of technology from being abused as it improves?
Aaron Jones
So I mentioned earlier this idea of talking photos: being able to upload a photo and then animate it, give it your quirks, your expression, your style. For that process, we require someone to first of all upload a photo, and secondly to record a short video. The short video is the person giving permission. Then we compare the face on the video to the face you've uploaded, and we make sure that they match and that the person has given consent. So this is our first line of control. It was similar when we used to make avatars for customers: when we recorded 20 minutes of video, we used to ask people to verbally give consent in the video. We don't allow people to just upload anything. We did a project for the Jubilee where we put the Queen's face on and cloned her voice, but we put very strict restrictions in place on what you could make the Queen say. She would only invite you to a certain location or a certain place at a certain time for a certain event. It was very templated. So I think there are two ways to look at this. There's either our controlled approach, where you give people parameters to work within, and you give people reassurance that their data will be handled in a way that's appropriate and really secure. Or you create a community where everyone can do anything. There's one app called fakeyou.com, where anyone can clone anyone's voice; you can upload the audio, it's community-driven, and you can type in what you like. I think there are these two schools of thought. One is, let everyone be creative, let everyone create what they want. The other is, put a controlled barrier in place and make sure that people are safe. We've chosen the latter, because I think businesses want that, and I think humans need that.
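To illustrate the first line of control Aaron describes, here is a minimal sketch of matching an uploaded photo against a frame extracted from the consent video, using the open-source face_recognition library. This is an illustration of the idea under those assumptions, not Yepic's actual implementation, and a production system would add liveness checks and verify the spoken consent as well.

```python
import face_recognition  # open-source wrapper around dlib's face embeddings

def consent_matches(photo_path: str, consent_frame_path: str,
                    tolerance: float = 0.6) -> bool:
    """Return True if the face in the uploaded photo matches the face
    in a frame taken from the consent video."""
    photo = face_recognition.load_image_file(photo_path)
    frame = face_recognition.load_image_file(consent_frame_path)

    photo_encodings = face_recognition.face_encodings(photo)
    frame_encodings = face_recognition.face_encodings(frame)

    # Reject if either image doesn't contain exactly one detectable face.
    if len(photo_encodings) != 1 or len(frame_encodings) != 1:
        return False

    # Lower embedding distance = more similar; 0.6 is the library's
    # default tolerance for "same person".
    return bool(face_recognition.compare_faces(
        photo_encodings, frame_encodings[0], tolerance=tolerance
    )[0])
```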
Matthew Todd
It makes a lot of sense. I can see how that naturally fits into the process of creating those avatars, embedding the permission as a key part of that process. And I can certainly see, providing an API like that, why your clients, and their clients as well, would want some guarantees around it.
Aaron Jones
Twitter is interesting. Recently, Elon Musk talked about parodies and about how you can't impersonate someone on Twitter unless you put #parody. It opens up the floodgates to this type of content. I think there is a place for, you know, humorous content. Spitting Image, for example, I think is brilliant. It's really funny; they capture the essence of the character, but it's a person creatively impersonating, and it's a person using a puppet. It exaggerates certain physical features in ways designed to make people laugh. In that instance, it's very clear that it's a parody, it's very clear that it's comedy. But when you have a photorealistic face of Donald Trump, or a world leader, saying stuff, it really is tricky. We're seeing in Russia right now, with Ukraine, there's a real information war. I haven't seen too much deepfake stuff being spoken about, but I think the information wars of the future will be very much driven by this type of technology. I'm sure that, you know, ministries of defense around the world are investing in this technology, because it is a powerful tool to communicate ideas, to build trust, and also to destroy trust. I think that as a group of people, we need to make sure this technology doesn't get abused and doesn't get misused.
Matthew Todd
I suppose you can argue the same of any and every tool and medium that has lowered barriers to communication, right back from the printing press and newspapers to the internet. Every single communication tool has the ability to be misused, and has always been misused, as well as being useful for many good things. It's no different with the evolution of technology; it's just a different implementation of the same problem.
Aaron Jones
When trains first came to America, there were posters of trains with demon-like faces destroying and terrorizing communities across the Northwest of America. The rant was that trains were gonna ruin our communities and destroy our lives, when actually they just enabled more trade.
I think in the creative industry right now there is this school of thought around creative AI, that it is gonna ruin the job of the artist, that it's going to destroy income streams. I think there are two things here. One is that there's an issue around the data these things are trained on. If people's artwork gets tagged, people should be paid something like a royalty, because right now, with most of these tools, you can type in an artist's name and ask for a picture in the style of x. You can put someone's DeviantArt handle in the description to stylize the photo. So obviously, the data that's been scraped has also been tagged as being by the artist. I think that brings up problems, because it allows people to recreate art in a certain style, which I think is unfair; there should be remuneration for artists when they've been included in a dataset. On the flip side, we're seeing prompt artists. We're seeing a new type of wordsmith artistry that people are commercializing, that people are making money from. So there's a whole industry being created here. More than that, with any tool, you need people to direct it. In the same way as going from pencil to watercolors, watercolors to acrylic, and all the different art mediums that have developed over time, or going from chipping away at a marble structure to cutting one with lasers, you still need artists to design and steer and guide. I don't think the role of the artist is going anywhere; I think it's just rapidly evolving. To the artists of today, I would recommend that, you know, if you are in digital art, you start using these tools and start to think about how you could manipulate them and use them to craft new types of pieces that represent your style.
Matthew Todd
I've seen AI copywriting tools for all kinds of marketing copy and other types of copy, and a lot of people getting quite vocal and angry about the existence of these tools: oh, they can't replace X, Y, and Zed; they're doing this, they're doing that. I think, of the people that have that as a skill and can probably create better copy themselves, the ones I know that are a bit more progressive in the way they're viewing this are saying, well, no, this is just a tool that can, you know, maybe get me 60% of the way there, or some percentage of the way there, across different types of client projects. Then it means a bigger percentage of their time is spent doing the higher-value work they are skilled to do, rather than some of the lower-value, more tedious, more manual stuff they'd have to do to get to that point in the first place. So from their point of view, it is a progression and an improvement.
Aaron Jones
A lot of the stuff that's produced is not going to have the context of your business and the context of your customers. So while you can generate an article that will give you lots of information and facts, it still takes a real wordsmith, a real artist, to take this output and tailor it to the audience it's designed for. Yes, you can use it to manipulate algorithms for SEO purposes and whatever else; yes, there are lots of immediate benefits. But if you really want to communicate with people, it does take a bit of craft.
Matthew Todd
This has certainly been a very interesting conversation, finding out about your business and how you've developed and iterated to this point. Your vision of the future is massively interesting, and one that I can certainly see coming to fruition as well. It is a big vision and mission, and it makes a lot of sense. Before we wrap things up, is there anything for our audience of startup and scale-up founders that you want to get across? Any interesting lessons learned, or potential opportunities you see that they may be able to take advantage of?
Aaron Jones
I've given so many nuggets away in this chat. Somebody once told me, a wise man learns from his own mistakes, and an even wiser man learns from the mistakes of others. That parable is so relevant to the open-source community of today. There are so many people building stuff, and when you have a really active and really engaged open-source community, there are so many answers out there; so many people will have encountered the same problems. I was speaking to a founder the other day, actually, they're currently on Entrepreneur First. They have this idea to create a generative media company. They were telling me about their business and their vision, and I said, have you looked at this? I started sharing repos with them, and sharing threads I'd been reading where people were talking about these things, and they were just blown away that someone else was doing their idea. There's a whole community of people, and when something's out in the wild and available, there will be people trying to build something meaningful in the space. Open source is really, really powerful. We're actually thinking of open-sourcing our vid voice tool, because supporting multi-language translation in every language is quite an undertaking. Right now we support five languages really well, but ten not so well. The goal is to cover the world, to help them all communicate. Open source can be an incredible tool. There are so many answers out there, so many cool projects. Just join Discord groups, join chats, and find out what's going on.
Matthew Todd
Excellent advice. To summarize our discussion, which for me has been massively interesting, and I'm sure our audience will find it interesting as well, I think it pulls in a few different threads. But, you know, what you just said certainly resonates. Something I would advise founders on as well is: don't be afraid to do the research, don't be afraid to go out there. Really find out whether anyone has had your idea before, whether anyone has tried your idea, and what other competitors there are. I see far too many founders doing a bit of shallow research, and then getting a bit scared off taking it further to another depth. The result is that the offer isn't as compelling for sales as it really should be. That blindness leads them down a particular path, as opposed to the journey you've described, where you saw different problems, you saw some things that did work and didn't work, but you were then committed to solving a bigger problem. I also think it led you to different ways of adding value, so that the product you started with is certainly not the product you've got now. Founders that are a bit blinded, and not willing to do that level of research, wouldn't have followed that same kind of journey.
Aaron Jones
You have to just keep moving. There have been many dark times at Yepic, where we've been on the verge of running out of money, or grant funding hasn't been paid. There have been quite a few dark moments. I think you really find out what you and your team are made of when the darkness comes.
You need, as a founder, to be committed to facing the darkness, because so many challenges will come your way. So many things won't work out. You will find yourself in situations where you really feel like giving up, where you lose your biggest customer. I've had all these things happen. We just recently signed up a gigantic user, and in the week of onboarding, the whole video production pipeline broke down. We had this massive, massive technical meltdown. It took the tech team about a week to fully fix it. I was bricking it.
It's really about facing these tough situations, just really pushing through, and making sure that you and your team are prepared for them. Maybe ask your team: what would you all do if we were to run out of money tomorrow? See where people stand. It's good to have these discussions. It's good to get people thinking about whether they really believe in the vision and the mission, especially at the beginning.
At the beginning, Yepic had a third co-founder. We love him, he's great, and he did an incredible job. But startup life, the risk and the uncertainty, just wasn't for him. It's not for everybody. It's better to find out early on that your co-founder doesn't like risk than to find out when the chips are down. That's my advice.
Matthew Todd
That's great advice. It does come down to that vision as well. You hear so many people talking about that, and people just brush it off as if it's some motivational exercise. Without a core that you can genuinely stick to, and are passionate about sticking to, I think it becomes, or can become, a matter of just chasing whatever you think has the biggest potential that week or that day. I've seen so many businesses tie themselves up in knots, chasing whatever thing they claim to be passionate about this week based on the most recent conversation they had with Company X, Y, or Zed, or investor A, B, or C, or whoever it may be.
Aaron Jones
That's important though, right? Before you find your North Star, you need to map out the stars. You need to know which one is the North Star, right? You can't map them out if you don't follow them a bit and see where they lead. But I get what you mean: at some point, you need to say, I'm all in.
Matthew Todd
It's fine to evolve that vision. What I'm referring to is where, perhaps, the vision is a bit too shallow. I think that's where it's not really a vision; it's a goal you've got, rather than something you're genuinely trying to chase and solve. There's no real core problem you're trying to solve in those instances, I think.
Aaron Jones
I would say, though, if you just want to make some money, then that's a great route to go down. If you want to do something meaningful that's gonna, like, exit at a big valuation, it's probably not the right route to take. I follow some really cool low-code, no-code people, and their life is spent just taking a cool new thing, wrapping it up in a product, and selling it.
Matthew Todd
That is their passion, that is their vision. Just don't pretend that you're trying to do something that you're not, I think, really.
Aaron Jones
Also, another bit of advice: don't attend a million networking events. Years ago, when I was newer to startups, I used to have this idea that you had to be at all the events to meet people. Actually, meeting people and making those connections is not how you build a big business; it's what happens in the follow-up afterwards that makes an impact. Really, you should only be out meeting loads of people when you have a product that you need feedback on, or you have a product or a business that you're selling and shipping. I'd say at the beginning, when you're trying to develop an offering, you should go after people in a really personal way. Find people on LinkedIn, ask friends for introductions to people doing interesting things. Be very particular about your time and try to meet the right people, on your terms, rather than going to an event hoping you'll meet people. That was a big, big lesson for me. I wasted a lot of time attending a lot of events, just because I felt that being there would be helpful.
Matthew Todd
If you do those events, definitely do them strategically, with a goal in mind that you're trying to achieve and a way to actually do that. But like you say, if you turn up hoping to meet someone interesting, even that person you wanted to meet that you don't know of yet, your chances of meeting them at the right place at the right time are pretty small anyway, aren't they?
Aaron Jones
Well, thank you very much, Matthew, for your time. I've shared a lot today. Our conversation has been a bit of a tapestry; it's gone everywhere a little bit, but it's been really interesting.
Matthew Todd
Thank you for taking the time today as well. It's been really interesting to hear about Yepic itself, and its foundations, which I think people could and should take a lot from, as well as the vision for the future. I hope we have educated as well as inspired a few fellow founders listening to this episode today. I certainly wish you the best of luck with the ambitions you've got for Yepic, and please do let us know how it evolves. Thank you for joining me on this episode of Inside the ScaleUp. Remember, for the show notes and in-depth resources from today's guest, you can find these on the website insidethescaleup.com. You can also leave feedback on today's episode, as well as suggest guests and companies you'd like to hear from. Thank you for listening.