In this episode, we talk to Alberto Rizzoli, CEO of V7, an innovative company building an AI platform that opens the power of visual decision-making to all. We learn how the founders' technology has grown beyond a tool for the visually impaired and is now transforming workflows across many industries.
Hi, my name is Matthew Todd, and welcome to Inside the ScaleUp. This is the podcast for founders and executives in tech looking to make an impact and learn from their peers within the tech business. We lift the lid on tech businesses, interviewing leaders and following their journey from startup to scale-up and beyond, covering everything from developing product-market fit, funding and fundraising models, to value proposition structure and growth marketing. We learn from their journey so that you can understand how these businesses really work: the failures, the successes, the lessons along the way, so that you can take their learnings, apply them within your own startup or scale-up, and join the ever-growing list of high-growth UK SaaS businesses. Hey, and welcome back to the podcast. Really pleased to be joined today by Alberto Rizzoli from V7. Great to have you here on this wonderful rainy day.
Great to be here, Matt. Looking forward to telling our story and chatting.
Yeah, absolutely. I’m looking forward to finding out more as well. Do you want to kick things off by giving us a little bit of background about yourself and telling people what V7 is and what it does?
Yeah, this is my second company in the AI space, specifically in computer vision. V7 is a platform for labeling image data: you effectively tell an AI what’s in a picture that’s relevant to you, so that it can learn new objects or improve its performance over time. The goal is to help companies from multiple industries, from healthcare to industrial inspection to SaaS companies that simply process a lot of images, create AI that can process this unstructured data, whether images or videos, so they can understand its contents and the relationships between objects, and effectively study them as humans would.
I started it in late 2018 with my co-founder, Simon Edwardson. Simon and I were also co-founders of a previous computer vision AI company called Aipoly, back in San Francisco. So V7 is a continuation of the learnings and the mistakes we made at the previous one. Before that I had another startup that died; it never really got to any VC funding level. So I’ve always been trying to start my own thing, ever since I was a uni student. I never quite had a real job except for internships; I interned at Google. Nerds like myself like the idea of compounding effects as much as possible, and I feel like starting your own company is one of the best ways to compound: either starting, or joining really early and getting that initial equity.
I’ve always been motivated by big wins. From an intellectual perspective, to scratch an itch, I find that the current state of deep learning makes it the most exciting piece of technology humanity has in its hands, and one I can realistically mold into something impactful. So I wanted to be part of that revolution, which is likely going to last for the next five to ten years. Then maybe we’re going to move away from the silicon paradigm we’re in, but it feels like the right timing for this space is now.
Yeah, interesting. Why start your own company rather than joining an established startup that has got some funding in place and potentially the ability to teach you more things? Why dive straight into the deep end?
From an internal perspective, there’s a bit of ambition and a bit of arrogance. The ambition: you see these young entrepreneurs out there who make it, and you have this terrible bias because you only see the ones that make it through, who then go on the news for having started something that reached unicorn status. You think that you can do it as well. So there’s a bit of ambition, and a bit of arrogance in believing that you can learn everything on the job.
Ironically, it’s sort of a self-fulfilling prophecy, because you do end up learning the job; you end up making a lot of mistakes. If I had joined a successful tech company beforehand, say Apple, I would have avoided this mistake, or I would have known exactly how to do that. On the other side, though, I think it also creates a lot of cushion around you, and being an entrepreneur is sometimes about being unreasonably risky with your career and your choices. V7 is now a team of almost 100 people, and you realize that sometimes you’re a good decision-maker when you are a little bit foolish and a little bit risk-friendly. Otherwise you overanalyze every situation and try to make 100 people happy.
On the other side, I’m also the fourth generation of entrepreneurs in my family. My great-grandfather was an orphan who started a small print shop which, at its height, became the second-largest publishing company in Europe, publishing anything from books to magazines. Then cameras were invented, so it moved into cinema and movies. That was inherited by my grandfather.
Then my father started a movie publishing company. None of the three ever used a computer in their lives, including my dad, who passed away without ever owning a phone; he used my mom’s phone for calls, and nothing else. I was the tech guy in the family instead. So computers came first for me, and I always had a fascination that was never shared by them.
I think that’s really interesting: definitely a family history of entrepreneurship in a vision-related field, albeit quite different from a technology perspective.
Yeah, indeed: from reading things, to seeing things, to analyzing what you see. At the end of the day, it was also technology when my great-grandfather started the printing business: he imported a brand-new machine design from Germany to make the first grayscale images in books for Italians. Italians in the period between the two great wars were, to a large part, illiterate; they were learning to read through books, and the presence of images really enticed them to dig into this cool new medium. The TikTok of the day was the novel.
Likewise with cinema, there have been revolutions that we consider boring today. But the GPT-3 of that era was Technicolor. It felt like magic back then, and it took a lot of effort to put together. Today we’re just chasing the next thing that feels magical to us. That motivates us to go to work every day, because it’s cool. In ten years’ time, maybe the stuff we’re working on today will no longer be cool.
The past few months especially have seen an explosion of AI-related content and publicity, and ChatGPT has certainly massively accelerated awareness of AI technologies. I think before, people were aware of AI tech, but perhaps didn’t really trust how many of those technologies truly were AI versus a clever algorithm, and perhaps didn’t fully appreciate exactly what capabilities could be opened up.
Yeah, absolutely. I think there was an underestimation by folks outside of the academic AI space. There’s also incredibly serendipitous timing with the fall of crypto and the rise of AI, which I think are completely unrelated to one another, but it certainly comes at a strange moment. We used to have these two forces, one of which was actually more capitalized than AI, probably for far less impact and far less demonstration of whether it was working or not. Now there’s been a massive shift in one direction. Both are valid technologies, but I think we’re now starting to see that the cat’s out of the bag with AI. The demos available publicly online demonstrate that the impact is going to be similar to the smartphone: the next big thing that tech companies were looking for to continue raising their stock prices. It’s called the neural network, and it just compresses enormous amounts of information.
We deal with vision, so it’s a subset of what AI does. The interesting thing with computer vision, analyzing images, video, and medical scans, is that you can make these predictions in a very mathematical way. An image is a pixel grid, so it’s a matrix. It’s an order of magnitude more complex than a piece of text in natural language, which is what is now wooing the world through ChatGPT.
But these models are also massive for the impact they can have in areas we don’t usually talk about, like drug discovery, clinical research, or Alzheimer’s research. The fact that we can now use models to understand cells under a microscope, to truly understand the relationships between them and the development of the diseases that are going to kill us, could be a Gilgamesh moment for humanity, in which we now have a key for solving problems that are too complex for our tiny brains. The prediction I’d make is that in two years’ time, actually probably in 18 months, we’re going to have capabilities in vision similar to what we’re seeing today with these models that are fascinating even the least technical among us. The implications are massive for jobs we thought were pretty safe until now.
From a technological and knowledge perspective, what do you think has changed so recently that is suddenly enabling a lot of these technologies to be developed?
The answer is actually rather boring. There are many components: there’s training data on one side, there are network architectures, and then there’s hardware. Training data is the stuff we put on the internet that has a description we can use as a label-image pair. It’s also what V7 produces as a company. There’s training data on the internet, and there’s training data inside companies; the majority of world GDP is actually inside companies, although the general internet is very valuable and Google already crawls it multiple times. Then there are network architectures: how the neural network is put together so that it’s most efficient at giving you the right answer. We have now started to plateau here. The paper “Attention Is All You Need” from 2017 established that the transformer architecture was one of the best ways of structuring a neural net, and still today we’re standing on its shoulders. There haven’t been massive changes there, although there have been some very clever developments; the paradigm is the one we’re working in.
This is simply because of the size of the models we’re now able to build and the size of the training data available on the internet. We’ve written all these questions and answers on Quora and StackOverflow; we’ve basically just given AI all of our intelligence on the internet, categorized so it can read it for hours and hours on end. Then there’s hardware, which continues to evolve, though it’s now slowing down quite significantly because of the five-nanometer limitation we’re hitting in chip fabrication; chip design and fabrication are far above my pay grade. But we got to the point where we can create these trillion-parameter models and run them in massive data centers. It all just happened to converge now.
If you’ve ever seen the Nick Bostrom graph of when we will realize that superintelligence is here: it’s not like you see it in the distance and can anticipate it; it sort of passes you like a bus. The timeframe in which AI goes from village idiot, to Einstein, to super-Einstein is a matter of weeks or days. This is sort of what’s happening with GPT and other language models. I keep mentioning OpenAI, but there are other fantastic competitors out there. It was a village idiot a year ago, and now it’s extraordinary.
Absolutely. Like most people, I’ve been massively impressed by the capabilities of some of these tools like ChatGPT, but also by the generative AIs out there for content: audio and video as well as text.
But I think most people are surprised that those models are able to be so good, that the quality is there. The quality of data, as you say, and the amount of data available to them is absolutely massive. We’ve been putting it online for many, many years without realizing that’s how it could be harnessed.
Yeah, absolutely. If you think about it, there’s a lot of work on multimodal research: video, audio, and text at the same time. If a model were to review this podcast with the visuals and the audio, then learn the transcript and maybe look at some of your other content out there, it could probably generate an episode on its own. We’re not too far from that. It makes you think: what is it that actually matters? What would get people to consume this content? Most of the time, the delta is just authenticity. That’s the one thing AI can’t generate particularly well, because it’s a human concept, something we consider to be outside of the normal distribution.
The implications are not necessarily scary, just a little bit daunting. But I think the thing worth knowing is that this is sort of a freight train that’s going to smash through whatever is in its way. If you, as the listener, are considering boycotting this in any way, it’s unfortunately going to be very hard, because your neighbor is going to just make it happen. Then it’s going to be a very, very unfair competition.
If we ever say, hey, this is not how we should be building AI, then another company or another country is still going to develop something that will eventually make some aspects of content creation, not obsolete, but trivial. I see it in a positive light: it’s going to be a lot easier to produce any form of written and creative media, and then we’re really going to start paying attention to what’s different, kooky, and out of the norm, what comes from a little bit of that human unpredictability and craziness.
Yeah, absolutely. I think the people who are genuinely good at what they do will be able to stand out, whilst at the same time utilizing many of the benefits these technologies can provide. I remember at uni learning about different models of human-computer interaction, where you hear about computer-assisted versus human-assisted technology.
I think tech like this can very much move more problems into that domain of computer-assisted work, where you’re almost playing the role of navigator or editor: steering, guiding, and refining the output. It’s a massive time-saver, and it saves you the boring bits.
Yeah, it’s a fascinating one. It’s almost like it’s taking us back to an older time in which we are the kings: we are lazy, but we call the shots. We have a very clever and very effortful advisor who will produce content for us. We tell them, make it in this style, or make the shoe blue rather than red, and then they’ll do the work for us. It’s sort of like being the Pope and telling Michelangelo how he should paint your artwork for you.
Bringing it back to V7, then, and what you’re doing there. I know you mentioned your previous company, which was also in the AI vision space. I wonder if you could briefly explain the differences between what you’re developing at V7 and the previous company.
Yeah, Aipoly was largely a B2C app. It used AI on smartphones to analyze the world, with a general-purpose model in mind that could analyze any object and speak it back to you. So if you were blind or visually impaired, you would have an idea of what’s in front of your phone’s camera; it would basically be a talking camera, if you will.
The difficulty there is the accessibility market, which is a very tough one to crack unless you sell into insurance. So we pivoted into providing this technology to businesses that needed some form of visual understanding. That’s what led us to eventually develop the backbone of V7 as an independent company, because we needed something that would allow companies to upload their data and train AI on their objects.
When you think about it, the general internet that large models learn from today, that these massive research companies train on, only contains the stuff you can find on Google Search. They don’t actually know how you build a Boeing 747, because that’s not publicly available; it’s available inside Boeing’s intranet. Boeing wants to make AIs that can inspect whether the assembly of a 747 is going well, flag any potential issues, and verify that replacement parts are the right ones. All this information inside this multibillion-dollar company should also be fed to AI. That’s what V7 helps companies do with their images, video, and any form of visual process they have internally. They’re able to load it onto our platform and point out what’s relevant to be identified, and then models are trained to identify those objects over time.
There are nuances on the technical side: they can train their own models, or they can train models directly through V7. We leave that up to the current standards of how data science is done. There’s definitely a trend we’re moving towards, similar to what we’re seeing with large language models, where you can create something that understands the world visually quite well, understands 3D geometry, or, in the medical case, the nuances of nuclear imagery.
Then from there, you tell it: hey, this is a tumor, this is an enlarged bladder, this is something I don’t want to see inside a patient. As for the differences: the previous company was more of a B2C piece of tech and more vertically integrated, whilst V7 we like to see in the same category as Stripe is for the e-commerce market, a piece of infrastructure to enable the development of any kind of AI company, for any of the 150-plus use cases that we see within the platform.
The cool thing about AI is that it’s effectively this massive sledgehammer for any process. We will start to see thousands of companies applying AI to the many processes where it can sit well. Some of these will be acquired and become the AI division of a larger company; some will become specialists serving a specific market or a specific region.
But there’s really a lot to be done in many areas. We enable them to effectively take data from companies and turn it into APIs that understand that data and that domain particularly well.
What kind of industries are you doing that with?
So healthcare is one of the largest ones. We serve healthcare and life sciences; we sometimes bundle them together, because what you see in a microscope can be used for both health and research. One of the reasons is that the completion of a task in healthcare has really high value per image, as opposed to, say, content moderation, which is notoriously one of the lower-value examples. Content moderation is when you post a picture to Twitter and Twitter sends it through a neural net to see whether there’s any pornography or gore in it. If there’s a glimpse of it, it might send it through a queue for humans to verify.
In healthcare, instead, you normally have a doctor analyze this stuff, and healthcare costs are largely tied to human costs and research costs. Think about the massive amounts that Britain and the US pay for healthcare: it’s because this stuff is hard. Sometimes there’s markup on top of it, and healthcare companies might charge crazy premiums, but by and large it’s because we need really smart people to do this work. AI holds a promise of really curbing this cost and making health services available to anyone.
This means adding images like CT scans, letting companies teach AIs what’s wrong or right with a patient, and then letting the model understand it over time.
The second area of focus for us is what we call industrial: anything from defect inspection in manufacturing to routine inspection of infrastructure. Basically, anything that needs to be observed in the world by companies that take care of the physical side of things, anything that breaks because of the entropy of the universe we’re in. It’s a surprisingly large sector; everything built around us requires maintenance. Here we see AI and robotics becoming an increasingly common medium to go and check that things are okay.
Then another area where we see a lot of growth is what we call high-tech: any SaaS company where, when you post a picture or video, it now normally sends it through a neural net to make sure there’s no inappropriate content, or simply to understand what it contains so it can be routed to the right user or given the right tags so the right person sees it.
I can definitely see how, with healthcare and the other areas as well, it’s certainly an expert operation, and therefore an expensive operation, for humans to be involved with.
But one thing I’m wondering is: how have you found the trust side of things and the reception of this technology, in an industry which can often be accused of being quite behind the times, with quite outdated systems in use?
Yeah, we specifically cater our product to high-reliability AI usage. If you’ve ever used an AI product, most of the time you’ll just get a result; you can set something like a temperature to make the result either edgier or more conservative. What V7 exposes is the confidence of every individual answer the model gives, and this confidence can be quite reliable, so that the user on the other side can tell that this is a suggestion from the AI, but not a very confident one.
Moreover, we’re able to issue corrections, so that users can improve the performance of that AI for future examples. This is done in a workflow system. In healthcare, you often don’t have AI working alone; you’ll have AI plus humans working in a workflow. It’s like a drag-and-drop builder where you can define the route of a file or a patient study.
First it might go through an AI that tells you what part of the body we’re looking at, then be routed to another AI that scans the contents of an MRI scan and surfaces all of its predictions, then be sent to a junior doctor or medical student who performs some checks on it, and then maybe go to a senior person only if it detects a certain anomaly or lesion that is worth looking at.
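To make the shape of that pipeline concrete, here is a minimal sketch of such a human-in-the-loop workflow. The stage names, the fields, and the 0.9 confidence threshold are illustrative assumptions for this episode, not V7's actual API.

```python
# Sketch of a multi-stage AI + human review workflow.
# Each stage enriches a patient-study record; routing to a human
# queue depends on the model's confidence in its prediction.

def body_part_model(study):
    # Stand-in for the first AI stage: identify the body part.
    return {**study, "body_part": "abdomen"}

def mri_model(study):
    # Stand-in for the second AI stage: scan contents and attach
    # a prediction with a confidence score.
    return {**study, "finding": "lesion", "confidence": 0.72}

def route(study, threshold=0.9):
    """Run a study through the AI stages, then pick a human queue."""
    study = body_part_model(study)
    study = mri_model(study)
    if study["confidence"] >= threshold:
        study["queue"] = "junior_review"   # routine check by a junior doctor
    else:
        study["queue"] = "senior_review"   # low confidence: escalate
    return study

result = route({"id": "study-001"})
print(result["queue"])
```

The key design point echoed from the conversation: the model's confidence is surfaced explicitly, so the workflow, not a human dispatcher, decides who looks at each case.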
This is all done in an automated fashion, because we, as in the people of the world, process hundreds of millions of patient scans every few months. It’s a very laborious, intense process, even just from an admin perspective, to look at these images.
That’s really interesting. The way you’ve described that workflow certainly makes a lot of sense. When people hear about the capabilities of AI, they often try to slot them into existing workflows.
Whilst at the high level the workflows, the things they’re doing, are broadly the same, I think the interesting thing will be to see what the impact is on those lower-level processes and operations within companies.
There’s a parallel to RPA here. RPA was really big five years ago, and it’s still really big, but I think it’s starting to see the sunset of its explosive growth, simply because AI is the new RPA. Something we wanted to make sure we would capture is: what will RPA look like when neural networks can do all this fancy stuff, including clicking on your screen, when they have the capability of performing actions as well?
I don’t think we’ve converged on the final design. Every piece of software has many initial designs, and then we all converge; this is why interfaces now all kind of look the same, and smartphones all look the same. There is what we tend to consider an ultimate design, until the next best thing comes along. I think we’re maybe 50% of the way there with AI workflows.
So where do you think AI isn’t quite there yet? Where does it need more development, more time to mature?
A simple rule of thumb is that the more complex and unstructured your data is, the more time AI needs. Language is a vector, a string of numbers, and we’re doing really well at that. An image is a matrix, so you now have another dimension to it. Then there’s volumetric data or video; video, if you think about it, is technically a four-dimensional image, because it’s a stack of images placed over time. The quality of results deteriorates as you go up this ladder.
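The dimensionality ladder described above can be sketched with plain Python shape tuples. The sizes here are illustrative assumptions, not figures from the episode.

```python
# Each data type from the conversation, as an array shape:
# more dimensions = more unstructured = harder for AI today.

text_shape  = (512,)              # language: 1-D sequence of token IDs
image_shape = (480, 640, 3)       # image: pixel grid x 3 color channels
video_shape = (30, 480, 640, 3)   # video: a stack of images over time

for name, shape in [("text", text_shape),
                    ("image", image_shape),
                    ("video", video_shape)]:
    size = 1
    for dim in shape:
        size *= dim
    print(f"{name}: {len(shape)} dimensions, {size:,} numbers")
```

Even at modest resolutions, one second of video carries orders of magnitude more raw numbers than a sentence, which is one way to see why video models lag language models.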
But I think video, which is where AI is now doing worst, also has the biggest potential. Reading the entire internet gets you ChatGPT, which is very impressive; being able to see video of the entire world gives you a more nuanced understanding of it, one that goes beyond the word-salad understanding these language models exhibit. They only fool you into thinking they have logical reasoning, but through vision and video, models could actually exhibit some genuine visual reasoning.
From a practical perspective, the world of industry, as opposed to the world of demoing AI in your browser, still needs 99% accuracy before AI can enter a process. You wouldn’t go to a doctor if their results were 90% accurate; a 10% chance of not surviving something would still be an unacceptable threshold for you to gamble on.
If you had a manufacturing plant that detected defects in your engines 90% of the time, that would also be unacceptable; that would be 10% of vehicles that could be recalled.
In mechanical engineering, you normally need five nines, or sometimes six nines, of reliability. There’s no such thing in AI. So a big part of the post-academic work that companies like ours do, applying AI to industrial processes, is making sure we can translate this genius of AI into reliability: being able to predict when it doesn’t know something, spotting the difficult cases, and flagging them, as in, “I don’t know this, so I’m going to send it to a human to verify.”
Yeah, I think that accuracy question is really interesting. Obviously we’re seeing a lot of press about Tesla and Autopilot, comparing statistics against human drivers and the error rate. You can show it’s statistically much, much safer than the general population, yet people consuming that technology still hesitate.
Maybe it’s just the media trying to put a spin on it, but it seems that because it’s a computer system, which previously would have been extremely predictable, people hold it to an extra level of expectation about accuracy and reliability. Is that something you’ve seen as well?
We’re far more forgiving of the mistakes humans make than of the mistakes technology makes. It is probably objectively true that humans are better at driving cars than autonomous driving right now; not safer, but better, because they’re a little bit better at improvising. But even when cars become particularly good, we will be really annoyed at their mistakes, in a way we’re not annoyed at a cab driver, because we empathize a lot more with the ape in the driving seat than with a computer in the driving seat.
So let’s get ready for that: we will be some very moody users of AI in the future. This is something we see ourselves, as a software company that produces software for developing AI. The software also involves a lot of humans who make mistakes. These humans are labelers: people who tag images, place boxes around objects that need to be identified, or draw polygons to encircle and segment them.
When a human makes a mistake, we just ask: hey, there are a few mistakes here and there that need to be corrected. That’s forgiven a lot more readily than if the software croaks, or if it deletes an annotation by mistake; then there’s this immediate betrayal of trust, because we expect software to be infallible and entirely deterministic.
So I think we will probably carry a bit of this bias into AI, which as a piece of software is a lot more probabilistic. We already see ChatGPT, for example, making a lot of confident mistakes: without telling you it’s unsure about something, it will confidently say that something is so.
Yeah, I guess it’s easy to go to a human and say, oh, by the way, you’ve got this a little bit wrong here, can you try and learn from that and not make the same mistake again? But it’s a bit harder to do that with technology.
A human can break your trust if you tell them something and then they make that mistake again. With humans, we also empathize with the way they work; we are, after all, the same technology. So if you tell a human, hey, please don’t make this mistake, we expect them to learn from that. If it’s a piece of software and we point out something that’s broken, it’s not going to fix itself because we pointed it out; you need to go and talk to the developers to make it happen.
Maybe there’s a middleware opportunity there for large models that act as interim bug-fixers for AI, by monitoring a specific bug and then correcting it in some haphazard way. But for now, there’s a gap between being able to immediately fix technology by telling it that something is wrong, and actually having to fix it.
In terms of V7, from the technology and also from a company perspective, obviously you’ve grown pretty quickly since inception. For you as CEO, where do you see the company heading next? Is it improving the tech? Is it expanding the use cases? What do you see happening over the next few months?
Yeah, it’s been crazy. I think we were 30 or so people a year ago; adding 70 people in that timeframe has been a lot of effort, but also a lot of fun. I’m now at the stage where I can take a half day off and things still happen on their own. It kind of feels like magic. If you’re someone who’s really used to controlling a situation, because as CEO of a small startup you usually have a pulse on everything, it starts to become a bit more daunting, because teams now have a life of their own and leadership of their own. You’re proud of it, but you’re also wondering, how does this happen? It is certainly pride that develops. This gives us the ability to scale in different ways: we can now just get ambitious people, give them a broad goal, and then let them, for lack of a better term, figure out on their own how to get there.
It doesn’t require a decision from the founders every time; it just requires a broad vision of where the company should be going. It also gives other members of the team the ability to really set their mark and create their own almost-startup within V7.
We have these four values that we think guide people towards the right direction. One of them is moving the needle: always do something that compounds, never do something that doesn’t. This is something that, for example, Account Executives in some companies might be prone to, where they work for their own commission and targets but don’t add anything to the overall sales organization. We instead try to encourage them to do so. Whatever you’re doing, make sure it always adds something to the pile. This has magical effects that humans aren’t very good at rationalizing: whenever you’re doing something, if it adds 0.1 to the overall experience, then by the time you’ve done it 10 or 20 times, it multiplies in incredible ways.
Another one is to have a level of reliability, so tell people when things are about to fail. Reliability is really important for us, not just when things are successful, but when things are potentially going to crack.
The other one is writing the new playbook. That means learning what’s worked pretty well in other companies up until now, then writing a new version of that for this new decade, or this new five years. I think this is really important. There are concepts still in the startup Bibles, like “move fast and break things,” that are no longer relevant today.
We try to seek people that are excited about learning from, you know, the Netflix culture book and similar documents, and then being like, okay, this has been great, let me throw it out and write a new version for the values people have in the post-COVID world, and for the new technologies we have that enable a whole different way of collaborating across the world. We think that’s really important.
How do you ensure those values are actually carried out in execution and day-to-day work? Is it the codifying of those? Is it the creation of playbooks derived from them?
Yeah, sometimes it’s more religion than management. Sometimes it’s just writing them down. That’s something we’ve discussed recently: we now have so much onboarding material. We use Notion for our internal wiki, as most startups do, and it’s becoming a beast; I don’t think anyone reads everything.
We started with having a reading list: people would read two books before starting at V7. Now, I think they might read them in their first quarter at V7. So we’re experimenting with things like printing our company handbook as a physical copy that we mail to people. When people have a physical copy of something, they respect it a little bit more than a Notion page, and that nudges them towards what they should really learn first: the values. Then they just have to be repeated.
We try to make sure that they are embedded in one-on-ones, and that people read them before they score folks in our HR form. We also have scoring systems for six-month reviews that encompass some of these values. There are skills that point towards these values. For example, systems thinking is something we score people on: the understanding people have of the entire company and the impact of their own individual work on other departments.
We find that people who think about how their work impacts marketing, or how it impacts the vision of the company, tend to do better, and if we encourage them in that direction, we also help them develop into better managers and individual contributors. Ultimately, we want two things for people. One is that they leave V7 as a really elite person in their remit. I’d like us to be one of those companies where eventually people look at your LinkedIn profile and go, oh, you worked at V7, you must be good.
The other one is that they should look back on these years at V7 with nostalgia. This is actually really hard to achieve, because it’s still a job at the end of the day. It’s still work. But I would like them to look back and be like, wow, these were some of the most formative days of my life, and I actually had fun.
A really good vision and perspective to have. One question that springs to mind, especially with the type of growth in headcount that you’ve described and those values, is how have you found hiring generally, especially in a cutting-edge area of technology? Has that been quite a challenge? Or is that something you’ve been able to navigate pretty well?
Yeah, it’s difficult. I’m terrible at coming up with metaphors on the spot, but you know, if at some point you add bread to your family’s table, then your kids are going to be used to having bread on the table, and your wife or your husband is going to be used to having it too. So we add these tiny little perks. Because we are more people, we’re not doing lunches every Thursday anymore; instead it’s one really big lunch that we do all together. But it’s that one opportunity to get everyone into the same room to talk and chat and sometimes have a beer. These things actually take quite a lot of admin and effort. We’ve moved from being a company where everyone had a job and the culture stuff was on the side, something you did for yourself so you’d have a better time at work, to a company where there are people dedicated to creating these cultural elements.
We found that to be quite important. We hired someone really excellent in our people operations to handle our retreats and our team-building activities, both remote and in person. Now I’m finding myself needing to expand this team because it pays off massively. It’s a simple equation: an office for a team of like 70 people costs, I’d say, around 500 grand a year. That’s more or less what you’re paying.
Then you have employee productivity, which can shift: on a good day you might be at 100%, and on a bad day you might be at 70%, sometimes 50% less productive. If you can shift that productivity level up by 5%, then you’re paying off the rent. If you can shift it by another 5% because of the cultural relevance you create, because people are excited about the next retreat, the next fun QBR or in-person activity, then you get another 5% payback. So there’s a simple profitability ratio in keeping employees motivated that just pays off. It’s better to invest in that than in, you know, the next cool SaaS tool.
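As a rough sketch of that back-of-envelope maths: the ~70 headcount and ~£500k office figure come from the conversation, but the average fully loaded cost per employee is my own assumed number, purely for illustration.

```python
# Illustrative only: the £80k average cost per employee is an assumption,
# not a figure from V7. Headcount (~70) and office cost (~£500k/year)
# are the numbers mentioned in the conversation.
headcount = 70
avg_cost_per_employee = 80_000   # assumed fully loaded annual cost, GBP
office_rent = 500_000            # approximate annual office cost, GBP

payroll = headcount * avg_cost_per_employee   # total annual payroll
gain_from_5_percent = 0.05 * payroll          # value of a 5% productivity lift

# A 5% productivity lift across the team is worth a sum on the same
# order of magnitude as the rent, which is the point of the argument.
print(f"payroll: £{payroll:,}")
print(f"5% productivity gain: £{gain_from_5_percent:,.0f}")
print(f"office rent: £{office_rent:,}")
```

With these assumed numbers, a 5% lift is worth £280,000 a year, which is comparable in scale to the office cost, so the speaker’s claim holds under fairly modest salary assumptions.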
Absolutely. I can tell you’re very mathematically oriented in the way that you describe it, and I think it’s a very good way of looking at that problem. But how do you measure that? How do you go about determining whether people are productive or not?
I think it all comes from gut feelings, and then I try to justify it mathematically. So I can’t give myself too much credit for being very analytical about it. Usually I think something is the case, and then I try to find a numerical way to justify it.
I think that’s a good balance. I think that’s a healthy way to do it, rather than trying to force everything into, you know, perhaps draconian systems that attempt to measure stuff that’s really just a proxy and wrong most of the time.
I am a relatively skeptical person when it comes to measuring. So I’ll say this, and I do not speak for V7 on this one: I’m not too much of a believer in NPS myself. My customer success team knows it, and obviously disagrees with me. I kind of feel, and this is my own personal view, that surveys bias you towards the intention of the survey. Most of the time people survey you with an intent; NPS surveys you not to see that they’re doing well, but to see whether they’re doing poorly.
Every time I see an NPS tracker on a website, it makes me think, oh, these guys are losing the plot. Why else would you be gauging whether you’re doing poorly? But that is my own personal bias. We do use surveys, and we do use NPS. It’s a way of tracking both employee engagement, which is something we’re setting up now, and customer engagement on the other side.
For any other founders listening to this episode, going through that journey of scaling, what advice would you give them from your experience?
We’re now talking about this in early 2023, a particularly unique economic time. The advice I would give is to really refine product market fit as much as you can. We’re in an era, maybe a year and a half, of people tightening their belts and only going for either a really, really polished version of whatever you’re selling, or things that are painkillers rather than vitamins.
If you’re an early-stage founder and you’re working on a painkiller product, something that people absolutely need to unblock something, then you’re probably fine. If you’re working on a vitamin, there’s just an existential risk in overhiring today. We went from 2021, when there was no talent available out there unless you paid a lot of money for it, to a period in which there’s incredible talent out there for cheaper.
Don’t be fooled by that. We’re basically paying today for some of the mistakes we made in 2020 and 2021. The important thing is for you to come out of it alive, healthy, and thriving at the end of this year, with a very solid business model. We’re all in this together, so don’t overhire for the time being.
I would definitely echo what you say about product market fit. I think it’s always been important. Founders often make the mistake of being too product-led and forgetting about the market side. They were able to get away with that for a little bit longer in the past than they can now, but I still think the same principles hold true. Companies will still succeed or fail based on product market fit; it’s just that what happens in between those two points is perhaps a little bit different these days.
Thank you for talking to us today. It’s been really interesting from a few different perspectives, on the business and culture side, but also on the AI side, hearing about some of the developments in your technology and the ways people are using it. It’s an area that’s only gonna get more interesting over the next few months. So thank you for giving a bit of insight into how that works.
Thank you. I was excited to talk about our journey. Thank you for having me.
Thank you for joining me on this episode of Inside the ScaleUp. Remember, for the show notes and in-depth resources from today’s guest, you can find these on the website insidethescaleup.com. You can also leave feedback on today’s episode, as well as suggest guests and companies you’d like to hear from. Thank you for listening.