Generative AI and the Future of Work for Thinkydoers
Are you wrestling with ethical questions about AI while also feeling curious about its potential?
In this thought-provoking episode, Sara welcomes Dvorah Graeser, an "old internet" technologist who brings a unique perspective on AI democratization. From programming for the Human Genome Project to founding RocketSmart, Dvorah shares insights on how we can approach AI with both skepticism and agency.
Discover why those who shy away from AI might be surrendering power to tech giants, and learn practical considerations for responsibly engaging with generative AI tools in your work.
Episode Highlights:
Dvorah's background programming before the GUI and her journey from the Human Genome Project to AI development
The ethical considerations of generative AI and how to navigate them as business owners and creators
How to evaluate AI models based on their transparency, data policies, and public commitments
The democratization of technology and why bottom-up AI adoption benefits everyone
Why small businesses might leapfrog large corporations with open-source AI models like DeepSeek
How generative AI affects workplace satisfaction differently across roles and experience levels
Practical advice for protecting your intellectual property in an AI-driven world
Key Concepts Explored:
AI Ethics & Transparency
The challenge of retrofitting ethics onto already-trained models
Evaluating models by transparency, data policies, and public commitments
Protecting clients, employees, and downstream users
Limitations and open questions
Generative AI for Work
The role of AI in automating tasks vs. augmenting human work
AI as an enabler vs. a driver of outcomes
Practical applications of AI in different industries
Small Business vs. Big Tech
Can AI level the playing field for solopreneurs and startups?
How large corporations control AI access and development
Opportunities for smaller businesses to leverage AI effectively
AI for Strategy & Execution
Integrating AI into decision-making without losing human creativity
Using AI for data analytics and predictive modeling
Limitations and considerations for AI in strategic planning
Notable Quotes:
"If a company is using AI in a way you don’t like, let them know—preferably on social media, so others can join the conversation." – Dvorah Graeser (00:31:00)
"Generative AI is more about curation than creation. It gives you 100 ideas, but you still need the expertise to pick the right one." – Dvorah Graeser (00:22:00)
"Most small businesses don’t need the latest AI model. They need AI that works with their data and processes." – Dvorah Graeser (00:26:00)
"AI isn’t just a tool for big corporations. Small businesses that use AI strategically can be more agile and outmaneuver larger companies." – Dvorah Graeser (00:28:00)
Chapters:
00:00:00 Introduction: Welcome to Thinkydoers and introduction to Dvorah Graeser
00:03:00 Dvorah's background: From programming before GUI to AI development
00:05:00 Ethics of generative AI: The challenge of retrofitting ethics
00:08:00 Choosing trustworthy AI models: Evaluating data policies and transparency
00:10:00 Democratizing technology: The historical context and importance
00:14:00 Advice for Thinkydoer leaders: Focus on process integration
00:17:00 IP concerns for creators and business owners: Strategies and policies
00:20:00 AI and the future of work: Research on workplace satisfaction
00:25:00 The potential of open-source models like DeepSeek for small businesses
00:29:00 Individual action: How to participate in shaping ethical AI
Guest Information:
Dvorah Graeser is the founder and CEO of RocketSmart, specializing in AI, intellectual property, and technology commercialization. With a background in pharmacology, AI programming, and U.S. patent law, she has worked on projects like the Human Genome Project and now advocates for AI ethics and democratization. Based in the Netherlands and Chicago, Dvorah helps businesses and universities navigate the evolving AI landscape.
Dvorah’s Resources Mentioned:
Dvorah's LinkedIn: linkedin.com/in/dgraeser
Snag Dvorah’s Three-Page AI Transformation Blueprint: https://findrc.co/genaibydg
Websites: kisspatent.com
rocketsmart.io
Sara’s Links and Resources:
No-BS Strategic Achievement Intensive: Join our mailing list for waitlist information at ck.redcurrantco.com
Join Sara’s Email List: Get updates on OKRs and strategy at ck.redcurrantco.com
Book Website: Stay updated on You Are a Strategist at youareastrategist.com
Connect with Sara via LinkedIn, Bluesky, Threads, or Mastodon
Podcast Home Page: Thinkydoers Podcast
Book launch squad: findrc.co/launchsquad
Find full show notes and the episode transcript via https://findrc.co/thinkydoers!
Full Episode Transcript:
Sara: Welcome to the Thinkydoers podcast. Thinkydoers are those of us drawn to deep work, where thinking is working. But we don't stop there. We're compelled to move the work from insight to idea, through the messy middle, to find courage and confidence to put our thoughts into action. I'm your host, Sara Lobkovich. I'm a strategy coach, a huge goal-setting and attainment nerd, and a board-certified health and wellness coach, working at the overlap of work-life well-being. I'm also a Thinkydoer. I'm here to help others find more satisfaction, less frustration, less friction, and more flow in our work. My mission is to help changemakers like you transform our workplaces and world. So let's get started.
Hello, and welcome to [00:01:00] this week's episode of Thinkydoers. I am excited to introduce you to Dvorah Graeser, founder and CEO of RocketSmart. Dvorah brings a unique perspective on AI, having started programming before the graphical user interface and the widespread adoption of the internet. Dvorah is an old internet technologist like me. Before we dive in though, we are getting closer to the release of my upcoming book, and I need your help to get the word out. You can join the launch squad at findrc.co/launchsquad. I need advance readers, social media amplifiers, and really just folks to help me stay as excited as possible to get this book into the world when I get nervous or scared. Alright, so, today in part one of this two-part conversation, we'll explore the ethics of the generative AI tools that are multiplying like bunnies, and we'll get [00:02:00] Dvorah's take on democratizing technology and how to approach AI with skepticism, curiosity, and openness, like I do.
So if you've been wrestling with questions about responsible AI use or feeling anxious about AI's role in our work and world, this episode is for you. We don't have all the answers. But you're not alone in asking questions.
I would like to welcome Dvorah Graeser to the show today. Dvorah, do you remember how we got connected?
Dvorah: I do, because of course, I'm a huge admirer of you and everything that you do with OKRs and the Thinkydoers. I've been a fan for a long time. It had to do with OKRs: I figured we had to do OKRs, but I knew nothing about them, so I went and looked it up, and you were the only person who gave a really human presentation on OKRs.
Sara: There are other wonderful humans in OKRs. I think I'm just kind of doing [00:03:00] a lot of online content right now, so you're seeing a lot of me. But I am increasingly inviting those other awesome people on, to do lives and things like that with me too. So we'll spread the love around.
So, for our guests, I'd love for you to introduce yourself, tell us a little bit about who you are, what you do, and where you're based.
Dvorah: Happy to do that. So I'm Dvorah Graeser. I'm the founder and CEO of RocketSmart, rocketing your IP out of the laboratory and into a great licensing deal. So that's to help universities with commercialization. I am mostly based in the Netherlands, although sometimes I'm in Chicago, Illinois, freezing in the winter—yay. I actually started programming when I was 16. I learned how to program before the GUI, the graphical user interface. I learned how to program before the internet really got started, when everything was still on dial-up. And what I've seen since then is that technology is great when it is democratized, when it helps benefit all of us. But there always seems to be a tendency for technology to get into the [00:04:00] hands of the powerful and stay there. One example: I got my PhD in pharmacology. I was programming; I programmed for the Human Genome Project. Everything was great, but a lot of people were afraid because Craig Venter's company was getting patents on all the genes, and the researchers were like, "We won't be able to do anything with genes. We're gonna be stopped." Now, in the end, it wasn't like that, but that was a really big fear. And I have to admire Craig Venter; he did a great job with the human genome. He was a very big part of it. But there was this tension, and this tension has continued throughout my life as a U.S. patent agent, helping people protect their ideas. But now, also through AI programming, I try to help people find others and connect with them for commercializing early-stage innovation. That, overall, is what I do, and I want AI to be a force for democratization. But, of course, like every other technology, it can start getting into the hands of the powerful and making the rest of us feel like we don't have a way to come into it and make use of it. And I think that's wrong, and it's the thing I [00:05:00] want to change.
Sara: It's always so much fun to me to talk to other early internet people, and early programming people. Early as in our generation of early programming people, because I swear that mid-'90s internet changed me forever in how I interact with the world, and communicate, and operate with people, and behave online. Let's just start with the elephant in the room of ethics. And what's your point of view on ethics and how people can ethically participate in generative AI, if we can?
Dvorah: Ethics has always been a concern of mine with AI. So, if we rewind a bit, I started training our company's AI models back in 2015. They were not generative AI. They weren't even really neural nets; they were really simple kinds of AI. We had to bring the data. We had to clean the data. What it meant was that the output was predictable and we controlled the input. So if it was being used ethically, that was totally on us, right? The only way we could be unethical is if we screwed up. It [00:06:00] wasn't because the AI was doing something. Now, we fast forward to generative AI. And, to be perfectly honest, we're not fully in control of what happens when we're using it. And a large part of that is because, of course, these models have been trained using a ton of data. Not our data. Not data that has necessarily been ethically sourced, given the number of people who are suing, for example, OpenAI, or some of these other companies being sued. Some of them have come to agreements. There's a lot of thought about how we can make this fair. But the fact is that the generative AI models were trained first, and now we're trying to retrofit fairness and ethics onto an existing technology. So, here I would look at it in two ways. One is, how do we protect those around us? Our clients, our employees, those who may be further downstream. If we're making a product that will be used by our clients for other clients, what will happen to those other clients? That, I think, is [00:07:00] a very strong concern because that's where we have more control: how we design it, how we allow our clients to use it, how we instruct them to use it. If they don't understand, that's on us. Now, when it comes to how generative AI was originally trained, that is a problem. Because it's already done. You can't unring that bell. So, here, I think we need to keep an eye on what is going on. 
I do believe that in the future, there'll be more of a split, with some AI models paying more to creators, being transparent about where they're getting their data from and how they're using it, trying to make the best of an ethically dubious situation, and trying to fix it. But right now, it's kind of a fog, and so I don't have a clear-cut answer for what to do with the existing generative AI models, because it's a problem.
Sara: Yeah, it makes me wonder, for someone concerned with ethics and what's right, and we're both heavy users of these [00:08:00] models. How do you choose what models you work with? Or what would you tell people who are concerned to look for when they're reading the fine print on a model?
Dvorah:
Well, so, first of all, it depends if it's a general-purpose model or a special-purpose model. Now, if it's a special-purpose product, let's say it's intended to help you write marketing content, the question then is, are they using an existing model? Are they using an OpenAI model, or did they make their own? If they made their own, then you have the opportunity to query how they got the data, what they did. If they're a company that just has access to a lot of data, then that's a completely different situation. But let's talk about the more general generative AI models. So we have ChatGPT from OpenAI, we have Claude from Anthropic, Google Gemini, and now we have DeepSeek, which is available as an open-source model hosted in the US, for example, through Hugging Face and other hosts. Or you can get it as an app, and I believe the [00:09:00] servers are in China, but to be honest, I don't even know. So there, I would want to know: where are the servers? What data was it trained on? And also, will it be training on what I'm inputting into it? Now, Anthropic makes a very big point of stating that they do not train on data that their users put into their model. They have a different way of doing things, and they are trying for AI safety and AI ethics. So, if I had to pick any model or any generative AI, I would probably stick with Claude from Anthropic for that reason. Also because they do try to be transparent about what it is they're doing and how they're doing it. ChatGPT will train on your data. They do say this, more or less, up front. I mean, it's there, you can look at it. Some of the other ones, like the DeepSeek app that they are hosting, I don't have a clue what they're up to. It could be anything. So that's why it's important to look at not only the fine print of the legal agreement but also what they are stating in [00:10:00] their public-facing voice and their brand voice. 
What are they telling you about how they handle data and what their beliefs are? Because for me, that does go a long way.
Sara: You talk about democratizing technology and the role that AI plays in democratizing technology. What's your point of view, and where does that come from?
Dvorah: Well, my point of view comes a little bit from the early days of software, when we widely believed that everyone could have access to it, that everyone should have access to it. We were against the model of the earlier IBM, which was one big computer in a room for the whole company, with gatekeepers who could control how we used it. We said, no, individuals should have the right to use it and to make it do the things that they need from it. And I've always believed that to be true with technology. Technology is not something that should have gatekeepers, to the extent that we can make it open and available. And a lot of times we don't, more for reasons of money or power than safety. Well, you don't want [00:11:00] someone who's not trained flying a plane, right? That's a different kind of thing. A computer is not like that. So I was very happy when everyone got a computer, and then there was the internet. And I said, okay, great, we're all going to be able to talk with each other and share things. And people who don't have access to all the research will still be able to get access. And then we just ended up with a ton of gatekeepers, is what it came down to. Every possible thing, from not being able to access scientific articles. There were a few attempts by people to gain access to those, and there were copyright fights, and it kind of settled down to an uneasy truce. In the case of things like AI, I hear a lot of big corporations saying, yes, we must go full steam ahead. But really, where I see the benefit is for solopreneurs, individuals, and small businesses, just because the efficiency is so much lower there. And I also see it as something which could potentially help people all over the world. It could help small businesses in Africa use their phone to access something. So maybe their phone isn't that [00:12:00] powerful. 
Maybe it's only a feature phone, not even a smartphone. But if they could message and chat with an AI, they could get that information. So the power doesn't have to be in an expensive phone they can't afford, or a computer they certainly can't afford. It can be handled upstream and they can get the downstream benefits. The issue is, I don't see that happening right now. I do see folks trying to make it more widespread by offering relatively less expensive subscriptions, but there's no clear path forward. Even with the big companies I've spoken to, a lot of times they're cramming the AI in from the top down. What I see is, we need to actually have a groundswell from the bottom up. And that will help in a few respects. First of all, it'll help us individually understand what we feel about AI ethics. But this means we have to educate ourselves about AI, and we have to want to take the power back into our own hands. I believe that those of us who shy away from AI are actually letting the powers that be run roughshod over the rest of the world. We all have to get into the fight and decide what's important. [00:13:00] So that's one aspect. But another aspect, and this may sound surprising: if we as individuals learn more, and we get into the fight, and we want to democratize it, we take it into our own hands. Yes, it will help us as individuals and as a society, but it'll also help smaller businesses and even larger corporations. One of the big problems right now is, even in a big company, they'll have an AI specialist who's way the heck over there. Oh, I just vanished. Boy, I vanished into that wall there. There we go. But that is indicative of what happens. They're behind a wall, and then all the people who need the AI are on the other side of the wall. 
But because the people who need it aren't learning about it, wanting to empower themselves, wanting to say this is what we feel should be done, the corporation ends up with folks on two sides of a wall, never the twain shall meet. And then it doesn't work. So, you see, bottom-up democratization isn't just good for individuals and society, it's even good for big companies.
Sara:
You mentioned the power [00:14:00] of these models and tools for solos and small businesses. A lot of my people are employees trying to build happier careers in those environments you just talked about. But a lot of our listeners are also solopreneurs or entrepreneurs; they're leading companies. So, what would you tell Thinkydoer leaders who are so busy running their businesses, or just trying to keep up with what they have to? What would your recommendation be if there were one place for them to become more aware? It isn't even about usage of AI, but what would you tell them to be aware of if you only got a little bit of time with them?
Dvorah: I would actually ask them to look at their process. And the reason why is that generative AI works best when you have a process, and when you [00:15:00] integrate the generative AI into your process in a way that feels comfortable to you. Now, if we think about running a small business: I run a small business, you run a small business, lots of people do it. Even if you're a solopreneur, you probably still get help with taxes and accounting and other things I'd rather not do, so I try to get help with those. So even there, you are still working with a team. There is still someone else who is working with you. So then there's a process. And one of the things I have found is that where things break down is in communication between humans. That is where processes run aground, that's where time is wasted. That is where John didn't talk to Jane, or Jane didn't talk to George, or you end up with something that comes back like a broken telephone. And at the end there are, like, five people down the line, and the last person down the line, let's say, like, Bill—Bill's like, what? What? This is not what I was expecting. So, there, it's a matter of process. When we're doing things manually, if we're in the same office, we can just go and knock on Bill's door and say, "Hey Bill, I'm sorry, that was a little [00:16:00] confusing. Can I talk with you about it?" When we're working remotely, when we have a widely distributed team, when our teams are super part-time, or when we're trying to do more with less and we're all under a lot of stress, that's when process becomes super important. And process is very important for AI, also for larger companies. I spend a lot of time talking to big companies about their process as well. And somehow, even with larger companies, there's this idea of, 'Well, there are these humans, and there is this software, and we're just gonna smoosh it all together with AI.' Doesn't work.
Sara: I giggle because that's what I see with OKR software implementations as well. It's the same pattern. So, what would you tell people, you know, creators, artists, writers, business owners, who are generating IP? I know this isn't a conversation about IP law, but you're a fellow business operator who generates IP. So, what would you tell folks they should be aware of as we all continue [00:17:00] to generate and publish IP in this new world order? Do you have any thoughts or recommendations for folks?
Dvorah: First of all, any content that is put out there is likely to end up in some kind of generative AI engine if we're publishing on social media. So, you know, I like LinkedIn; other people publish on Facebook (Meta) or Twitter/X,
if you're publishing on a social media channel, I would assume that material is going to be sucked up into some giant generative AI training session. Even if you're publishing on your own blog or on your own website, there is a way to ask crawlers to stay out via the robots.txt file. And you can play with that, but it can also affect how well the search engines can find you, in my experience. Now, there might be people who have different ways of doing this, but to be honest, I've talked to a bunch of people and they're just like, "Assume that if it's out on the internet, if you want it to be found, then you have to assume someone's going to be using it for training." So then the third part comes in. What about the super sensitive stuff that I would never publish on the internet or in a social media channel? What happens with that? [00:18:00] That's where you've got to be careful. You want to read the data and privacy policy of every single AI tool you're using. I don't care if it's gen AI or not gen AI. You need to read those policies carefully. If you're not sure, get on the phone with them. Earlier in the days of generative AI, like a couple of years ago, when folks were starting to use it, I had one piece of software, which will go unnamed, where it wasn't clear what they were doing, and I just got them on the phone and I said, "Look, this isn't clear. I also train AI models. Here's why it's not clear." "Oh, you know, you're right. We meant to make that more clear." They changed it. So get them on the phone, but get it in writing: once you get them on the phone, do get it in writing. And then you have to make the best balanced choice. This is especially true if you're worried about your own data, but also about ethics. There's always going to be a tradeoff and a balance here. And unfortunately, I don't have a really nice, clear-cut, tied-in-a-bow answer. It is more thinking it through with yourself. 
I do recommend that [00:19:00] small businesses and large businesses have an AI policy. What are employees allowed to do or not allowed to do? What is sensitive data? So I did talk with one firm, and they said, "Well, yes, we do have a policy that you're not allowed to use these generative AI software with sensitive data." And I said, "Great. What is sensitive data?" "Oh, we know when we see it." And I said, "No, no, because everyone will have their own interpretation." So you need to have a policy, and it needs to be something that you and your employees are comfortable with. It needs to be clearly articulated, and then you revisit it periodically. But there isn't going to be, unfortunately, a super great solution to a lot of these issues at this time.
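The robots.txt approach Dvorah mentions can be made concrete. The crawler names below are real published user agents (GPTBot for OpenAI, ClaudeBot for Anthropic, CCBot for Common Crawl), but any such list goes stale and compliance is voluntary, so treat this as a sketch and check each company's current documentation:

```text
# robots.txt — ask common AI training crawlers to skip the site
# while leaving general search indexing alone.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

# Everyone else (including search engines) may crawl.
User-agent: *
Allow: /
```

As Dvorah notes, blocking too broadly can hurt search discoverability, which is exactly the tradeoff this file forces you to decide on.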
Sara: Yeah, it's funny. That "What do we trust which model with?" question is one of the ongoing side jobs I think all of us business owners have right now. I do think your point that if it's on the internet, it's likely to be vacuumed up [00:20:00] is a really good one. I think about people affirmatively training models on my IP; I didn't think so much about the vacuuming up that's still happening. I worked with one of the large multinational global technology companies — it wasn't this era of AI. It was more like when we were forecasting this era of AI. One of the talking points that was always made was: AI isn't going to eliminate human jobs. AI is going to change human jobs and improve worker experiences. And I've heard that for years. I know that's the talk track. I struggle to see it, even though I do see ways that the generative AI tools we have now can improve people's workplace experiences and even workplace satisfaction. As someone who's seen it from the very [00:21:00] beginning and who comes at it from this bottom-up viewpoint, what's your perspective on the role of AI when it comes to human labor?
Dvorah: Well, it's complicated. Unfortunately, I don't think there's going to be a clear yes-or-no answer to "Will it improve workplace experience? Will it make it worse? Will it replace jobs, add jobs, or do something else to jobs?" There is some research that I've seen which has been informative for me. In one case, material scientists were studied. This was published in, I want to say, Bloomberg, but I wrote about it also on my LinkedIn; people can check it out or hit me up if you want me to send you the link. A researcher did a study on material scientists at a large material science company. They rolled out generative AI tools to their scientists across a two- or three-year period, so not everyone got it at once. It was an experiment. And what they found was that the most experienced scientists got the most out of it, because it required a lot of knowledge to [00:22:00] curate, right? Generative AI is more about curation than about building. You get 10, 20, a hundred things, and you're like, "Whoa, what am I going to do with all these?" So yes, they could curate it, but they also expressed less job satisfaction. Even though they were more productive, even though they could see more things being chosen and more products being made, they still felt less job satisfaction, because they liked solving the puzzle themselves. They liked going through and doing the work. They liked sitting with the different options and playing around with them. And generative AI did take away some of that. On the other hand, it has improved job satisfaction in call centers for very junior call center people, because they have access to immediate coaching. They're getting on the line with some person who's screaming at them, which isn't their fault, but they're still getting screamed at. And generative AI, in that case, can help them get out of that situation, defuse it. 
Either solve the problem or at least make the person calmer and able to have the conversation. Make them feel less like they're under attack; they feel that they have [00:23:00] tools. So that's where it's really, really tricky: two completely different situations. In one case, generative AI was most beneficial to the most junior people; in the other case, most beneficial to the most senior people. One group liked it. One group hated it. And I think, in terms of the kinds of jobs we'll end up doing, it will end up taking away a lot of busy work. Things that, quite frankly, could have been automated, but maybe people were a little bit nervous because there are some edge cases, so they wanted a human to take a look. So that will go, but it'll end up changing a lot of our jobs. On the third hand, all right, so I've got all these hands going here, but on the third hand, what it might also do for some of us who have specialized experience and skills: we may find ourselves not working for a single company, but instead specializing deeply in one particular area using generative AI and then working for multiple companies doing that. Because if you match deep specialization and experience with generative [00:24:00] AI, you're going to have a lot of power to get a lot of really great things done. But within the corporation, where you have multiple pieces and the idea is that the pieces work together, quite frankly, even if the pieces are bored or doing repetitive work, as long as the system works, that's one thing. Generative AI is going to change that. So we're not going to have the same system. Now, where that's going to lead, I don't know. I gave one example of what it could be, but I don't know.
Sara: It brings me back to your original point that this is being done to us, and that it is also possible, instead of sitting on the sidelines, for folks who are concerned and thinking about these things: A, to get involved. And I keep hoping, B, to see alternatives to the mainstream or mass-commercialized approaches. That's my early internet showing: I think there can be alternatives. My favorite social media platform is Mastodon. I run my own server. It's [00:25:00] early-internet-like. It's very light on commercialization. And so those non-commercial or less commercial options are out there, if we build them.
Dvorah: Uh-huh.
Sara: This is tough because I'm springing it on you and it's brand new. The news is all about DeepSeek. Have you looked at DeepSeek? Do you have any point of view on it yet, or is it too soon to say?
Dvorah: I've looked at it. I tried it through another piece of software, not through DeepSeek itself, that gave me access to the model. I liked the reasoning that it went through; it was nice to see that reasoning, because it helps you catch hallucinations. You can say, "Aha! That's where it went wrong," and you can come back. So I think that part is quite good. I think what DeepSeek shows is that it is possible to have quite good models which can be released as open source and run on a variety of platforms, which I believe could actually lead to better specialization. That's my feeling. My feeling is that with an open-source model like [00:26:00] DeepSeek, a small business owner could take their data and either fine-tune the model or use something called RAG, retrieval-augmented generation, which is basically taking all your data and putting it into a format that the AI can easily access, right? A small business owner could take one of these open-source models, and these are hosted in various places. You could even host it yourself. You could make a copy of the model and run it yourself if you wanted to, and then you completely control what's going on with your data. That, I think, offers a really big chance for democratization, because most small to medium-sized businesses do not need the latest and greatest in AI. What they need is AI that works with their data, that is set up to work with their data and their special sauce and all their specialties, to give them that big boost. Now, small businesses in the U.S. have not been growing in terms of their share of GDP. And it's not because they haven't been growing; it's because big companies have been growing faster. [00:27:00] Small businesses are estimated to be maybe 40 to 50 percent as efficient as large companies. In some cases, I've seen lower numbers. 
Generative AI with a model like DeepSeek: don't run it on their platform, bring it into another platform. Lots of ways to do it. This could actually be a great business. You could have your own thing where it could be set up for you. You run it with your data and your processes. And then that is the kind of thing that could allow small businesses to actually leap ahead of large businesses, because small businesses are flexible. Large businesses have this giant system, and they have to be careful. The system has to keep going. If you break part of the system, the whole thing falls down. Small businesses can get everyone together and actually make this change. And then, in my opinion, they could actually leap ahead of the big businesses, become more efficient, but also offer more personalization for their customers, because they have access to places like Mastodon. They have access to relationships that they make because they're more people-oriented. In [00:28:00] my experience, small businesses are more people-oriented. So that plus generative AI could really enable them to grow at a really great rate and outdo the big businesses in a lot of areas. So I actually see this as a mark of something great, something that could really help smaller companies get a leap ahead.
Sara: It's really cool. I hadn't thought of it that way. And I have had an increasing number of my intake calls start with, "I Googled. I saw you on Google. I asked ChatGPT for an OKR expert, and it recommended you." I'm like, I never would have thought that that would be an inbound method, but it is. If people are using it that way, awesome. I'm also hopeful. I was excited to see DeepSeek happen with the hope that it is slightly more environmentally responsible. If we can have models that run more efficiently, then we can [00:29:00] do a little less damage from an environmental perspective in terms of the resources needed to run these models. Because they just vacuum everything up: the resources to operate as well as content. So, before we make the pivot to talking about practical application, is there anything I should have asked you, or that you want me to ask you, that I haven't?
Dvorah: We touched on this briefly, but I would like to get back to the question of what we can do as individuals. So in my opinion, as individuals, we should educate ourselves on how these models are being trained. There is lots of information available out there about what's going on. There are lots of news stories. If you find a great source that you trust, you can continue to review that source, but there are different ways to get this information, and I suggest that we each find a way to get the information and do it. Now, I do it because I'm a geek, and I like these things. But also, I'm doing it because people come to me with questions, and I also come to others with questions. And so we want to be fully [00:30:00] informed. The second point is not to shy away from it. So some folks' response is like, "Well, we should just shut it all down. We should shy away from it, and we should stop it." I'm not a lawyer, but I honestly don't see the Supreme Court shutting down this business. I'm a U.S. patent agent, and I have seen patent decisions which did not make a whole lot of sense in terms of the law but were done to preserve an industry. So people do pay attention to the industry. It's not just what the law or what logic says. So I just honestly don't think that this whole industry is going to get shut down. So the question is then, okay, if we assume that it's not going to get shut down, what are we going to do about it? And that is where we can join together in groups and understand how it works. You can join various nonprofits. I'm a member of ForHumanity. We do AI ethics and guidance. We work with the European Union, the Austrian government, and the UK government, but there are lots of them out there. Find one, join it, join with others to make your voice heard, and make certain that others [00:31:00] know how you feel. If a company is using AI in a way you don't like, let them know.
On social media, preferably, so that others can jump in and say, "You know, I don't like that either," and so you have the force of numbers. Instead of trying to sweep the AI under the rug or hoping it goes away, neither of which is likely to happen, in my opinion, it is very important for us to take a stand, to get together, to figure out how we as individuals want AI to work, to talk to brands and talk to our companies, but also to think about how we can benefit our employees and our clients by using AI. Because we can do that. And I think we also have an obligation to at least consider that, as solopreneurs or small company owners, or even as employees in small to medium-sized or even large corporate businesses. We do have that responsibility, in my opinion. So this means we each need to take action.
Sara: So, that is a perfect segue into what will be our next episode. Dvorah is going to come back for our next episode, and we're going to [00:32:00] talk about actually using generative AI, what we do with it, and the benefits that Dvorah has seen in her business and work.
That wraps up part one of my conversation with Dvorah Graeser. Join us in our next episode for part two, where we'll dive into practical applications of AI tools and how to shift from anxiety to agency in using them. As always, you can find episode links and resources at findrc.co/pod. If you enjoyed this episode, please share it with other Thinkydoers in your world. Your shares really help.
Sara: All right, friends, that's it for today.
Stay in the loop with everything going on around here by visiting findrc.co/newsletter and joining my mailing list.
Got questions? My email addresses are too hard to spell, so visit findrc.co/contact and shoot me a note that way. You'll also find me at @saralobkovich on most of your favorite social media platforms.
For today's show notes, visit findrc.co/thinkydoers. If there's someone you'd like featured on this podcast, drop me a note. And if you know other Thinkydoers who'd benefit from this [00:51:00] episode, please share. Your referrals, your word of mouth, and your reviews are much appreciated. I'm looking forward to the questions this episode sparks for you, and I look forward to seeing you next time.
A promotional graphic for Episode 38 of the Thinkydoers podcast. The image is split into two halves. On the left, host Sara Lobkovich, a woman with shoulder-length curly brown hair, smiles while wearing a dark blazer over a teal blouse, standing against a brick wall background. On the right, guest Dvorah Graeser, a woman with short silver hair, smiles wearing a bright pink blazer against a dark blue background. The title 'Generative AI and the Future of Work for Thinkydoers' is displayed in bold, artistic fonts in the center. The Thinkydoers logo is placed at the top, slightly tilted. The text 'host Sara Lobkovich' and 'with guest Dvorah Graeser' appear in script fonts near their respective photos. 'EP. 38' is displayed in the top right corner.