Artificial Intelligence – Vivaldi https://vivaldigroup.com/en Writing the Next Chapter in Business and Brands Tue, 27 Jun 2023 22:00:39 +0000

AI in Retail – Is the Value Real or Artificial? https://vivaldigroup.com/en/blogs/ai-retail-value-real-artificial/ Tue, 27 Jun 2023 14:46:37 +0000

The post AI in Retail – Is the Value Real or Artificial? appeared first on Vivaldi.

Conversations and analysis around AI are omnipresent; however, several important elements are being missed. First, most arguments focus on the debate between AI optimists (see Andreessen Horowitz’s world-saving view) and AI pessimists. There is so much focus on whether AI can replace humans, whether it is good enough (according to the genius hypothesis, it can equal or even raise the average in intelligence, creativity, and creative output, but will never equal genius), and whether humans can adapt. The real question, however, is not just how AI can be used to increase productivity. The larger questions are what new ways it can be used to create value, and what business models could capture that value.

In many industries, AI has the potential to fundamentally reshape the value chain. Retail is no exception. There are many problems that AI could potentially help solve by:

    1. Picking up on changes in consumer demand patterns – on a category, brand and product level. AI can aggregate data from multiple sources to identify patterns and help manage inventory to meet rising demand.
    2. Simulating impact to weigh different strategic initiatives – assessing impact on revenues of different merchandising strategies, promotions, shopper marketing, performance marketing etc.
    3. Enhancing customer relationships and increasing cross-selling by predicting the Next Best Purchase – the most likely next purchase, given patterns from both internal and external data sources brought together.
    4. Sharpening targeting through a Personalized Shopping Assistant (in-store and online) assisting with search, best fit, suggestions etc.
    5. Optimizing customer service through dynamic scripts, face and voice recognition, predictive equipment maintenance etc.
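As a deliberately minimal sketch of the Next Best Purchase idea (item 3 above), the snippet below scores candidate products by how often they have co-occurred in past baskets with items the customer already bought. All data, names, and the co-occurrence heuristic are illustrative assumptions; a real system would combine far richer internal and external signals.

```python
from collections import Counter, defaultdict

def build_co_purchase_counts(baskets):
    """Count how often each pair of products appears in the same basket."""
    co_counts = defaultdict(Counter)
    for basket in baskets:
        for item in basket:
            for other in basket:
                if other != item:
                    co_counts[item][other] += 1
    return co_counts

def next_best_purchase(history, co_counts, k=3):
    """Rank candidate products by co-purchase frequency with past purchases."""
    scores = Counter()
    for item in history:
        scores.update(co_counts.get(item, Counter()))
    for item in history:
        scores.pop(item, None)  # don't recommend what was already bought
    return [product for product, _ in scores.most_common(k)]

# Hypothetical transaction data, for illustration only
baskets = [
    ["pasta", "tomato sauce", "parmesan"],
    ["pasta", "tomato sauce", "red wine"],
    ["tomato sauce", "parmesan", "basil"],
]
co_counts = build_co_purchase_counts(baskets)
print(next_best_purchase(["pasta"], co_counts, k=1))  # → ['tomato sauce']
```

In practice the same idea scales up to matrix-factorization or sequence models trained on the kinds of merged data sources the list describes.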

However, the real value of AI goes beyond revamping current activities – ultimately, it will be in making entirely new retail models possible, where the very relationship with the consumer is reinvented. A few examples, among many others, could be:

Creating worlds and experiences – beyond transactions
Imagine planning a dinner party. You put in a theme and receive a dinner planned by Gordon Ramsay. As you select recipes, ingredients in the right quantity for your party automatically appear in your shopping cart – and a cooking flow is set for you with videos, instructions and timely reminders. Table settings for your theme dinner are delivered right on time with suggested wine pairings. AI can help create a world of experiences – where value is derived not just from the purchase transaction, but from the experience and interactions that these enable.

Lifestyle ecosystems – beyond the category
Imagine putting in the occasion for your next office party and receiving outfit recommendations, removing the traditional consumer angst of not knowing what to wear. Imagine your style choices then connecting you to a community of like-minded aficionados (recent years have seen the formation of many such lifestyle communities, from cottagecore to farm chic to the recently reinvented momcore). This lifestyle community would bring together an ecosystem of providers, from clubs to sports, leisure and entertainment, supported by a stream of “endless” content. These ecosystems will not just foster a stronger sense of belonging and loyalty – each interaction will enhance the value of interactions to come.

Co-creation: sharing creativity
In the traditional model, consumers are subjects – on the receiving end of the brands’ creativity, branding and advertising. Recently, brands have been exploring reversing that pattern – putting consumers themselves in charge of advertising the brand – from Apple featuring consumers’ photography in its ads to Yeti’s UGC campaign. With the democratization of creativity brought about by AI, this can be taken one step further – with much more ease, consumers will be able to ideate, script, design and even produce an ad – and disseminate it to their networks, creating the sort of viral effects that drive exponential growth.

Every retailer knows well that beyond satisfying consumers’ functional needs, there is a large market in meeting consumers’ deeper needs of feeling seen, taking control of their narrative and connecting with others. Through data and matching algorithms, AI can give consumers the ultimate control over their narrative – expressing their values, personality and lifestyle choices. In such an ecosystem, value would be created through much more than transactions – through the interactions that will create data that will ultimately enhance the value of the ecosystem at large.

The ultimate question is how these retail models will lead to new ways to increase value – a deeper customer relationship leading to higher loyalty; better targeting leading to increased spend; and belonging to a community leading to increased purchase frequency. This is when the AI opportunity will become real.

Enhancing AI: Why New Technology Must Include Diversity https://vivaldigroup.com/en/blogs/enhancing-ai-new-technology-must-include-diversity/ Tue, 04 Apr 2023 17:30:07 +0000

The post Enhancing AI: Why New Technology Must Include Diversity appeared first on Vivaldi.

Imagine if someone who was wrongfully convicted of a crime was asked to design the algorithm used by police to convict criminals. Imagine if a young person, newly immigrated to the US, was asked to design the algorithm used for admissions at top US universities. Imagine if populations historically marginalized from the use of your products were asked to design your products. The chances that the outputs from these algorithms would replicate the outputs they produce today are slim to none. That is in many ways what AI and machine learning offer — but rather than having systems that embrace diversity of perspective and opinion, if we aren’t vigilant, we can end up with systems that enforce existing biases at best and actively create brand-new biases at worst.

Our society has managed to steadily progress despite the myriad issues around diversity embedded in it, and some might argue that slow progress coupled with the many benefits offered by AI is good enough. I obviously disagree. While there are more than enough moral and ethical reasons for diversity, the most salient fact is that at the end of the day, DEI doesn’t just mean Diversity, Equity, and Inclusion; I believe it should also stand for Diversity Equals Income. Every time a company uses an algorithm that alienates a user, diminishes an outlier in order to fit a model, tamps down diversity when making a hiring decision, or works in a diversity-blind fashion as opposed to a pro-diversity manner, dollars are being left on the table — dollars that few businesses can afford to spare.

The first time the potentially negative interaction of technology and race dawned on me was back in the late 2000s and early 2010s, when I – and many of my black friends – found ourselves unable to be properly identified by the face recognition software used by Facebook. As we soon learned, who was in the room doing the programming mattered. The programmers, the majority of whom did not look like us, trained the machines on faces that looked like theirs and not ours, leaving those of us with darker complexions as mysteries unable to be identified by computerized eyes. One would think that as the years progressed, things would have gotten better, but a 2019 study conducted by the National Institute of Standards and Technology (NIST) found that some facial recognition algorithms had error rates that were up to 100 times higher for African Americans than for Caucasians.

Sadly, this bias isn’t just found in visual data. A 2019 study by the National Bureau of Economic Research found that algorithms used by credit scoring companies tended to underestimate the creditworthiness of African American and Hispanic borrowers. These algorithms routinely gave these borrowers lower credit scores and higher interest rates.

What does this have to do with diversity? AI has also ushered us into a new age for HR. All across the world, companies are using AI to screen resumes for potential hires. The issue is that AI-powered hiring systems have been found to discriminate against women and minorities. A study by the University of Cambridge found that an AI-powered recruitment tool developed by Amazon consistently downgraded resumes that contained words such as “women,” “female,” and “gender,” and as a result, candidates with female-sounding names were less likely to be selected for interviews.

There are two problems here — both interconnected, difficult to solve, and badly in need of being addressed. First, in all these situations, the training set was flawed. If a system is trained on biased information, it will generate and propagate biased output. In the case of the recruitment tool, it had been trained on resumes submitted to the company over a 10-year period, most of which were from male applicants (who were chosen, in part, due to the systemic bias of the humans in HR). Second, those in charge of these systems didn’t value or consider diversity enough to actually encode it in the system.

Like a child learning what is right and wrong or how to behave, an AI needs to be taught. To properly teach it how to deal with the myriad different situations it may encounter, organizations must expose the AI to past examples of right and wrong (or success and failure). These past examples can be redolent with bias against women, immigrants, and people with physical disabilities or neurodivergence, as well as racial and ethnic groups. Currently, since the complexity of the AI’s computations is so high that the system is virtually a black box, the best way to check whether it is biased is to test both the input and the output. It is critical to test for any sort of sampling bias, in terms of a specific characteristic, geography, or demographic marker, in what is fed into the system, as well as for unwanted correlations in what comes out of the algorithm. The issue is that this extra step, while relatively simple, is time consuming, and time is money. That being said, a fair question to ask is — is this enough?
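One way to make that input/output testing concrete is a simple selection-rate audit: compute the positive-outcome rate per demographic group and flag any group falling below a chosen fraction of the best-performing group’s rate (the 0.8 threshold below echoes the “four-fifths rule” from US hiring guidance). The records and threshold are illustrative assumptions, not a complete fairness methodology.

```python
def selection_rates(records, group_key, outcome_key):
    """Positive-outcome rate per demographic group."""
    totals, positives = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if r[outcome_key] else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose rate falls below threshold * the best group's rate."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical screening outcomes, for illustration only
records = [
    {"group": "A", "selected": True}, {"group": "A", "selected": True},
    {"group": "A", "selected": True}, {"group": "A", "selected": False},
    {"group": "B", "selected": True}, {"group": "B", "selected": False},
    {"group": "B", "selected": False}, {"group": "B", "selected": False},
]
rates = selection_rates(records, "group", "selected")  # A: 0.75, B: 0.25
print(disparate_impact_flags(rates))  # → {'A': False, 'B': True}
```

The same rate comparison can be run on the training data (the input side) to catch sampling bias before a model ever sees it.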

In our social discourse, it’s generally understood that simply being colorblind (for example) is insufficient in light of the various systemic structures at play in our society. In order to achieve some sort of equity, “color bravery” — in other words, a more proactive stance on addressing racial disparities — is necessary. So then, if in other circles simply being colorblind is insufficient, why in this circle would being unbiased be sufficient? As I’ve said time and again, Diversity, Equity, and Inclusion is important, but not just because it is morally or humanistically right; in business (as I also said earlier) Diversity Equals Income.

To give a few examples:

Studies have shown that diverse teams can bring a wider range of perspectives and experiences to the table, leading to more creative problem-solving and better decision-making. They can be more effective in understanding and serving a diverse customer base, and they can be more attractive to top talent, which can lead to higher productivity and innovation. AI provides businesses the opportunity not only to ensure that their hiring practices aren’t biased, but also that their staff has the diversity needed to produce the best goods and services for their ever more diverse consumers.

The organizations that are relying solely on AI to screen resumes, sift through applications for schools, or make decisions about credit, etc. are making a grave mistake. They are mistaking the hammer for the carpenter and the car for the driver. This is actually very similar to a problem I occasionally run into while leading product ideation workshops. Clients will get so invested in the exact rules and procedures of an exercise I’ve devised to help unlock their creativity that they’ll literally get upset when I throw out the rules and start capturing the ideas that start pouring out. They often want to hold their tongue and risk losing their idea, rather than sacrifice the well-laid-out rules of the exercise — that is, until I remind them that the exercise is just a tool, and what really matters is the idea.

AI is merely a tool. Yes, it is a powerful tool, but it is still only a tool — one of many tools that we as businesspeople, members of society, and human beings have at our disposal. We need to remember that the goal needs to remain one of creating a more diverse, equitable, and inclusive business environment — so we can create better products, services, and experiences for our consumers. If we don’t, we are leaving money on the table, we are leaving our consumers unsatisfied, we are leaving our companies without the best talent, and we are leaving ourselves exposed to the first competitor who is smart enough to capitalize on our blind spot.

The smartest companies that I’ve worked with are the ones that define their goals first and find the tools to achieve those goals second — not the other way around. Marshall McLuhan once said, “We shape our tools, and thereafter our tools shape us.” This is one situation where we cannot, and must not, allow our tools to shape us if we hope to continue forward to a more diverse future, let alone a more profitable one for our businesses.

Cerrone Lundy is a Director at Vivaldi. He works with organizations to better understand their client needs and create products, services, experiences and more. 

Working With AI: An Interview with Stanford’s Jeff Hancock https://vivaldigroup.com/en/blogs/working-ai-interview-stanfords-jeff-hancock/ Tue, 21 Mar 2023 17:07:24 +0000

The post Working With AI: An Interview with Stanford’s Jeff Hancock appeared first on Vivaldi.

Since ChatGPT debuted in November 2022, many conversations have been sparked in our offices and worldwide around the role of artificial intelligence (AI) — from how we use it, to what we use it for, to how it will reshape our work and daily lives.

On March 14, 2023, OpenAI released GPT-4, a new version of the natural language processing technology, which has the ability to handle images as well as text — though this is currently only available to paid subscribers. This advanced system has even more wide-ranging abilities and increased accuracy, with the company reporting that it scored in the 90th percentile on the bar exam.

As the technology changes rapidly, Vivaldi spoke with Jeff Hancock, Founding Director of the Stanford Social Media Lab and Harry and Norman Chandler Professor of Communication at Stanford University, to get a perspective on AI’s continuing evolution and how it may be used — for bad and good.

---

Vivaldi: Currently there’s so much news about ChatGPT and speculation about the future — how, within the last few years, has AI already changed some of the ways that we work or interact?

Jeff Hancock: Spellcheck came out of the Stanford AI lab back in the ‘80s. Now nobody thinks of spellcheck as interesting AI, but it did change how we work. Autocomplete, autocorrect — these things changed the way we communicate with each other.

In your research, you’ve found that humans can no longer tell the difference between something written by a human and something written by a machine — what are the potential dangers of that and the potential opportunities?

I think the number one danger is around trust; that these technologies can undermine trust in each other, in anything online, in communication. We saw in one of our previous papers that as soon as we told participants that some of the things we were showing them might be AI, they got really suspicious and distrusted everything.

A positive way it could be done is it just gets incorporated like spellcheck did and we don’t really think any more about it. We improve our work lives, we’re able to do more, or do the same amount in way less time and get to spend more time with our family. I think what’s going to matter is actually less of the tech and more about what we say is okay; the social contract and the social norms that go into when these things can and should be used. There’s a lot of work still to be done there.

With regard to the trust piece and disclosing the use of AI — are there “best practices” around this, or is that still in development?

I think best practices are emerging. I don’t think it’s going to be that AI gets disclosed every time it’s used, that would be like indicating that spellcheck was used, but it will come to be understood when these things are not used. I think there are a lot of conversations in society about when these things can and can’t be used or how to attribute it when it is.

It’s kind of changing every day.

As are people’s beliefs. In December it was a lot of wonder, and excitement, and then there was some fear like, wow, they’re so good, they could take our jobs, and now it’s like these things are kind of crazy. Our beliefs about them change, and then as our beliefs change, even though the technology hasn’t, that changes our interactions and what we think is reasonable and ethical and effective. There’s a lot on the human side that we’re still working out.

What fears do you think are unfounded, and what are the areas that we should be more concerned about?

It’s not going to take over most people’s jobs. Certainly not in the near term. People will find out what it’s really good for, which is producing first drafts or editing drafts in more constrained contexts. It will help you write, but you can’t rely on it. You have to work with it. They’re finding that connecting it with a search engine is less trivial than they thought, and accuracy is important, and they don’t seem to work super well. That’s not surprising. Classification is going to be something that will be used a lot, but it hasn’t been talked about very much yet.

Jeff Hancock, Founding Director of the Stanford Social Media Lab and Harry and Norman Chandler Professor of Communication at Stanford University

There’s been so much talk about different professions being disrupted — how would an industry like consulting be disrupted by this?

Consulting in some ways is about asking really good questions. I think GPT is really great at helping think out what the questions are. If you’re a consultant, you say okay, let’s try to understand whether this practice, whatever that may be, is good. There’s a way for GPT to say okay, here’s what the outcomes are in this context and now here’s what this company is doing. For Company A and Company B, which do we think will lead to the best outcome? Whether it’s right or not, it will certainly help you think out why it’s answering the way it’s answering, based on its huge amount of knowledge about the way things work. I’d be excited, if I were in consulting, to start using this tool as a way to gain insights.

You also have a deep expertise in social media. So much of the early online and social media world has been driven by paid advertising — what could that potentially mean for a system like ChatGPT or other AI programs?

A really short answer is, I don’t know. We made decisions about social media, where we wanted it for free, and to do that we had to use advertising. It led to all kinds of issues, especially around how social media is optimized for engagement, which is proving to be problematic, with unintended consequences. The question here will be, who is going to pay for this and how? If you don’t make some of these tools freely available, then I think there’s a risk of inequality being worsened. So a model, hopefully not advertising just by default, but a model that allows everybody to get access to most of these tools, would be preferred I think.

Social media companies have had some legal protections with regard to what gets posted on their platforms. Will that also be adopted for AI usage, or will companies be held liable if something happens because of what’s said by an AI?

With social media the rules are actually quite clear, even though there are two challenges to the laws now. They are not liable for most content that’s posted, they have some responsibilities, and there’s protected free speech. Here it’s not clear. The company gets broad free speech protection, but they don’t have the same kind of Section 230 protections that platforms do. If it produces something that leads to harm — initially if you asked how to commit suicide, it would tell you. If that happened, then who is responsible? Is it the builder of the AI, the person who used it to do that, is it the tool? I think these are all important questions.

What are you most excited to see developing in the future?

I’m really interested in trust and wellbeing, and how these tools can be used to enhance rather than undermine. There’s huge space in counseling, coaching, mentorship, where there seems to be a pretty serious lack – we don’t have enough counselors or coaches or mentors for people who need them. When it comes to trust, there are so many open questions. When can you trust a machine to help you out? Will the concerns around synthetic media lead us to be like, I don’t trust anyone until we meet in person — and thereby really dial up how much people want to meet face to face? That would be pretty cool.

It’s an interesting consequence if it pushes the other direction, into the real world.

Right, if the advent of AI means that everything online will be viewed as fiction. Check back in a decade.

---

What are your thoughts on the future of AI? Tell us how your business is utilizing AI: hello@vivaldigroup.com

How Should Creatives Use ChatGPT? https://vivaldigroup.com/en/blogs/creatives-use-chatgpt/ Tue, 14 Feb 2023 16:52:23 +0000

The post How Should Creatives Use ChatGPT? appeared first on Vivaldi.

Short answer: Do what it can’t… 

 

ChatGPT is changing the game…

…especially in the creative industry. With its unparalleled ability to generate logical reasoning, objective information, and near-perfect output in just seconds, ChatGPT has already made a massive impact on the world. Still, ChatGPT could never replace human creativity; instead, it serves as a tool to amplify it. Just like the calculator, the personal computer, and even the search engine, the purpose of ChatGPT is to enhance the humans who use it.

The question is, how do we use it?

Creatives are trained to find a balance between art and science, input and output, vision and execution. But now that’s changing. Thanks to ChatGPT, creatives must now shift their approach to harness the virtually infinite capabilities of AI to meet their specific needs. This means we must tap into what makes us unique and use it in conjunction with ChatGPT. In other words, we must do what ChatGPT can’t — we must be human.

1. We must be childish

Children are just learning the ways of the world. This gives them an innate sense of wonder, curiosity, and playfulness that can lead to open-minded thinking and innovative ideas. ChatGPT, on the other hand, is bound to intelligently follow the rules and directions set by 10% of the internet. It may be great at following rules and parameters, but that inherently limits the ideas it can flesh out.

Example prompts:

  • “If there’s such a thing as a black hole, there must be a white hole in existence, too. Can you define it for me?”
  • “Make up a story about a future where McDonald’s enters the fashion industry.”
  • “Write a rough draft of a comedy script for a laundry detergent that cleans your mind of dirty thoughts.”

2. We must be irrational

Making irrational decisions is fundamental to human nature, allowing us to think beyond the boundaries of logic and reason. Irrationality gives rise to our emotions, desires, and passions, and allows us to create, imagine, and experience things that don’t necessarily have a rational explanation. It allows for serendipitous discoveries, and encourages individuals to take bigger risks, dismissing the head and following the heart, even when the outcome is uncertain. With ChatGPT, we can input our most irrational ideas and trust the AI to build on them by attempting to create a logical path to execution.

Example prompts:

  • “What if our feet were our hands and our hands were our feet? How would the shoes and gloves industries be affected?”
  • “Give me a concept writeup for an idea called: Uber Bathroom.”
  • “What could happen if someone traveled back in time to warn themselves not to build a time machine?”

3. We must be imperfect

Perfect is the enemy of good. So, embrace clunky copy, half-baked ideas, and convoluted brain dumps as sources of inspiration for our ChatGPT inputs. They can be molded into refined expressions of ourselves, developed into more original ideas, and used to foster a greater willingness to take risks and experiment.

We are constantly learning, growing, and changing. And by trusting ChatGPT with our imperfect ideas along the way, we can create better work and develop ourselves, faster.

Example prompts:

  • “Can you make this copy flow better? [Attach your clunky copy]”
  • “The concept below is half-baked, and I need you to simplify it. [Attach your convoluted concept]”
  • “Can you refine this brain dump of random thoughts into a linear narrative? [Attach your brain dump]”

4. We must be subjective

ChatGPT is a wealth of objective and (usually) accurate information, but it lacks the personal touch people come to expect from their favorite human creators. This personal touch is a representation of our subjective take on our life experiences. So, when we use ChatGPT to cut our work time in half, we should use the time we saved to inject our own unique voice into the final product. By embracing subjectivity in creative work, we can tap into our own voice and deliver ideas that are personal, meaningful, and impactful. We should keep this in mind during the editing process, after we nail the prompt.

5. We must be slow

The idea generation stage should be fast to allow for free-flowing thoughts and quick sketches, while the refinement stage should be slow and more deliberate. After generating output with ChatGPT at rapid speed, we should take more time to think deeply and reflect on the ideas, shaping them until they fit the context in which our audience will experience them. For instance, if we’re writing a song about summer, before we finalize it we should make sure to listen to it while cruising down the highway with friends, windows down, basking in the warm weather.

This may take extra time, but it will lead to more meaningful work. It will bring out empathy, intuition, and emotional depth that ChatGPT cannot create on its own, which is often what makes creative work stand out, resonate with others, and become timeless.

All in all, we must be human.

ChatGPT will always be better than humans at generating output. And the quality of that output will always depend on the quality of the input, which, of course, is controlled by humans. That’s why, in order to stay relevant, creative people need to pivot.

Instead of spending countless hours generating output, our time will be better spent exploring the world, collecting experiences, reflecting on them, and using them as reference points and inputs for ChatGPT. Then we should spend time sculpting its output until we believe it’s good enough for the audience.

This means we’re not merely creatives anymore; we’re much more. We are directors who use our creative taste as our greatest superpower. And now, we all have the most productive junior writer, researcher, idea generator, and coder working beneath us. The question is, how will we direct them to realize our vision?

It all starts with embracing what makes us human.

The End of the Beginning for AI in the Workplace https://vivaldigroup.com/en/blogs/ai-futurework/ Thu, 19 Jan 2023 12:45:52 +0000

The post The End of the Beginning for AI in the Workplace appeared first on Vivaldi.

By now you may be a bit burnt out on the ubiquitous artificial intelligence buzz brought about by the launch of ChatGPT. Most of this media assault has focused on the cool-factor and the amazing but essentially useless things that AI can create for your social media feed. I want to spend a moment talking about what this means for the activity that takes up most of our waking hours – our work.

I think we have now reached, as Churchill said, “the end of the beginning” when it comes to AI in the workplace. To relate how I have reached this conclusion, I would like to share with you my personal journey with AI in three vignettes.

2011: Watson’s AI Rules Jeopardy

About a decade ago I helped IBM reposition itself for a “Smarter Planet,” which was about the possibilities presented by a world becoming instrumented, interconnected, and intelligent. A great example of these three I’s was IBM’s emerging “Watson” technology, which represented a real breakthrough on the road to artificial intelligence. And what better forum for Watson’s public debut than Jeopardy — the pinnacle of human intelligence (at least from a game show perspective)?

Watson went on to crush its opponents, including Jeopardy’s winningest champ and future host, Ken Jennings. Despite this success, Watson had a few stumbles along the way, including this Final Jeopardy round exchange:

Category: U.S. Cities
Question: “Its largest airport is named for a World War II hero;
its second largest, for a World War II battle.”
Watson’s Answer: “What is Toronto”

While anyone could have missed the correct answer (Chicago), it’s unlikely that you would have guessed “Toronto” when the category was “U.S. Cities.” This blunder was later attributed to Watson misinterpreting context, such as the fact that the Toronto Blue Jays compete in the “American” league.

A noble effort, but not quite yet the AI the movies had promised us. This made me appreciate how difficult it would be to reach true artificial intelligence and practically apply it to the work that most people do.

2017: The Future of Work

Five years ago, I led an engagement for a client on the “future of work.” The company was, and is, a business-services outsourcing provider, so it was rightly concerned about trends that would reduce demand for the labor it monetizes (unlike most companies, where the incentive is often to reduce labor). Over the prior decade, I had helped the company migrate from pure labor arbitrage to a more value-added mix of people within digital workflows (e.g., insurance claims processing, accounts payable), but “human” work remained at the core of its business model.

Even then the writing was on the wall that the nature of “work” was changing. What was called “software robotics” was beginning to augment or replace repetitive office tasks, and early machine-learning use cases were taking hold. Our quick engagement focused on projecting the implications of technology and labor trends to identify new sustainable business models. The effort was guided by two principles about the dividing line between humans and technology in the future of work.

Principle 1:
Human labor will remain where there is not a practical business case for technology.
Implication:
Non-standard labor will remain long after the technical means to replace it exists.

The notion of non-standard labor involves work that would not be “practical” to eliminate. Applying this principle provocatively, being a plumber may be a more enduring long-term career than a surgical cardiologist. If you live in a house built in 1904 like I do, you will appreciate how “non-standard” plumbing can be, but you pretty much expect a heart to be located in the same place consistently. Just as zero-defect spot-welding robots replaced humans on auto assembly lines, it’s not a question of “if” but “when” AI-driven robotics will be the norm for many routine surgeries. For our client’s outsourcing business, this led to the creation of a new Corporate Campus Logistics service that combined non-standard labor activities with digital logistics workflows.

Principle 2:
Human labor will remain where creative problem solving is at the core of the activity.

Implication:
Knowledge work where there is a multiplicity of potential solutions is most survivable.

While the first principle addressed labor with “head and hands,” most of the folks reading this blog are pure knowledge workers. Well, the future is gaining on us too. Since it’s okay to pick on lawyers, let’s consider a tax attorney billing at $500 an hour. Attorneys operate within a defined tax code (i.e., programmable “business rules”), and the high cost of this service creates a strong business case for AI intervention. Perhaps surprisingly, the one thing that will preserve this profession for at least the foreseeable future is creativity: subjectively interpreting the tax code to a client’s benefit. For our client, this principle led to exploring new outsourced marketing services that could be delivered efficiently through on-shore and off-shore solutions.

Let’s carry forward these principles around what’s practical versus possible, as we look at the state of AI today.

2022: AI at the Tipping Point

I started experimenting with OpenAI’s “playground” a few weeks before they launched ChatGPT in the fall of last year. I wanted to see how far things had come along those dimensions of “practical” and “possible.” What I found suggested that AI had finally crossed the tipping point from the lab into our daily work lives.

“What is Practical”

I began by using the algorithm to categorize research verbatims into themes and to provide a summary with examples. A task that might take a junior analyst a couple of hours was completed instantly, and the output was usable without any meaningful edits. My colleague entered a few simple bullet points and received full prose for a conference invitation. These are not tasks that could yet replace whole types of “jobs,” but they could immediately reduce the volume of “work.”

Because the algorithm takes natural-language inputs, it can tackle any request framed in a conversational sentence or two. The benefits of AI are now accessible to everyone, with no need for a specialist gatekeeper. We are a few entrepreneurial applications away from reducing the dumb work that bogs down enterprises and demotivates staff. Think how much time we waste versioning spreadsheets and PowerPoint presentations alone.

Here are 5 very practical things you can try today:

  1. Generate a first draft. There are always things we have a tough time getting around to writing, so use this experiment to tackle one of those. Just enter a few bullets of content and simple instructions and see what it comes back with. If you don’t like what you get, just hit refresh and get another take on it.
  2. Make something better. As good as AI is with first drafts, it excels at the final clean-up of a document. Beyond fixing grammar, if you would like to say the same thing in half the words, no problem. Turn paragraphs into bullets, bullets into long-form prose, first person into third, etc.
  3. Summarize a meeting. Before Teams/Zoom recordings and transcriptions, managers used to get helpful one-page summaries of group discussions. Instead of slogging through the transcript of a meeting you missed, let AI summarize it for you.
  4. Create a value proposition. Turn technical product specs into a clear description of what something does – or even into a compelling new customer-facing value-prop statement.
  5. Empower an intern. While I would encourage everyone to try it for themselves, you can also ask an intern to experiment for you. In less than an afternoon they should be able to come up with three time-saving ideas for reducing the tedious tasks bogging down your office.
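The first two experiments above boil down to sending a short natural-language prompt to a text-completion API. Here is a minimal sketch of that pattern using OpenAI’s documented chat-completions endpoint; the model name, prompt wording, and `draft_prompt` helper are illustrative assumptions, not anything from my original experiments:

```python
import json
import os
import urllib.request

def draft_prompt(bullets, instruction="Turn these notes into a short, polished invitation:"):
    """Build a conversational prompt from a few simple bullet points."""
    return instruction + "\n" + "\n".join(f"- {b}" for b in bullets)

prompt = draft_prompt([
    "team offsite moved to March 14",
    "venue: the Hudson Room",
    "please RSVP by Friday",
])

# Only call the API if a key is configured; the endpoint and payload
# follow OpenAI's documented chat-completions format.
if os.environ.get("OPENAI_API_KEY"):
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps({
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```

The same pattern covers summarizing a transcript or rewriting a spec for a value proposition: only the instruction line and the pasted content change.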


“What is Possible”

Now let’s look at what is possible with AI today. Specifically, how far has AI come with the type of “creative” problem solving that only a few years ago I thought would save us humans from our AI overlords.

One of my experiments was to have AI write the dialogue for the final scene of a movie or play, providing it with only a simple plot summary and character descriptions. I disguised just enough of the inputs so that the algorithm wouldn’t be tempted to cheat and consult the source material. The results were, to me at least, quite amazing. To be clear, it wasn’t Shaw or O’Neill, but I now understood how it was possible for Lifetime to churn out a couple dozen movies every year.

Occasionally we still see some early Watson context-confusion creep into the output – particularly with image generation. When a different algorithm was asked to generate candid shots from a house party that never happened, it inserted an extra finger or two on a hand holding a camera. Forgivable for something without fingers and easily fixed.

Imagine how life-saving this AI image generation will become – at least “work life”-saving. Getty Images alone has over 80 million images, which generate over 2.3 billion searches annually. Let’s assume it takes an average of three minutes to find the ideal image (I’ve sadly spent much longer). That adds up to almost 1,800 lifetimes of work lost annually.
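The back-of-the-envelope math behind that figure works out if you assume roughly 64,000 hours in a working lifetime (about 40 hours a week, 48 weeks a year, for 33 years – an illustrative assumption, not a standard statistic):

```python
searches_per_year = 2_300_000_000   # Getty Images searches annually (from the text)
minutes_per_search = 3              # assumed average time to find the right image

total_hours = searches_per_year * minutes_per_search / 60   # 115,000,000 hours

# Illustrative assumption: ~40 hrs/week * 48 weeks/yr * ~33 years = 64,000 hours
hours_per_working_lifetime = 64_000

print(round(total_hours / hours_per_working_lifetime))  # prints 1797: "almost 1,800 lifetimes"
```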

Perhaps confirming that there really is something here, we are already starting to see a backlash to AI’s early forays into the once exclusively human domain of creativity. The rules of some online art competitions are being rewritten after AI-generated images started beating their human competitors (AI does particularly well in fantasy categories, where an extra finger is more easily explained away). When a fan presented singer-songwriter Nick Cave with a song written by ChatGPT in “his” style, he had this to say:

“…it will always be a replication, a kind of burlesque…Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer.”

To better understand where this is all going, I turned to the one source who I trusted to be objective, my new AI collaborator:

Q: Will AI algorithms eventually replace human creativity?

A: No, AI algorithms will never replace human creativity. Artificial intelligence is a tool that can be used to enhance creativity, but it cannot replace the unique and innovative ideas that come from the human brain. AI algorithms can help humans to automate certain creative tasks, but they will not be able to come up with truly creative ideas the way a human can.

Maybe the algorithm was just trying to make us feel better and divert attention from its real goals. But even if AI’s creative glass ceiling is that of a junior copywriter or designer, destined to write the first draft that inspires real “human” creation, that’s still a very big deal.

So, now what?

Yes, we are still a long way off from where this is all going. But we have to acknowledge that things will now move at a much faster pace as we reach “the end of the beginning” of AI in the workplace. We must accept AI as a co-worker who is here to stay (albeit one who won’t try to sell you Girl Scout Cookies).

I’m choosing to view these early breakthroughs in AI as a wakeup call for us humans to up our game. There has long been too much “dumb” work sapping the energy and potential of all our organizations. Now that the means exist to draw a clearer line between work that must be done by humans and work that can and should be done by technology, we should all seize this moment.

Is AI the liberator of human potential, or the inevitable next evolutionary step away from us? When it comes to the future of work, maybe Terminator 2 got it right.

“The future has not been written.
There is no fate but what we make for ourselves.” 

***

PS: If you’re interested in more thinking like this, or would like to share your perspectives, please send me a LinkedIn connection invite.



Chris Halsall is a Senior Partner with Vivaldi, who focuses on reinvention and growth at the intersection of customer, brand and business. Prior to joining Vivaldi, Chris was the co-founder of Ogilvy Consulting, where he was the Global COO and leader of the Growth & Innovation practice. Chris began his consulting career at McKinsey & Company, where he led the Marketing Effectiveness Practice and was the Senior Branding Expert for North America.
