
Working With AI: An Interview with Stanford’s Jeff Hancock

Since ChatGPT debuted in November 2022, it has sparked conversations in our offices and around the world about the role of artificial intelligence (AI): from how we use it, to what we use it for, to how it will reshape our work and daily lives.

On March 14, 2023, OpenAI released GPT-4, a new version of its natural language processing technology that can handle images as well as text, though access is currently limited to paid subscribers. This advanced system has even more wide-ranging abilities and increased accuracy, with the company reporting that it scored in the 90th percentile on the bar exam.

As the technology changes rapidly, Vivaldi spoke with Jeff Hancock, Founding Director of the Stanford Social Media Lab and Harry and Norman Chandler Professor of Communication at Stanford University, to get his perspective on AI’s continuing evolution and how it may be used, for bad and for good.

---

Vivaldi: Currently there’s so much news about ChatGPT and speculation about the future — how, within the last few years, has AI already changed some of the ways that we work or interact?

Jeff Hancock: Spellcheck came out of the Stanford AI lab back in the ‘80s. Now nobody thinks of spellcheck as interesting AI, but it did change how we work. Autocomplete, autocorrect: these things changed the way we communicate with each other.

In your research, you’ve found that humans can no longer tell the difference between something written by a human and something written by a machine — what are the potential dangers of that and the potential opportunities?

I think the number one danger is around trust: these technologies can undermine trust in each other, in anything online, in communication. We saw in one of our previous papers that as soon as we told participants that some of the things we were showing them might be AI, they got really suspicious and distrusted everything.

A positive outcome would be that it just gets incorporated, like spellcheck did, and we don’t really think about it anymore. We improve our work lives, we’re able to do more, or do the same amount in way less time and get to spend more time with our families. I think what’s going to matter is actually less the tech and more what we say is okay: the social contract and the social norms that govern when these things can and should be used. There’s a lot of work still to be done there.

With regard to the trust piece and disclosing the use of AI — are there “best practices” around this, or is that still in development?

I think best practices are emerging. I don’t think AI is going to get disclosed every time it’s used; that would be like indicating that spellcheck was used. But it will come to be understood when these things are not used. I think there are a lot of conversations happening in society about when these things can and can’t be used, and how to attribute them when they are.

It’s kind of changing every day.

As are people’s beliefs. In December it was a lot of wonder and excitement, and then there was some fear: wow, they’re so good, they could take our jobs. And now it’s like, these things are kind of crazy. Our beliefs about them change, and as our beliefs change, even though the technology hasn’t, that changes our interactions and what we think is reasonable and ethical and effective. There’s a lot on the human side that we’re still working out.

What fears do you think are unfounded, and what are the areas that we should be more concerned about?

It’s not going to take over most people’s jobs, certainly not in the near term. People will find out what it’s really good for, which is producing first drafts or editing drafts in more constrained contexts. It will help you write, but you can’t rely on it; you have to work with it. They’re finding that connecting it with a search engine is less trivial than they thought, that accuracy matters, and that the combinations don’t seem to work super well yet. That’s not surprising. Classification is going to be something it’s used for a lot, but that hasn’t been talked about very much yet.


There’s been so much talk about different professions being disrupted — how would an industry like consulting be disrupted by this?

Consulting in some ways is about asking really good questions, and I think GPT is really great at helping think out what the questions are. If you’re a consultant, you say, okay, let’s try to understand whether this practice, whatever it may be, is good. There’s a way for GPT to say, okay, here’s what the outcomes are in this context, and here’s what this company is doing. For Company A and Company B, which do we think will lead to the best outcome? Whether it’s right or not, it will certainly help you think through why it’s answering the way it is, based on its huge amount of knowledge about the way things work. I’d be excited, if I were in consulting, to start using this tool as a way to gain insights.

You also have a deep expertise in social media. So much of the early online and social media world has been driven by paid advertising — what could that potentially mean for a system like ChatGPT or other AI programs?

The really short answer is, I don’t know. We made decisions about social media: we wanted it for free, and to do that we had to use advertising. That led to all kinds of issues, especially around how social media is optimized for engagement, which is proving to be problematic, with unintended consequences. The question here will be, who is going to pay for this, and how? If you don’t make some of these tools freely available, then I think there’s a risk that inequality worsens. So a model that allows everybody to get access to most of these tools, hopefully not advertising just by default, would be preferred, I think.

Social media companies have had some legal protections with regard to what gets posted on their platforms. Will that also be adopted for AI usage, or will companies be held liable if something happens because of what’s said by an AI?

With social media the rules are actually quite clear, even though there are two challenges to the laws right now. Platforms are not liable for most content that’s posted, they have some responsibilities, and there’s protected free speech. Here it’s not clear. The company gets broad free speech protection, but it doesn’t have the same kind of Section 230 protections that platforms do. If it produces something that leads to harm (initially, if you asked how to commit suicide, it would tell you), then who is responsible? Is it the builder of the AI, the person who used it to do that, or the tool itself? I think these are all important questions.

What are you most excited to see developing in the future?

I’m really interested in trust and wellbeing, and how these tools can be used to enhance rather than undermine them. There’s huge space in counseling, coaching, and mentorship, where there seems to be a pretty serious lack: we don’t have enough counselors or coaches or mentors for the people who need them. When it comes to trust, there are so many open questions. When can you trust a machine to help you out? Will the concerns around synthetic media lead us to say, I don’t trust anyone until we meet in person, and thereby really dial up how much people want to meet face to face? That would be pretty cool.

It would be an interesting consequence if it pushes us in the other direction, into the real world.

Right, if the advent of AI means that everything online will be viewed as fiction. Check back in a decade.

---

What are your thoughts on the future of AI? Tell us how your business is utilizing AI: hello@vivaldigroup.com