AI evolution raising ‘important questions’ about ethics, Sony research scientist says

Sony Group Corporation Global Head of AI Ethics & Sony AI Lead Research Scientist Alice Xiang speaks with Yahoo Finance tech reporter Allie Garfinkle at the CES 2023 event in Las Vegas about artificial intelligence, ethical data collection, and how AI factors into Sony games and products.

Video transcript

SEANA SMITH: CES time. No shortage of conversation, or should I say chat, about artificial intelligence at CES this year. AI is having a moment given the emergence of ChatGPT, and Yahoo Finance's Allie Garfinkle comes to us live from CES with more on that. Hey, Allie.

ALLIE GARFINKLE: Hey, Dave. I am so excited to be here with Alice. Alice, let's start off by just talking about your top issues in AI this year.

ALICE XIANG: Yeah. Thank you so much for having me, Allie. So when I think of AI ethics issues, I tend to think of them in three general buckets-- so data, evaluation, and governance.

So first, data is incredibly important. It's basically the building blocks for any AI model. And especially as we're seeing now with the growth in large foundation models-- general-purpose AI models like ChatGPT have definitely captured a lot of imaginations recently. But all of these models are built on tremendous amounts of data.

And so we have to think carefully about the representativeness of that data. Is it diverse? Is it globally representative? Have we thought carefully about issues of privacy, copyright, all of these different things that go into making ethical data?
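
(As a purely illustrative sketch of the kind of "ethical data" check described above -- not Sony's actual tooling -- the hypothetical Python snippet below audits how well different groups are represented in a set of training-data records; the records, the "region" grouping, and the 10% threshold are all assumptions made for the example.)

    from collections import Counter

    def representation_report(records, group_key="region", min_share=0.10):
        """Return each group's share of the dataset and flag underrepresented groups."""
        counts = Counter(r[group_key] for r in records)
        total = sum(counts.values())
        report = {}
        for group, n in counts.items():
            share = n / total
            report[group] = {
                "count": n,
                "share": round(share, 3),
                "underrepresented": share < min_share,  # flags thin coverage
            }
        return report

    if __name__ == "__main__":
        # Toy metadata records standing in for image or text training examples.
        data = ([{"region": "NA"}] * 60 + [{"region": "EU"}] * 30
                + [{"region": "APAC"}] * 10 + [{"region": "LATAM"}] * 2)
        for group, stats in representation_report(data).items():
            print(group, stats)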

And then once we have an AI model, how do we evaluate it? How do we make sure that it works well for all consumers and that it reflects our values as a company?
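
(Likewise, a minimal sketch of what "works well for all consumers" can mean in practice is disaggregated evaluation: compute the same metric separately for each group and compare. The groups, labels, and predictions below are made up for illustration only.)

    from collections import defaultdict

    def accuracy_by_group(examples):
        """examples: iterable of (group, true_label, predicted_label) tuples."""
        correct = defaultdict(int)
        total = defaultdict(int)
        for group, truth, pred in examples:
            total[group] += 1
            correct[group] += int(truth == pred)
        return {group: correct[group] / total[group] for group in total}

    if __name__ == "__main__":
        # Made-up evaluation results for two hypothetical consumer groups.
        eval_set = [
            ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
            ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 1),
        ]
        scores = accuracy_by_group(eval_set)
        print(scores)  # roughly {'group_a': 0.67, 'group_b': 0.33}
        print("largest gap between groups:", max(scores.values()) - min(scores.values()))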

And then on the governance side of things, we're really seeing an inflection point with AI ethics: from being something that companies were doing on their own, trying to establish their own policies and standards, to now, with the forthcoming EU AI Act, policymakers really diving into this space. And this raises a lot of really interesting questions around how we ensure that we have the governance processes in place to make sure that the AI that is built is compliant with relevant laws and is really creating a better society at the end of the day.

ALLIE GARFINKLE: Before we get to the big questions surrounding AI, I want to drill in a little bit on generative AI-- ChatGPT, for example, which, as you said, has made a lot of headlines. With generative AI overall, what are some of the ethics concerns that you're looking at specifically there?

ALICE XIANG: Yeah, sure. So I think we're at a very interesting moment right now with AI, where it's gone from AI built for very technocratic purposes like risk assessment or prediction-- things that people don't necessarily see directly or think of as intelligent-- to AI that you can interact with and actually ask a question or ask to make you an image. And that's really changing the game in terms of how AI can interact with humans.

And I think this gives a lot of really promising potential, from an AI ethics perspective, for AI-human collaborations, but it also brings forward a lot of interesting questions around, you know, how do we ensure that the people who created the training data, for example, are actually appropriately credited or acknowledged? Because we have artists that are creating the building blocks that then become these AI models, and so how do we ensure that this is a process that works well for all of the people in the ecosystem?

And then with generative AI like ChatGPT, how do we ensure that this doesn't further contribute to misinformation and a lot of these concerns that we have given that, at the end of the day, AI models, they're kind of like children. They don't really-- they're very smart and they have some understanding of the world, but it's limited, and they often, you know, can speak authoritatively about things they don't necessarily understand.

And so it's very important for AI developers, AI ethicists as the parents of these AI models to ensure that they aren't spreading misinformation, they aren't producing biased or offensive content. And so I think this is a really important moment for us in this field.

ALLIE GARFINKLE: Yeah. No, and the other thing about it too is there are a lot of different products all the time. Coming from Sony in particular, for instance, there were the new car announcements that came out recently. But I'd love to hear when you're looking at products, whether it's a car, a PS3-- it doesn't matter. How are you thinking about setting up those processes for ethical AI review? Because I imagine it's different for something that's new as opposed to something that's already existed.

ALICE XIANG: Yeah, so this is where a lot of the most important questions are kind of in these, like, fine-grained governance aspects. And so Sony has been quite a leader in this space. We were one of the first major technology companies to come out with AI ethics guidelines back in 2018. We established our AI ethics committee, which is comprised of senior executive leaders that deliberate in advance of AI usage. And we also have our AI ethics office, which is comprised of AI ethics specialists that work with our business units and our research units on a more bespoke basis to ensure that we're evaluating all these issues like fairness, transparency, safety, robustness, privacy.

And really, at this point, because AI ethics is still quite a new field, a lot of evaluation is pretty bespoke at the moment, and we're still in the process of developing best standards across the industry. But I think it's really exciting how much, you know, Sony has taken a leading role in this, and I think we'll hopefully really be able to set the benchmark for how people think about doing this in the future.

ALLIE GARFINKLE: And one last question, Alice. You know, you have a really interesting background. You have a law degree. You have this background in statistics. Built By Girls sent us a question asking what advice you would give to, say, a young woman who's interested in entering the AI field.

ALICE XIANG: Yeah, thank you for that question. I mean, first I would say please do enter this field. It's extremely, you know, not diverse at the moment, and that's a huge issue when we think about addressing issues like systemic bias in AI. It does start with who is actually in the room when you're talking about developing an AI product, and that set of people needs to be diverse. We need to have a wide variety of perspectives.

In terms of, you know, concrete advice, I think it's always good to have a strong technical foundation of what AI is, how it works. But increasingly, especially in the AI ethics field, what we really need is more folks with a strong interdisciplinary background who can understand multiple fields and think about how new technologies might interact with humans and society in ways that might produce intended or unintended consequences.

And so I think it's a really exciting space for a young woman and everyone else, and I really hope that, you know, more people enter this field and that we see more of a diverse set of, you know, researchers and developers in the future.

ALLIE GARFINKLE: Yeah, I'm excited for that. I hope so too. Back to you guys in the studio. Thank you so much.

ALICE XIANG: Thank you.

SEANA SMITH: Thanks, Allie Garfinkle, and, of course, our thanks to Alice Xiang, Sony Group Corporation's global head of AI ethics and Sony AI lead research scientist.