How to innovate with AI in a privacy-protective way

AI innovation should be grounded in the principles of accountability, responsibility and transparency.

This year, the Singapore government unveiled its national plans to invest more than S$1 billion in artificial intelligence (AI) over the next five years, enabling businesses to capitalise on AI and other technological advancements. While this move will further propel enterprises to adopt or expand AI capabilities, it is crucial to ensure that any data fed into the algorithms is used responsibly and transparently.

Data privacy is top of mind

Data has become the new currency in the age of AI, and the efficacy of AI tools relies heavily upon the quality of data sets on which they are trained. At the same time, the use of data in AI initiatives has become a cause for concern amongst consumers in Singapore. The numbers speak for themselves – Adobe’s State of Customer Digital Experience report found that 61% of consumers fear unauthorised use of personal data as brands harness Generative AI, while 64% express concern over excessive data collection.

Data privacy issues can be tricky to navigate, due in part to the complex data regulations involved, which also differ geographically. With businesses under enormous pressure to innovate faster than ever to stay ahead, it can be challenging to strike the right balance between technological innovation and data privacy. With the right combination of technical and legal expertise, a global view on policy and bold ambition, businesses can think big and take on new challenges in the evolving AI and privacy landscape.

Striking the balance between innovation and data privacy

When making choices about how to develop, use and regulate the latest technologies, privacy and security are critical considerations. In fact, lessons learned from privacy and security are transferable to AI. Adopting a privacy-by-design approach throughout the AI lifecycle (from development to use), and investing in emerging privacy-enhancing technologies such as federated learning and differential privacy, starts with shifting the way we think about data privacy within existing workflows.

To innovate in a privacy-conscious manner, organisations should leverage current processes for evaluating privacy (such as privacy impact assessments and data protection impact assessments) and reinforce them to account for the new and increased risks posed by AI. Practising data minimisation and purpose limitation can help maintain data quality while respecting consumers' personal data rights.
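
As a minimal, hypothetical sketch of what data minimisation and purpose limitation can look like in practice, the Python example below keeps only the fields a stated purpose requires and pseudonymises the direct identifier before the data reaches a model pipeline. The column names, data values and hashing choice are illustrative assumptions, not a prescribed approach.

```python
import hashlib
import pandas as pd

# Hypothetical raw customer export; only some fields are needed to train a churn model.
raw = pd.DataFrame({
    "customer_id": ["c001", "c002"],
    "email": ["a@example.com", "b@example.com"],    # not needed for the stated purpose
    "date_of_birth": ["1990-01-01", "1985-06-12"],  # not needed for the stated purpose
    "monthly_spend": [42.0, 17.5],
    "support_tickets": [1, 4],
    "churned": [0, 1],
})

# Purpose limitation: keep only the fields the stated purpose (churn prediction) requires.
NEEDED = ["customer_id", "monthly_spend", "support_tickets", "churned"]
minimised = raw[NEEDED].copy()

# Data minimisation: pseudonymise the direct identifier so the training set
# no longer carries it in the clear.
minimised["customer_id"] = minimised["customer_id"].map(
    lambda cid: hashlib.sha256(cid.encode()).hexdigest()[:12]
)

print(minimised)
```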

For businesses working with AI vendors, it is paramount to conduct due diligence on the privacy and security practices of those vendors. It is key to understand clearly whether the vendor is acting as a service provider/processor or as a business/controller, and to implement use limitations accordingly.

Businesses should also have AI talent and train their staff on the potential harms associated with the use of AI. It is important to build cross-disciplinary and diverse teams to develop and review AI use cases. For example, at Adobe, AI-powered features with the highest potential ethical impact are reviewed by a diverse, cross-functional AI Ethics Review Board. Diversity of personal and professional backgrounds and experiences is essential to identifying potential issues from a variety of perspectives that a less diverse team might not see.

Privacy is never one-and-done: companies must test AI systems regularly, perform ongoing assessments of those systems, keep abreast of new privacy-enhancing technologies, regulations and best practices, and continuously evaluate and update their privacy, security and related AI practices accordingly.

Using technology to mitigate privacy risks

Several technologies can further enhance privacy for businesses. These include:

  • Federated learning, which is an approach to machine learning that allows data to remain on local devices while still benefiting from collective intelligence. The models learn from decentralised data across devices and only the learned patterns are shared, offering a boost to privacy.

  • Differential privacy, which introduces “statistical noise” or “randomness” into data so that overall trends can be analysed without compromising individual data points (see the sketch after this list).

  • Homomorphic encryption, which is a form of encryption that enables computations to be performed on encrypted data without decrypting it first. Advances in this area could enable AI models to learn from encrypted data, making it a powerful tool for privacy-preserving data analysis.
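
To make the differential privacy idea above concrete, here is a minimal Python sketch of the classic Laplace mechanism. The opt-in data, the epsilon value and the function name are illustrative assumptions rather than any particular product's implementation.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of an aggregate statistic.

    Adds Laplace noise scaled to sensitivity/epsilon, so the released value
    reveals the overall trend without exposing any single individual's record.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Hypothetical example: release a private count of users who opted in to a feature.
opted_in = np.array([1, 0, 1, 1, 0, 1, 0, 1])  # one record per user (illustrative data)
true_count = opted_in.sum()

# A counting query changes by at most 1 when one person's record is added or removed,
# so its sensitivity is 1. Smaller epsilon means more noise and stronger privacy.
private_count = laplace_mechanism(float(true_count), sensitivity=1.0, epsilon=0.5)
print(f"True count: {true_count}, differentially private count: {private_count:.1f}")
```

In practice, the choice of epsilon is as much a policy decision as a technical one: smaller values give stronger privacy guarantees at the cost of noisier results.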

Mastering the ART of AI and privacy

AI innovation should be approached with thoughtfulness and grounded in the principles of accountability, responsibility and transparency (ART). It is no myth that the ART of innovation can be achieved in a privacy-conscious manner. Doing so will allow organisations not only to harness this transformational technology to their advantage, but also to create and retain trust with customers and employees, ultimately giving those who do it well a competitive advantage in Singapore's digital economy.

Simon Dale is the vice president for Asia at Adobe.