The way humans interact and collaborate with AI is taking a dramatic leap forward with agentic AI. Think: AI-powered agents that can plan your next trip overseas and make all the travel arrangements; humanlike bots that act as virtual caregivers for the elderly; or AI-powered supply-chain specialists that can...
With technology evolving at a breakneck speed, many companies are finding it difficult to keep up with rapid developments in AI. Integrating AI into existing business operations is a complex challenge requiring significant investment in infrastructure and specialised talent.
Two years after generative AI staked its claim as the free space on everyone’s buzzword-bingo cards, you’d be forgiven for imagining that the future of technology is more AI. That’s only part of the story, though, and we propose that the future of technology isn’t so much about more AI as it is about ubiquitous...
The Internet is a vast ocean of human knowledge, but AI researchers have nearly sucked it dry. The past decade of explosive improvement in AI has been driven in large part by making neural networks bigger and training them on ever more data. This scaling has proved surprisingly effective at making large language...
The boardroom war at OpenAI, the company behind ChatGPT, has put a spotlight on the role of corporate governance in AI safety. Few doubt AI is going to be disruptive for society, and governments are beginning to devise regulatory strategies to control its social cost.
AI has rapidly evolved from simple algorithms to complex neural networks, significantly impacting various sectors. AI technologies, particularly...
To capture the full potential value of AI, organisations need to build trust. Trust, in fact, is the foundation for adoption of AI-powered products and services. After all, if customers or employees lack trust in the outputs of AI systems, they won’t use them.
This conundrum has raised the need for enhanced...
With the advent of generative AI (gen AI), the concept of guardrails applies to systems designed to ensure that a company’s AI tools, especially large language models (LLMs), work in alignment with organisational standards, policies, and values.
When large language models exploded onto the scene in 2022, their ability to generate fluent text on demand seemed to herald a productivity revolution. But although these powerful AI systems can generate fluent text in human and computer languages, LLMs are far from infallible.
Proper handling of sensitive and confidential information requires careful guarding of valuable intellectual property. These challenges are further exacerbated in the age of generative AI. Generative AI tools open new possibilities for corporate content makers, but they also give rise to a whole new set of security risks.