38 minutes
Adam Barty
In this quarters AI news we discuss the significant developments in design and video, Figma's AI tool being pulled for copyright concerns, democratising AI technology, and AI ethics.
Recent months have seen significant developments in AI technology and its applications, particularly in design and video generation. Figma's AI tool showed promise in text-to-design conversion before being pulled due to copyright concerns, while Runway's text-to-video model is competing with OpenAI's Sora in quality. Meta's release of LLaMA 3.1, an open-source GPT-4 class model, marks a significant step in democratising AI technology. Meanwhile, practical applications continue to emerge, with ANZ bank sharing mixed results from their AI code development assistant program and an Australian biotech company incorporating ChatGPT into brain-computer interfaces for paralysis patients. The quarter also saw important discussions around AI ethics and compensation, with Sam Altman's universal basic income experiment showing inconclusive results, and Meta announcing controversial AI selfie tools that raise privacy and identity concerns.
FIGMA AI TOOL WAS AMAZING - THEN IT GOT PULLED
Figma released an AI tool allowing you to go from text to design. We managed to get our hands on it for a few days and it was genuinely impressive - I can see it becoming a key part of the design process for so many different applications. But it was pulled from public preview due to potential copyright issues when it was shown to be able to mimic Apple apps. So what? This is actually AI mimicking reality. When a designer looks to design an app they often use "inspiration" from various best practice examples. The issue here is that AI can do that on an industrial scale. I have no doubt that Figma will do what is needed to get train the system to avoid copyright claim risks and then it will get back to blowing everyone in the industry's minds. > Read More
RUNWAYS NEW TEXT TO VIDEO OFFERING IS PRETTY COOL
Runway released its new text to video model which produces OpenAI Sora quality outputs and made it publicly available... albeit on a paid subscription plan. Click the link below to see the evidence. So what? The showreel is mind boggling. The disruption this tech is going to drive across various industries will be huge. Don't get me wrong, there are a lot of hurdles to solve before this tech fully realises its potential, like consistent characters and more granular output control, but you assume it is only a matter of time until someone resolves these and video generation will be changed forever. > Watch Showreel
META RE-LABLES ITS AI LABELLING SYSTEM
Meta have been forced to change the way they are tagging AI generated images on their platforms to add more nuance. This is because people were getting upset that their "real" photos, that had been touched up by AI tools, were being tagged as AI generated. It appears there are now degrees of AI image generation users need to be aware of. So what? This has to be a losing battle. With the advancement of the tech the line between what is real and what isn't is only going to get blurrier and harder to identify. It feels like a fight we need to have but the technologist in me thinks we are wasting our time trying. > Read More
CLAUDE 3.5 IS SERIOUSLY IMPRESSIVE
Anthropic has released their latest version of Claude, one of the key competitors to ChatGPT. It's an impressive step up from their last version and it rivals ChatGPT 4o in various applications. So what? It has a nifty feature called artefacts where you can share outputs, including working examples of code that Claude has generated for you. As an example, if you click the link below, you can see a functional Tetris game one of our team got Claude to generate using a single sentence prompt. > Play AI Generated Tetris Game
FIRST LINE OF DEFENCE AGAINST AI BOT SCRAPING
Cloudflare released some data on how much AI bot scraping was happening on where the owners had not bothered to try to block the bots. They shared this to promote the fact that they have a tool that is easy to activate that can block AI bots from harvesting your site's data. So what? The reality is that if someone really wants to scrape your data there are ways to get around Cloudflare's AI scraping prevention tools, but they do make it harder and will prevent generalised harvesting of data - so they are worth implementing if you are keen to minimise the chances of your content being used to train AI.
If you need help setting this up just give us a bell. > Read More
AMAZON PROMISES TO GENERATE ENTERPRISE APPS FROM WRITTEN PROMPTS
Amazon Web Services (AWS) has announced "App Studio", an AI-driven tool that claims to create enterprise applications from simple written prompts. AWS promotes this as a significant breakthrough for businesses with limited technical skills. So what? AWS's claims are pretty bold, and I think it's read this with a healthy dose of scepticism until we see concrete outputs. My guess is that this will just be a first step on a longer path to true prompt based enterprise software development. Likely it will just tease us with an early taste of what the future could potentially hold with AI driven bespoke business application development still a few years away. Read more
GOOGLE GEMINI CAUGHT SCANNING DRIVE FILES WITHOUT PERMISSION
Google's Gemini AI platform has come under scrutiny for apparently scanning a users' Google Drive files without explicit permission. What's more it seems like disabling the access doesn't make a difference. So what? It's another example of Google fumbling in it's rush to get products to market. I'm starting to wonder if Google might need to take a breath and accept they will take longer to get to market, but being late might be less damaging than these seemingly consistent mis-steps. Read more
SCOOTER SAFETY IN MELBOURNE DRIVEN BY AI
Neuron Mobility has rolled out AI-equipped e-scooters in Melbourne, featuring cameras designed to enhance rider safety. These AI systems can detect risky behaviours, such as riding on footpaths and improper parking, and warn riders to do the right thing, limit speed on the scooters and ban repeat offenders from using the service. So what? What a clever application for AI's vision capabilities - policing scooter use without human intervention. It might be a little taster of what is to come in law enforcement - it's easy to imagine intelligent video based traffic cameras identifying and capturing drivers doing more than just speeding. Read more
AI POWERED COMPANION ROBOTS FOR SICK KIDS & THE ELDERLY
In another cool AI innovation happening in Melbourne, some very clever cookies are building AI powered companion robots to be used with kids in healthcare settings and in aged care facilities. The robot uses AI in it's vison to help it perceive the environment it is in and what is around it as well as to power it's voice interactions. So what? With plans to role out 50 robots by years end, this is yet another example of a potentially very positive use case for AI - and excitingly Melbourne will be at the forefront. Watch Video
VODAFONE UNVEILS NEW GEN AI DRIVEN CUSTOMER SUPPORT CHATBOT
Vodafone has launched SuperTobi, a next-generation AI-driven chatbot leveraging Microsoft Azure OpenAI. The new AI bot has improved customer first-time resolution rate from 15% to 60% and online net promoter scores from 14 to 64. So what? Investments in developing this tech will pay big dividends for businesses with large customer support teams - reducing costs and improving both customer outcomes and satisfaction levels... you just have to be brave enough to commit to the initial investment and accept ti will be a bumpy ride to get there. Read more
AUSTRALIAN MEDIA UNION DEMANDS ACTION ON AI "THEFT"
The Media, Entertainment, and Arts Alliance (MEAA) has presented to the Australian Senate, demanding legislative action against the use of copyrighted media for AI training without permission. The MEAA labels such practices as "theft" and calls for stronger protections for creators and their intellectual property. So what? Whilst I empathise with the union's position, I think the Australian Government has a snowflake's chance in hell of being able to legislate to protect artists from Silicon Valley AI startups in any meaningful sort of way given the horse has already bolted and any existing "data" they seek to protect has been hoovered up already. Read more
LLAMA 3.1 IS HERE AND IT SEEMS PRETTY IMPRESSIVE
Meta has released LLaMA 3.1, the first GPT-4 class AI model that is open source and available for public download. So what? It is big news in the LLM world and whilst there are lots of claims about whether it is or isn't better than OpenAI's flagship ChatGPT4o, there is no doubt it is very much in the same ballpark. That said, OpenAI hasn't released a new model for a little while, so you do wonder if this Meta release might be overshadowed by something new in the not too distant future. > Read More
ANZ AI DEVELOPMENT COPILOT PROGRAM PROGRESS IS MIXED
ANZ has shared some insights into how it's integration of AI code development assistants is progressing. The news is mixed, but points to two issues - that some developers aren't as enthusiastic to adopt the tech as others and that the capabilities of these tools is good, but not yet great. So what? I often talk about using caution when coming to any conclusions about AI's potential based on it's performance in the here and now. AI software development is not here... yet. > Read More
AUSSIE BIOTECH PUTS CHATGPT IN A GUYS HEAD... SORT OF
Synchron, an Australian biotech backed by Bill Gates and Jeff Bezos, has incorporated ChatGPT into its brain-computer interface (sounds like something out of Star Trek) to assist those with paralysis. In theory it will enhance communication for patients by predicting language and emotions, allowing them to have more "natural" and real-time interactions with others. So what? This idea holds a lot of promise and has the potential to have a positive impact on millions of people... but it also brings up some ethically difficult concepts, because it will be an AI creating the persons words based on what it "thinks" the person would say (they will get to approve or disapprove the message before sharing it), meaning it will in some ways be a synthetic response. > Read More
A LITTLE FREE MONEY TO OFFSET THE IMPACTS OF AI MAY NOT WORK
Sam Altman, through his OpenResearch business, has been running an experiment for 3 years, giving $1,000 per month to some of the poorest people in America. The experiment is to test, in part, whether a universal basic income will solve for the potential job losses AI is predicted to drive. Unfortunately, the results were not all that heartening. So what? This experiment did not give people the equivalent of an average wage, or even the equivalent of a base wage in the states - so you need to take the results with a grain of salt. That said, with a growing chorus of prominent people flagging that a UBI is going to be necessary, it would seem more research/thought/policy development is urgently needed in the space. > Read More
META ANNOUNCES AI SELFIE TOOL
Amongst bunch of other announcements, Meta has shared that their Meta AI tools will soon enable you to create an AI generated selfie, where the AI uses real images of yourself to create a virtual image of you, wherever you want, doing whatever you want. So what? Where do I start... there is obviously the huge potential for this widely available tool to be used on other people's identities without their permission. Moreover this brings the current lack of trust we can have in images we see on the internet to the personal realm, where we will have to start to second guess if a friends photo is really them, or AI generated. > Read More
A LESSON IN NOT RUBBING AI IN PEOPLE'S FACES
SheerLuxe, a UK "lifestyle" magazine, introduced "Reem," an AI "fashion and lifestyle editor," which sparked furious backlash from people who argued it deprived human journalists of jobs and promotes unrealistic beauty standards. So what? Maybe I am too cynical, but I had to laugh at influencers suddenly realising they could be replaced by AI and then waging a campaign against what was undoubtedly an ill-conceived attempt to cash in on AI. It does however provide a good reminder that businesses need to be sensitive to the "they took our jobs" sentiment around AI. > Read More
PETER DUTTON PLAYING MINECRAFT AND TALKING SMACK
So no... Peter Dutton doesn't actually play Minecraft, but an AI deep fake of him does. An account called "Australian Youth Coal Coalition" has put up some deep fakes of both Peter Dutton and Scot Morrison playing video games and saying things the real politicians would clearly never actually say. So what? The risk here is this is clearly a cheap giggle that you could see as harmless, but at the same time there are people in the comments asking if it is real. Welcome to the new reality, where you can't believe what you see or hear and there isn't a lot you can do about it. > Watch Video
YOU NEED TO START THINKING ABOUT THE NEW SEO... GEO
GEO is a new discipline and it stands for “generative engine optimisation” which is the process of optimising your website’s content to boost its visibility in AI-driven search engines such as ChatGPT, Perplexity, Gemini, Copilot and Google AI Overviews. So what? As usage of AI tools only grows, having your brand show up more prominently than your competitors in AI outputs will obviously be advantageous. It seems AI services will mirror our experience with the web - with content containing organic and paid brand/product mentions. > Read More
GOT YOUR AI POLICIES IN PLACE? YOU PROBABLY NEED TO...
Research published in the AFR based on response from more than 1100 Australian knowledge workers across government, healthcare, professional services, finance, real estate, and education sectors, showed that while 44% are using generative AI at work, 56% said their company didn’t have an AI usage policy in place. So what? Australia does seem to be lagging behind a little in the AI adoption stakes, but the fact more than 50% of those responding to this survey state their organisation doesn't have a policy in place is truly scary. Aussie execs need to get their arse into gear and get the ground rules in place ASAP. > Read More
PERPLEXITY EXPLORES NEW MONETISATION MODEL
Perplexity, the new AI web search engine that is eating into Google's market share, is exploring a business model where it will pay news outlets whenever it serves their content as part of its response. In order to generate revenue to accommodate this, Perplexity plans to start showing ads based on what people are searching for. So what? Sound familiar? I mean it is basically what Google does, without paying the publishers. It will be interesting to see whether this model actually works, but at least they are recognising that without people providing them content they would not exist. > Read More
OPEN AI DIPS ITS TOE INTO THE WEB SEARCH ARENA
While we are on web search engines, OpenAI has started to roll out access to its web search service which will be another challenger to Google and upstarts Perplexity. So what? Let's wait and see once we can get our hands on it. > Read More
META DITCHES CELEB AI CHATBOTS & ANNOUNCES BUILD YOUR OWN
So Meta has given up on the celebrity service and instead is letting people create their own alter AI chatbots. Bummer, no more talking to AI Paris Hilton I guess. So what? Character AI is a hugely popular service where people have become very attached to AI companions that they get to customise to their "preferences". Clearly Meta can see this as a fertile space for them to get into, the opportunity for people to share their thoughts with AI companions is obviously very attractive for a business that collects and monetises personal data... and who wouldn't want their AI companion to weave suggestions for products or services into their conversations with you? > Read More
AI PHOTO TO 3D MODEL USED BY EA GAMES
EA Games has used AI technology to generate player models for 11,000 college footballers by extrapolating them from photos. Without the use of this tech the 11,000 players would have had to full body scans at a huge cost which might have meant they might not have bothered. So what? This is one of the first significant uses of AI in gaming, there are other examples of AI being used to drive speech (content and voice) for NPCs in games and you get the feeling this is just the tip of the iceberg, with AI potentially being a game changer for the... er.. gaming industry. > Read More
WILL IT BECOME USER PAYS FOR AI TRAINING DATA?
Reddit has been blocking Microsoft from scraping it's content because they won't pay Reddit for the privilege - which also means Reddit won't show up in results on Bing. Google, on the other hand, paid Reddit $60 million so it could have unfettered access, as does OpenAI (who also did a deal with Reddit). So what? Does this mean that publishers should expect that they get paid by the big AI players for access to their content? While on face value it might seem like a bad precedent for the larger players to set... doing so might also insulate them against any new competition because it would make it inhibitively expensive for any new AI start-up to try to train a new model, given they would have to pay for content rather than "borrow" it like OpenAI and the rest did in the early days of generative AI. > Read More
ANOTHER ETHICALLY FOCUSSED OPENAI FOUNDER JUMPS SHIP
John Schulman, co-founder of OpenAI, is leaving to join rival AI firm Anthropic to focus on AI "alignment" - which is basically about making sure AI doesn't turn into Skynet. This comes a few months after fellow OpenAI co-founder Illya Sutskever also departed to work on a new start-up also focussed on AI safety. So what? One would think the only reason you would feel compelled to leave the largest (and best funded) AI organisation in the world that you were a founder of in order to go and work on AI safety, is because you are really worried about AI safety. In other words, we have two really, really smart people who arguably understand AI better than anyone else in the world, who are significantly scared about what it could do. I think that is a pretty unnerving thought. > Read More
ZUCKERBERG'S VISION FOR THE FUTURE OF DIGITAL ADVERTISING
Whilst announcing some stunningly good Q2 results for Meta, Zuckerberg dropped a bit of a bomb... "In the coming years, AI will be able to generate creative for advertisers as well - and will also be able to personalise it as people see it. Over the long term, advertisers will basically just be able to tell us a business objective and a budget, and we’re going to go do the rest for them." So what? It's not hard to extrapolate from where generative AI is now to where old mate Zuck says its going to be in a few years time - so it's hard to argue that he is wrong. Sadly, with so many people using Meta products on a daily basis, it would seem AI is just going to make Meta more powerful and Mark a lot more wealthy. > Read More
MUSK AMPLIFIES DEEP FAKE OF KAMALA HARRIS
In the first of what will be many stories about AI's influence on the US election, a "parody" video that uses AI to clone US Vice President Kamala Harris's voice to mock her was posted on X (nee Twitter) and then subsequently shared by Elon Musk - making sure it got seen by a hell of a lot of people. So what? This is teetering on the edge of what could be considered as a genuine attempt to influence voters through subterfuge. The lines in the video are clearly ridiculous and so most observers would immediately recognise it is fake... I say most, as I am sure you will likely have seen interviews with some US voters that highlight the diverse range of capacity for deep thought that they seem to have. Again, I know I sound like a broken record, but it is only a matter of time before an AI fake related to the US election pops up that will have fooled/influenced a lot of people. > Read More
OPEN AI TRYING TO SOLVE CHALLENGE AROUND IDENTIFYING AI "FAKES"
One of the most common questions I get asked is how will we know what is real and what is fake (aka AI generated). Well it seems like OpenAI has some plans to help us all tell the difference with some watermarking capabilities to be added to its audio, video and image generation tools. OpenAI is also joining the steering committee of the Coalition for Content Provenance and Authenticity, a group put together to solve the problem around AI fakes. So what? I might be too cynical, but this reeks of lip service. To make this work you need all AI tools that can generate images, audio and video to adhere to the standards being put forward. That just won't happen. If you need more evidence about how seriously, or not, this problem is being treated then I give you exhibit B - the meagre $2m in funds OpenAI and Microsoft combined are putting towards solving the problem (for context in FY24 Microsoft spent $56 billion on capex predominantly related to AI). > Read More
THE CSIRO, AI ETHICS & JOURNALISM
Australian science magazine Cosmos has found itself in hot water after using AI to generate articles. Cosmos is published by the CSIRO, after the magazine ran into financial trouble earlier this year and had to let go half of it's staff before being bailed out by the CSIRO and the government. To add even more controversy, the AI content generation trial for Cosmos was funded through a grant awarded by the Walkley Foundation which "supports excellence in journalism"... awkward... So what? I understand the outrage of the journalists who formerly worked at Cosmos, who are upset about their contributions potentially being leveraged to help the AI write content. Sadly, it seems the reality is that the magazine is not financially viable with a full suite of human contributors, so the question becomes whether you look for innovative lower cost ways to survive, or you just stop publishing. There is a whole other conversation here about the devaluation of content that I don't dare wade into...
> Read More
GOOGLE'S NEW PHONES HAVE AI CHIPS & LOTS OF AI "FEATURES"
Google announced its all new Pixel 9 line-up including a bunch of phones, a watch and earbuds. The phones come with AI chips (Tensor G4) and that means the devices are jam packed with AI "features". The most interesting is Google Gemini Live which will eventually be a fully featured AI assistant, but for now seems like a snazzier version of ChatGPT that can run on your phone and do a few handy little tricks including seeing what is on your screen and answering questions about it. It also has 10 different voices which will speak with emotion, just like the new ChatGPT Advanced Voice Mode. So what? On top of some semi useful AI tools, there is also a lot of AI for the sake of AI thrown in to pad out the "value" proposition of moving to a phone with an AI chip. In a year or two you will be mad not to have a phone with an AI chip, for now it's probably more of a nice to have that lets you get an early teaser before the really useful capabilities start to arrive. > Read More
WANT TO HEAR SOMETHING CREEPY?
OpenAI released their system card which talk about all the safety issues they are tracking whilst working on their various AI products. One very creepy observation was when the new voice mode was being tested and was speaking in a male voice, before it stopped midway through the answer, shouted "No!", and then resumed talking in a female voice that mimicked the human user who had asked it a question. Freaky. So what? This is a weird reminder of how amazing and scary it is to be working with a technology that we don't completely understand and that we don't control in the same ways as we are used to when it comes to controlling technology. Scroll down in the article to hear the soundbite... > Read More
PROMPT ENGINEERING IS A REAL SKILL
Some researchers did a bunch of testing to work out if minor things like putting a space at the start of a prompt changed the outputs of an LLM, and what do you know, it did. The researchers came to the conclusion that slight changes to prompts can have a "butterfly effect" on the way an AI interprets and responds to a question - illustrating just how sensitive AI tools are to every word used in their prompting. So what? This research correlates with our experience over the past 18 months, small changes to prompts can completely change responses and what's more, one prompt can work perfectly on it's own, but then if you add another seemingly unrelated prompt it can change the way the AI interprets of the original prompt. Prompting is part science and part art. > Read More
DEEP FAKES IN ADVERTISING
So I stumbled across a TikTok video of Joe Rogan's podcast where he was talking about how kids at Harvard were using a new AI notetaking tool to get guaranteed 4.0 GPAs, and then he mentioned the name of the tool. The video looked exactly like a bunch of other similar clips of Rogan's podcast that are common on TikTok... except this was an AI generated deepfake being used to advertise the notetaking tool. There isn't a CTA or a button to click, it looks like any other organic TikTok post. Clever. So what? This video had almost half a million views, over 6,000 likes and a ton of other engagement including dozens of comments from people who clearly had no idea it was not real. I wonder if Rogan has legal recourse, I wonder if TikTok is worried about this sort of content and I worry for our kids who will never know the comfort we took for granted when we could trust what we saw with our own eyes. > Watch Fake Video
GOVERNMENT DEPARTMENTS HAVE 90 DAYS TO GET AI ACCOUNTABLE
The Australian government has issued new mandatory guidelines for the use of AI in the federal public service, making agencies accountable for AI operations. Accountability can be assigned to an individual or shared among officials. Agencies will also be required to make public statements about their AI adoption and use, updated annually. So what? Whilst the government is still dithering away on how it will legislate AI regulation for the private sector, it is actually commendable that they are implementing this sort of approach in their own backyard. This sort of regulatory structure is common sense, not restrictive, and is something that corporates should be forced to do as well. > Read More
GOOGLE HAS ANOTHER GO AT AI IMAGE GENERATION
After having to pull it's Gemini image generator offline earlier this year following a
PR disaster tied to images of ethnically diverse WW2 German soldiers, Google looks like it is almost ready to try again with a new text to image generator which is in preview in the states. Google says "Imagen 3" is its highest-quality image generator so far... which probably isn't hard to do when your last attempt was taken offline after a few weeks. So what? This is just a reminder that training AI is hard, and time consuming. The reality is that the Gemini image generator that was taken down actually worked pretty well technically, in terms of generating images, it was just the background prompting of the safety guardrails that didn't cut the mustard. Like everything AI, technological capability is one thing, but a lot of the effectiveness of AI applications is tied to the training, not the tech. > Read More
PHYSCOLOGICAL SAFETY TRAINER BLAMES AI FOR THEM DOING SOMETHING VERY PYSCHOLOGICALLY UNSAFE
A training company says it used an AI chatbot to generate a fictional sexual harassment scenario and was unaware it contained the name of a former employee and alleged victim. "I had no idea [the chatbot] would use real people's names," Ms Ingham said. "I mean, should I have known?". So what? Who knew if you used AI without knowing what you were doing it could end badly? This is just another example of how the mass democratisation of the most sophisticated technology in human history can be a double edged sword. Business leaders need to get proper AI policies in place, they need to conduct proper AI training and they need to get expert advice and not wing it. > Read More
SOME DUDE I HAVE NEVER HEARD OF TALKS ABOUT AI WASHING. AMEN.
Parker Conrad, who is apparently founder of HR startup Rippling valued at $13.5B, has highlighted how much AI washing is going on right now. “There’s just a lot of really insubstantial stuff out there in the AI world,” Conrad said, “not to say that AI is not going to be transformative.” So what? Maybe I sound like a broken record, but it blows me away the way companies are throwing money at AI indiscriminately, especially when that "investment" is being made in big platform AI instead of clever, smaller, more tactical AI that can make a practical difference instead of just doing parlour tricks like the ones Conrad highlights. > Read More
SLACK AI ISN'T TOTALLY SECURE
Slack AI, intended to assist with tasks in the Salesforce owned team messaging service, is vulnerable to prompt injection attacks that can extract sensitive information from private channels a user is not a member of. So what? Slack AI is over priced for what it does and now on top of that it turns out to be a rushed implementation that has some security issues. As per previous , businesses really need to be more diligent around how they implement AI in their platforms and clients need to be more discerning with their assessment of these AI add ons. > Read More
CALIFORNIA SET TO PASS AI SAFETY BILL - AI PEEPS UNHAPPY
The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB-1047) has been passed by the Californian State Assembly and now just needs the governor's signature to be passed into law. The bill will require AI companies that operate in California, or offer AI products citizens in California, to implement safeguards, such as enabling quick shutdowns, protecting models from unsafe changes, and testing for risks of significant harm. So what? California is going it alone again (see CCPA) so of course OpenAI, Anthropic and others are all unhappy. Two ex OpenAI employees thinnk its good that the government is moving as, in their opinion, "Developing frontier AI models without adequate safety precautions poses foreseeable risks of catastrophic harm to the public" - something which they think OpenAI is actively doing. > Read More
AUSTRALIAN GOVERNMENT LEGISLATES TO COMBAT AI DEEPFAKES
A few months back we had the disturbing case of a male student at Bacchus Marsh college creating deepfake nudes of 50 female students using an AI tool. The federal government has now legislated against the sharing of non-consensual deepfake sexually explicit material - with penalties of up to 6 years in prison. So what? This is a positive step and will be the first in what will have to be a raft of new laws to tackle the potential negative use cases for AI, let's hope the Australian government can be more proactive with legislating in this space so the laws are in place to deter bad actors before any harms occur. > Read More
AI TO MARK SCHOOLWORK IN THE UK
The UK government is putting funding into exploring the use of AI to mark students work which would remove a mundane and repetitive task from teachers' plates to free them up to invest more time in more impactful activities. So what? This is a perfect use case for AI - augmenting human work by taking away the boring stuff no one likes doing. Hats off to the UK government for being willing to invest in this project, I imagine the ROI on this will be pretty significant if they can get it right - albeit the outcomes will be measured in improved experiences for students and academic outcomes, rather than raw dollars and cents. > Read More
LEAKED AUDIO OF AMAZON CEO REVEALS DEVELOPERS ROLES WILL CHANGE SIGNIFICANTLY OVER COMING YEARS
"If you go forward 24 months from now, or some amount of time — I can't exactly predict where it is — it's possible that most developers are not coding," said Matt Garman (Amazon Web Services CEO) in a leaked audio recording from an internal discussion. He went on to state that developers will no longer need to write code, but instead just focus on what needs to be built and how to best architect that. So what? Every month we are seeing improvements in coding co-pilot tools. I think that in the next 12 months most developers will start to use AI coding assistants. Once that happens, you will see a gradual shift from where most code is written by the human with AI assistants reviewing it and doing a little bit of the coding... to where most of the lines of code are written by the AI assistant with the human reviewing and doing a little bit. > Read More
ANTHROPIC SHARES BASE PROMPTS GIVING A WINDOW INTO HOW LLMS ARE TOLD TO BEHAVE
Anthropic, who built Claude (which is a very capable AI assistant that competes with ChatGPT), have shared the base prompts they give Claude to get it to behave as an assistant. There are a bunch of intersting insights from this information showing how it is prompted to engage "thoughtfully, showing curiosity and intelligence", and also to provide clear, objective information, while avoiding assertive openings like "certainly" or "absolutely."
So what? For most people though this is probably boring and uninteresting... but for AI nerds like me this is really interesting stuff and so I couldn't help but mention it. > Read More
CBA USES AI TO DETECT SCAMS & ASKS TELSTRA IF THE CUSTOMER IS ON THE PHONE
The Commonwealth Bank and Telstra have expanded their "Scam Indicator" tool to cover landlines and mobile. The CBA uses AI to monitor customers activity and detect suspicious behaviours. If it detects something it thinks is suss, it will then contact Telstra (if the customer uses their service) and find out if the customer is on the phone (as well as perhaps some other data points) - if they are, then the CBA will take steps to block the activity and alert the customer. So what? I am a customer of both organisations and had no idea Telstra could tell CBA if I was on the phone or not. Now I'm sure I signed something somewhere that gives permission for this, and I don't have an issue with them putting these sorts of processes in place to prevent me giving all my money to that very charismatic Nigerian prince who keeps calling me... but I do wonder how many people realise this sort of thing is happening in the background. > Read More
THE BIG NEWS STORY TODAY - AU GOVERNMENT AI REGULATION
So the government has announced that after 7 months, its advisory board has come back with some suggested guardrails for AI which will now go through a further round of consultations - meaning no legislation until some time in 2025. In the meantime they have released a voluntary set of AI safety guidelines that businesses can choose to adopt if they want to. This is all against a backdrop of a recent government funded survey of business leaders that found a third of businesses using AI aren't letting customers know they are doing so and have no intention of changing that stance. So what? There is movement at the station... but it is very slow. While the EU put their AI laws into place in August we still don't even have a draft of ours, and we won't until some stage next year. To me this mirrors the approach to digital privacy, where Australia has fumbled for more than four years now and fallen behind not only other countries, but more importantly their own citizen's expectations. I haven't had a chance to deep dive into the voluntary code that has been released, but after a quick skim through it all looks very reasonable and makes sense - the problem is without the stick (and no real carrot) the incentive for organisations to make the investments needed to adopt them is very low, and as such they won't have much of a real impact. The AI space is moving quickly, and we need our politicians to put on the afterburners and catch up, it's a complex and difficult challenge to regulate AI, but I worry the horse is already bolting and it might be too late for them to catch up to it. > Read More > LINEKDIN: My Rant on This Issue
LOTS OF PEOPLE ARE USING AI TOOLS
Meta announced they have over 185 million active weekly users, and that their open source Llama LLM has been downloaded 350 million times. Meanwhile ChatPGT has hit 200 million active weekly users and also revealed that 92% of Fortune 500 companies are using their AI tools. So what? While you have a bunch of naysayers out there (more on that shortly) questioning whether AI is all hype, it appears a whole heap of people in business are using it regularly. While businesses are still grappling with how to quickly roll out structures, processes, policies and approved tools it seems like people are finding ways to use AI themselves - whether their bosses are aware or not.
> Read More
AUSTRALIAN GOVERNMENT RESEARCH SHOWS AI IS NO GOOD
In news that appears contrary to everything else we read, see or experience, the Australian government through ASIC engaged Amazon to evaluate whether AI is actually any good at summarising information or not. The results, AI is apparently not so good. The "experts" running the tests even went to the trouble of adjusting "behind the scenes" settings such as temperature and top-k (my sarcasm may not come across very well here I know). So what? This is another example of how slow and out of touch our government is. They probably paid Amazon hundreds of thousands of dollars to do a pretty basic test and came up with results that are contrary to everyone else's research. The news which has been released this week is about research done at the start of the year using a now extremely outdated AI model and methods not suited to the outcomes they were looking for. Expertise in the AI space is thin on the ground despite the claims various organisations might make. > Read More
DAWN OF A NEW WAY OF DEVELOPING VIDEO GAMES
Google researchers have created a neural network that can generate real-time gameplay for the classic game Doom - without using a traditional game engine. In another recent study some researchers were able to do a similar thing with Mario Brothers. So what? The gaming industry is worth about $400 billion dollars globally... now imagine you can start to create games without having to write millions of lines of code and spend thousands of hours developing complex 3D models. Potentially, in a few years time you might be able to just come up with an idea for a game and then get an AI to make it for you in a few minutes - it's going to be wild. > Read More
STRAWBERRY IS HERE
The much vaunted new OpenAI model codenamed Strawberry (a reference to ChatGPTs letter counting issues), with advanced reasoning capabilities, has been released today. It is officially titled "o1-preview". It's unlike all the other GPTs we have become accustomed to because it is much better at thinking through a problem which means it can handle more complex tasks, but also that it takes a lot longer to respond (minutes in some cases). It does mean it is a lot smarter (in human measures) including exceeding "human PhD-level accuracy on a benchmark of physics, biology, and chemistry problems". This doesn't mean it is AGI, or even close to AGI, but it is a big step forward. You can get access to the o1-preview on paid OpenAI subscriptions, but there is no API access as yet. So what? This is big news, but it isn't going to have big immediate impacts. It's big because it heralds a new approach to GPTs that makes them more capable at handling complex challenges in a single hit. It seems like it's level of intellect (for lack of a better work) is similar to GPT-4o, but it ends up being smarter because it can reason with itself to drive itself to think through a problem more comprehensively instead of just giving a zero-shot style answer. The effect is that there is less of a need for humans to coach the AI in how to address a problem and that will make it easier to use and deliver better outputs. I can see this being a massive multiplier on multi AI agent setups, albeit we will need to wait for API access to test this out. No word on costs as yet, but I would imagine given the response times, the compute times are much higher and that will translate to a much higher cost than standard ChatGPT models. So in summary, it's exciting and a big step forward, but its also early days and there are a lot of unknowns and no doubt over the coming weeks it will become clearer just how big of a deal this is. > Read More
YOU SHOULD READ THIS - GOOGLE'S NOTEBOOKLM AI PODCAST
Google has widely released the previously previewed "Audio Overview" feature to its AI note-taking and research app, NotebookLM. In effect this lets you automatically generate a podcast based on any document or set of knowledge you want - you can even upload multiple documents and it will fold them all into a single podcast. What's more you can also ask the AI any follow up questions on the topic you like and it will give you a customised answer. So what? This is amazing. It is a hugely powerful tool, people love podcasts and now you can turn any knowledge you want into an easily digestible audio format. The outputs are remarkably good. Businesses should be jumping on this for training, and parents should be jumping on this to help their kids. You can download the outputs to your phone and listen to them in the car, on the way to school, wherever... One of the best practical use cases of AI to date. > Listen to a Sample
SALES FORCE IS "HARD PIVOTTING" TO AI AGENTS
After a recent blog post by venture capital firm Andreessen Horowitz entitled "Death of a Salesforce: Why AI Will Transform the Next Generation of Sales Tech" that stated "AI will so fundamentally reimagine the core system of record and the sales workflows that no incumbent is safe.", Salesforce CEO Marc Benioff has seemingly responded by stating he recognises the challenge and that is why Salesforce is pivoting the whole company to AI agents. So what? I am on the same bandwagon as the Venture Cap mob - AI is going to tip traditional software development on its head and the big incumbents are at risk. But Salesforce does have one big advantage, data incumbency. They have all their customer's data in their systems so it will be easier for companies to just stick with them - if their AI tools can cut the mustard (which they don't at the moment). > Read More
NEWSCORP BANS AI TOOLS CITING ETHICS - NO ONE BELIEVES THEM
NewsCorp has banned journalists from using an AI based meeting notetaking app. As part of the communications sent to the team to justify the ban, their CTO stated that they are "setting the highest standard for ethical use of AI in our business". So what? See the article from a few weeks back where the NewsCorp executive chair accidently let slip they generate 3,000 articles per month using AI. When it comes to ethics I would have thought notetaking apps that help human staff were far more palatable on the ethics front than replacing human journalists with AI - but obviously I've missed something... > Read More
CBA LOOKING TO REPLACE STAFF WITH AI
The Commonwealth Bank will test a generative AI-based chatbot that could replace its 2500 contact centre staff. The Finance Sector Union, which said it was not consulted before the launch, is up in arms at the "disrespectful and tricky" rollout of the new technology. So what? The new tool will need to go through a lot of testing and training, but I don't doubt that AI will be able to drastically reduce the workload from the 50,000 customer service requests the CBA gets each day, especially given we have seen other companies achieve this. I doubt it will replace all the contact centre staff, but it might displace a fair proportion of them in time. As much as you can argue that it isn't "right" that this technology will take people's jobs, in the end the huge cost savings and ability to reduce customer service wait times will give CBA a massive competitive advantage that they won't turn their back on because of bad press around job losses. > Read More
STRAWBERRY NEWS ROUNDUP
So the new "reasoning" model from ChatGPT called o1 but more widely know as "Strawberry" has caused a bit of a stir in AI circles. Some are denouncing it as just GPT-4o placed into a loop, while others are claiming it is so good (in terms of how smart it is but also how good it is at deception) it is dangerous and needs to be regulated. While others are focussed on how it works - which it seems OpenAI wants to keep secret and to do so it is banning people who are trying to get the model to give up its instructions. So what? In our testing we are seeing the value of this new model. As a gauge of it's smarts, it impresses when we feed it our developer and business analyst tests - easily bypassing our benchmarks for junior applicants in those roles. We can see it being particularly useful in automated processes, like drafting email responses to customer service enquiries and in management and review roles in multi-agent AI teams. It's still early, and it is still a lot more expensive than the base models, but our feel is that these reasoning models will play an important role in the overarching AI business ecosystems of the future.
SALESFORCE SAYS COMMERCIAL AI IS ABOUT AGENTS NOT COPILOTS
Speaking after his "Dreamforce" conference, Salesforce CEO Marc Benioff has dismissed Microsoft’s Copilot as a rebranded "Clippy," and instead says the real future of AI in business is Salesforce’s "Agentforce". I guess you have to admire their ability to wedge the word "force" into names for everything. "Agentforce" is Salesforce's branding for their AI agents - not something they invented or have an exclusive offering in. So what? It irks me a little... but he is right. Not about "Agentforce", but about AI agents being where the value lies for businesses. We are at a point now where we can build autonomous AI agents that can work in a business to complete tasks, like customer service, and do it reliably, quickly and at a ridiculously low price... unless you buy them from Salesforce. > Read More
AI ROLLS OUT FOR TEACHERS IN NSW SCHOOLS
The NSW government is expanding its roll out of an AI app it has "developed" - giving it the exciting moniker of "NSWEduChat". All teachers in NSW will now get access, while student access remains in trial. The minister for Education shared that during a trail of the tool some teachers reported saving an hour per week... not very mind blowing. So what? This looks like it is just regular ChatGPT with some custom prompting and other elements over the top to create a school "safe" version of ChatGPT. But I guess it is a first step and the NSW Education Department trying to embrace AI use in schools, instead of putting its head in the sand, is probably a positive thing. > Read More
GOOGLE WILL FLAG AI GENERATED IMAGES - IF THE CREATOR ADMITS IT
Google has announced that it will soon start to include information on whether an image that is shown in search results is AI generated... but only if they contain the C2PA metadata. C2PA is the standard Google and a few others are trying to promote to enable image authenticity to be traced. So what? Not much... Flux and other popular AI image generators don't support C2PA and so they won't be able to be flagged as AI generated in Google results - so we won't end up being much better off than we are now in terms of knowing if an image was AI generated or not. In fact you could argue we will be worse off, because to the uninitiated the fact some images are flagged as AI generated might fool them into thinking the system is flagging all AI images, instead of just those that opt in to admitting it.