Roundup – Twitter tick shake-up; AI experts urge pause on advanced training
Twitter unveils blue tick shake-up
In a bid to clamp down on AI bots, Twitter owner Elon Musk announced that from mid-April only verified subscribers will be able to vote in polls and have their posts recommended to other users. Non-paying accounts will also no longer be included in the “For you” stream of recommended tweets.
Musk said the changes were “the only realistic way to address advanced AI bot swarms taking over. It is otherwise a hopeless losing battle.”
Yet the move has drawn scrutiny from social media users. A former Twitter employee told the BBC that “verified users will use their power and their presence on the platform to influence anything from misinformation to actual harm for users all around the world. It’s a silent threat that no one is seeing.”
Cerebras publishes seven trained generative AI models to open source
To advance the use of AI, researchers need existing models on which to base their work rather than building one from scratch, according to the startup. To that end, Cerebras has finished training seven generative AI models and released them under the Apache 2.0 open-source licence; the models can be run on any hardware. Yet not all platforms are so collaborative.
OpenAI has abandoned its commitment to open source and sharing work now that there is much more money at stake, said Cerebras. The company has declined to publish technical details about GPT-4, and Google has been similarly quiet about its Bard chat system.
Experts urge pause on training AI systems more powerful than GPT-4
A group of artificial intelligence experts and industry executives, including Elon Musk, is calling for a six-month ‘pause’ in the training of systems more powerful than GPT-4. In an open letter, signed by more than 1,000 people including Musk, Apple co-founder Steve Wozniak and Stability AI CEO Emad Mostaque, the signatories requested a pause on advanced AI development until shared safety protocols for such designs are developed, implemented and audited by independent experts.
The letter also detailed potential risks to society and civilisation posed by human-competitive AI systems, in the form of economic and political disruption, and called on developers to work with policymakers on governance and regulatory authorities.
Britain opts for ‘adaptable’ AI rules, with no single regulator
The UK plans to split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition.
AI could improve productivity and help unlock growth, but there are concerns about the risks it could pose to people’s privacy, human rights or safety, the government said. It wants to avoid “heavy-handed” legislation that could stifle innovation and will instead take an adaptable approach to regulation based on broad principles such as safety, transparency, fairness and accountability.
Westminster’s approach, outlined in a policy paper published on Wednesday, means it can adapt its rules as the technology develops. Over the next 12 months, existing regulators will issue practical guidance to organisations, as well as other tools and resources like risk-assessment templates.