The inaugural meeting of the London Futurists in the Pub

London, 25th March 2023

London Futurists, a global community of futurists with over 9,000 members, has been holding meetings for over 15 years. During the pandemic, David Wood, its founder and chairman, started weekly virtual Zoom meetings. However, those living in London wanted to return to a previous tradition of meeting at a pub after the events held in London. Hence, Tony Czarnecki, the Managing Partner of Sustensis and a long-time member of London Futurists, is hosting the inaugural in-person meeting, which will be called ‘London Futurists in the Pub’ (LFiP). The event will be held on Wednesday, 29th March at 5:30 pm at the London and South Western pub, Clapham Junction – 100m from the station. If you are not a member of the London Futurists and would like to become one, please first register here. Those who are London Futurists members and would like to attend, please register via the official London Futurists website.

There is also a London Futurists in the Pub WhatsApp group. Those who attend are encouraged to join it, to stay in the loop on any last-minute changes or additions. You will get the link once you have registered on the London Futurists Meetup website.

The main objective of these meetings is to provide an additional forum for London Futurists to meet and exchange views in person on the subjects discussed at London Futurists events and, of course, to get to know each other, having fun over a meal or just a pint. So that no one gets lost in the crowd, it is suggested that you wear a badge with your name.

The format of the meetings is not set in stone and is open to discussion. However, for the first meeting Tony has prepared what seems to be a hot topic: when will AGI emerge? He has prepared a video, ‘Will AGI emerge by 2030?’, which covers most of the topics in the book he is finishing right now. There is also an extended slide presentation on this subject on the Sustensis website. If conditions in the pub allow (i.e., it is not too noisy), we may start with a short introduction and then give anyone present a few minutes to share their views. If it is too noisy, we will simply have one-to-one conversations on any subject of interest.

London Futurists in the Pub looks forward to seeing you there. Here is a summary of the subject of the meeting by its host, Tony Czarnecki.


The pace of change has never been as fast as it is today. What not so long ago took a decade now barely takes a year. But it is in Artificial Intelligence (AI) that a nearly exponential pace of change is most evident, particularly since 2022. The release of ChatGPT, along with several similar AI assistants, represents a significant advance towards Artificial General Intelligence (AGI) – AI with human-level intelligence. Ray Kurzweil, an eminent futurist, predicted in 2014 that AI would reach human-level intelligence by 2029. He did not, however, define what ‘human-level intelligence’ means, and there is still no agreement about that among AI researchers.

However, more important than a philosophical debate on the nature of intelligence is whether AGI will be able to outsmart us and escape our control by about 2030. Rather than a single moment, such a loss of control will be a gradual process, combined with subtle influence over our decisions, until AGI starts making decisions for us. A total loss of control over AGI will occur when we are no longer able to reverse such decisions. Since an AGI with human-level intelligence will continue to increase its capabilities exponentially, we will quickly lose control over its behaviour and its own goals. In the worst-case scenario, that may lead to the extinction of the human species. That is why we should consider all feasible options to extend the ‘AI’s nursery time’ beyond 2030. We could then better prepare ourselves for a future in which we are managed by a Superintelligence immensely more capable than the whole of humanity – and hopefully a benevolent master.

To protect our civilisation and the survival of humanity, we must fundamentally change our assumptions about the nature, scope and timing of the decisions and solutions needed for effective AI control. We have done that for global warming, although we must do much more to stay below a 1.5°C temperature increase. To prolong the period of human control over AI, we must likewise take far more significant, and sometimes painful, measures – proposed here, somewhat ironically, as ‘The Ten Commandments’ – if such control is to be effective and implemented in time.

Many of us consider ‘freedom’ our most treasured value. But we forget that there is one higher value: life. If the human species becomes extinct, it will mean the end of every human life. If you agree with that, then ask yourself what else could be done to control AGI effectively before it is too late. It may help to imagine that we are all aboard the ‘Titanic’ and each passenger must throw away some of their possessions to save themselves and the rest of the passengers. We are in a wartime situation, although this time the enemy is invisible and the stakes are our species’ survival. That is the situation we are in right now, and that is why I believe the following measures and decisions must be taken:

  1. Prepare for AGI emerging by 2030, not just as a new technology but as a new intelligent species, in many ways superior to humans,
  2. Maintain control over AGI beyond 2030, to gain more time to evolve alongside AGI, if we want to avoid extinction,
  3. Do not apply linear-world procedures to control AI, which is changing at a nearly exponential pace,
  4. Authorize the AI sector to control AI itself, based on a mandate from governments, since governments are too slow to control AI effectively,
  5. Start global AI control in the USA, since two-thirds of the ‘Western world’s’ AI sector is there. The existing US Partnership on AI (PAI) should be converted into an independent Global AI Consortium (GAIC), gradually joined by non-US organisations,
  6. Convert all major AI projects into companies and merge them into a joint-venture company, e.g. One-AI, supervised by GAIC, for more effective AI control,
  7. Create just one Superintelligence Programme, managed by the One-AI company, to achieve tighter AI control and to counterbalance China’s potential dominance in AI,
  8. Create a de facto World Government, initiated by the G7 from members of NATO, the EU, the European Political Community, or the OECD,
  9. Create an independent Global AI Control Agency (GAICA), mandated by the World Government, by integrating the US National AI Initiative with the G7 Global Partnership on Artificial Intelligence,
  10. Create a Global Welfare State to soften the turbulence of the transition to the AGI world. To achieve that, the World Government must carry out a rapid redistribution of wealth by setting up a Global Wealth Redistribution Fund.

The key objective of this approach is to extend the period of AI control. Only such a radical way forward may enable humans to retain that control for much longer than might otherwise be the case. The proposed measures are very difficult, but if we do not even try to implement them – which is quite likely, given how short-term the policies of most governments are – we will seal a very dangerous future for humanity. Conversely, if we follow such an approach, the future for humans may soon be unimaginably positive, as the last point in this approach suggests: the creation of a Global Welfare State.