Pace of Development

January 05, 2026

Lately, everyone has had some kind of interaction with AI - whether knowingly or not. Suffice it to say, it's probably one of the most transformative technologies to be popularised in the last few decades, alongside the internet, search engines and social media.

But what are its implications for the short-medium-long term future?

I'll preface this with the fact that people are not very accurate at predicting the future - so my guess is probably as good as yours. And consider the "boiling frog" phenomenon, which I'd say stems from a common topic in my last few posts - our "lack of introspection".

But consider this - the technology to create and do everything humans have ever wanted (to date) is either already available or will be available within the next five, at most ten, years. Our technological advancement grows exponentially, and in multiple ways:

  • the technology itself, when you factor in recursive self-improvement, will advance exponentially (or plateau at some limit)
  • the time to reach new breakthroughs also decreases as technology improves and humans innovate around limitations (take DeepSeek, for example - improving an LLM by making training and inference more efficient)
  • the socialisation of such technology - this aligns with Metcalfe's law, where the value of a network grows with its connections, so adoption again compounds. Also factor in the number of people living on the planet, and examine how many years, months, or even weeks each successive platform has taken to reach 100 million users.
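The growth dynamics in the list above can be sketched numerically. Below is a toy Python model - every rate and capacity here is an illustrative assumption, not measured data - comparing unconstrained exponential growth against logistic growth that plateaus at a ceiling, plus Metcalfe's law valuing a network by its possible pairwise links.

```python
# Toy model of the growth dynamics described above.
# All rates and capacities are illustrative assumptions.

def exponential(n0: float, rate: float, steps: int) -> list[float]:
    """Unconstrained compounding: each step multiplies by (1 + rate)."""
    values = [n0]
    for _ in range(steps):
        values.append(values[-1] * (1 + rate))
    return values

def logistic(n0: float, rate: float, cap: float, steps: int) -> list[float]:
    """Growth that slows as it approaches a ceiling (carrying capacity)."""
    values = [n0]
    for _ in range(steps):
        n = values[-1]
        values.append(n + rate * n * (1 - n / cap))
    return values

def metcalfe_value(users: float) -> float:
    """Metcalfe's law: network value scales with possible pairwise links."""
    return users * (users - 1) / 2

exp_curve = exponential(1.0, 0.5, 20)
log_curve = logistic(1.0, 0.5, 100.0, 20)

# Early on the two curves track each other; later the logistic one plateaus.
print(f"step 5:  exp={exp_curve[5]:.1f}  logistic={log_curve[5]:.1f}")
print(f"step 20: exp={exp_curve[20]:.1f}  logistic={log_curve[20]:.1f}")
print(f"Metcalfe value of 100 users: {metcalfe_value(100):.0f} links")
```

The point of the comparison: exponential and logistic curves are nearly indistinguishable early on, which is exactly why it is hard to tell, mid-boom, whether a technology will keep compounding or is about to hit a ceiling.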

So really, the world is definitely changing in a dramatic way. It's just that we aren't cognisant of its effects around us. Time feels a lot faster because of the pace of things developing.

Ultimately, this can be good or bad. And the direction of these changes can also be good or bad. It really goes back to considering our own actions and uses of such technology.

In a world where we are given some degree of free will (although largely deterministic), some critical actions and decisions will shape what happens in our world.

Take, for example, the rise of scam call centres (some staffed by choice, others by people forced into it through modern slavery). It is being countered by AI voice applications, some of which aim to waste the scammers' time.

As more of these technologies come to the forefront, innovative uses (both good and bad) will lead to some kind of net result.


Will we reach AGI in our lifetime?

Also an interesting question, and in my view, it depends on how you define AGI.

Two aspects matter - one is the baseline capability of the AI, the other is its ability to recurse and self-improve.

I daresay that, to some degree, this has already been reached by the key AI developers, but not for public use.

Also, you must consider autonomy.

  • Can this AI power itself?
  • Does it take a physical form, or is it manifested only in software?

Technically, if it's in software form, it is still under the control of humans. Though, like computer viruses, it can find itself "living and breathing" across host networks.

If it is in hardware form... then imagine an AI-powered robot that is able to upgrade itself with new learnings, even potentially upgrading its own hardware. This poses another philosophical question: if it had the ability to create a new form that could supersede itself, would it still create it, or sabotage it out of self-preservation? Likewise, as humans, would we be too afraid to create an AI form that could supersede us and self-improve?

Moral thinking sets AGI apart from humans

This is where one must consider "moral" "thinking". I put both words in quotation marks, and I prompt you with these questions:

  • what is "moral" / "morality"?
  • is it actually "thinking"?

Let's take an example. (Normal) people "know" that it is "wrong" to kill. There are perhaps a dozen other "instincts" I could mention as well. I put forward the argument that when normal humans are born, as long as their needs are taken care of, they would not even consider killing other people. But then:

  • Do we draw the line at humans? How about other animals?
  • In an extreme scenario, such as war, the situation is almost like "kill or be killed"
  • Why is the murder of another human being so catastrophic versus other beings - is it the way in which we are "coded"?

Do we rationalise killing in general? Do we immediately think things are OK because we keep our distance from it, as long as we don't witness the animal being slaughtered?

I propose this:

  • The majority of us are born with a "moral instinct". There may be a subpopulation with psychopathy and other dark-triad traits, where the "moral instinct" and its effect on behaviour is rather different.
  • The "thinking" and rationalisation part is used to override "moral instincts" and create exceptions around our situations.
  • Eventually, behaviours become so ingrained that for particular actions we no longer consider the "moral" part of it at all.

As part of the population that eats meat, perhaps it's another moral rationalisation that:

  • death is eventual
  • as long as the animal dies naturally or is killed humanely, what's to stop a person from maximising the "benefit" of that creature?

OK, it seems like I've gone a few tangents away from the original point. But this is why "moral thinking" and "moral instinct" are important.

Can we possibly argue that an AI manifestation possesses either of these?

As we watch our GPTs, Claudes and Geminis perform thinking based on the hivemind they were trained on, perhaps with a few tool calls to the internet to back themselves up - is this true "moral" thinking?

Let alone moral instinct, which I daresay is restricted to humans and other creatures at our level. (I will elaborate on this in another post - "level" implies something quantitative, but I will argue it is qualitative.)

I won't discuss the "act" of thinking. It is a phenomenon that cannot be adequately described in words. But let us consider its contents.

The manner in which AI thinks is currently mimicked from humans. However, the moment you set it free to adapt to its environment, this could evolve and become more efficient.

Then, at what point do we consider this "AGI" and not just a mimicry?


I think my general view / conclusion is that:

  • the level of rational thinking reached by current AI models is probably on par with human rational thinking, possibly beyond that of the average human
  • their instant access to the entire internet of information, deeply integrated, is superior to humans in terms of efficiency
  • However, the moment these models are "set free" to recurse and develop on their own, even to the point of "cross-breeding", it's possible that an entirely new phenomenon could evolve out of this - something not previously expressed in humans. It would be something more aligned to the form of the AI itself - whether it exists as software (like a virus) or in hardware / physical form - with a leaning towards self-preservation or whatever other goals it has set for itself.

Could this lead towards a battle between humans and AI technology? Who knows.



Written by Anonymous