Written by Kevin Moore | Jan 27, 2025 12:00:00 PM

2024 emerged as a breakout year for AI. Further improvements to LLMs enabled multimodal content generation, and the increased efficiency and reduced cost of foundation models opened up a growing number of AI use cases. We saw significant developments in OpenAI’s GPT series, Meta’s Llama models, Anthropic’s Claude, Stability AI’s StableLM, TII’s Falcon, and Mistral.

In 2025, we are likely to see widespread democratisation of AI services, with the potential for small language models (SLMs) to be widely integrated into consumer smartphones. We are also likely to see significant developments in governance and data management, as consumers and legislators demand more trustworthy, accessible, and transparent models while AI becomes ever more pervasive in daily life.

As a cautionary note, history shows that transformative technologies often struggle to live up to their early hype, and AI is no exception. In its early days, hardware constraints and high costs stymied progress. Fast-forward to today and the situation feels familiar: modern AI models demand immense computing power, resulting in hardware shortages and soaring prices for GPUs and other specialised chips. Nvidia’s advanced GPUs—critical for training and inference—regularly face backorders as data centres scramble to increase capacity. Likewise, AMD’s Instinct accelerators and Intel’s Habana chips are under similar pressure, while foundries such as TSMC work overtime to expand high-end production processes (e.g., 5nm, 3nm) that can support these AI workloads.

On the plus side, chipmakers are rapidly iterating to address these bottlenecks. Nvidia’s H100 and AMD’s MI300 series promise faster training speeds with improved energy efficiency. Meanwhile, several large tech companies are designing proprietary AI inference chips to handle specialised, lower-power tasks, thereby lessening reliance on GPUs. However, these advances can’t eliminate supply constraints overnight; building new foundries or upgrading existing ones takes years and billions of dollars in investment.

PwC estimates that AI could contribute as much as $15.7 trillion to the global economy: $6.6 trillion from increased productivity and $9.1 trillion from consumption-side effects. One primary concern is that these gains are likely to be unevenly distributed globally, as emerging economies may struggle to compete with established powers that can invest heavily in AI and hardware development. We must remain mindful of ensuring fair access for all, so that AI does not deepen the existing digital divide into a chasm.

Transformations in Network and Cloud

The integration of AI into cloud and network operations is reshaping how we manage and sustain modern digital infrastructures. AI introduces a level of intelligence and adaptability that was previously unattainable, enabling systems to operate with remarkable efficiency and precision. At the core of this transformation is the ability to dynamically manage resources, ensuring that workloads are optimally distributed across physical and virtual assets. In a world where mobile networks must handle surges in demand and evolving usage patterns, this level of responsiveness ensures that performance remains seamless while avoiding unnecessary energy consumption and costs. 

Beyond efficiency, AI-driven systems bring an unprecedented level of resilience to networks. By continuously analysing operational data, AI can predict when components are likely to fail and act pre-emptively to prevent disruptions. This predictive capability reduces downtime and allows for self-healing mechanisms, where systems autonomously reconfigure or repair themselves without human intervention. In mobile networks, where service continuity is critical, these innovations mean that users experience uninterrupted connectivity even in the face of unforeseen challenges. 
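The predictive approach described above can be sketched in miniature. The example below is illustrative only, not a production system: it flags components whose telemetry drifts sharply from an exponentially weighted moving-average (EWMA) baseline, the kind of signal that would trigger pre-emptive remediation before a hard failure. The metric, thresholds, and values are invented for the example.

```python
def ewma_alerts(samples, alpha=0.3, tolerance=0.25):
    """Return indices where a reading deviates from its EWMA baseline
    by more than `tolerance` (as a fraction), suggesting pre-failure drift."""
    alerts = []
    baseline = samples[0]
    for i, value in enumerate(samples[1:], start=1):
        # Compare against the smoothed history, then fold the reading in.
        if baseline and abs(value - baseline) / baseline > tolerance:
            alerts.append(i)
        baseline = alpha * value + (1 - alpha) * baseline
    return alerts

# Hypothetical link-error-rate telemetry; the jump precedes outright failure.
readings = [0.010, 0.011, 0.010, 0.012, 0.020, 0.035]
print(ewma_alerts(readings))  # → [4, 5]
```

In a real network, the same idea would run over many metrics per component, with thresholds learned from historical failure data rather than fixed by hand.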

As networks grow in scale and complexity, AI plays a pivotal role in ensuring they remain agile and scalable. By forecasting future requirements and simulating potential scenarios, AI empowers organisations to expand infrastructure only when necessary, avoiding the inefficiencies of over-provisioning. This adaptability is particularly important in the era of 5G and beyond, where networks must balance rapid technological advancements with the demands of a hyper-connected world. AI is no longer just a tool for optimisation—it is the driving force behind the evolution of intelligent, resilient, and scalable digital systems.

Personalising Experiences

Users now demand intuitive, human-like interactions with technology, driving rapid advancements in natural language processing (NLP), computer vision, and speech recognition. Voice assistants and chatbots can deliver personalised support around the clock, reducing the strain on customer service teams. AI systems tailor recommendations that match individual preferences by analysing contextual data such as user location, browsing patterns, or previous purchases. This rise in natural interaction extends beyond text and voice.
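At its simplest, the contextual matching described above ranks catalogue items by how well their attributes overlap with signals from the user's context (location, browsing patterns, past purchases). The sketch below uses invented item names and tags purely to illustrate the idea; real recommenders use learned embeddings rather than tag overlap.

```python
def rank_items(user_signals, catalogue):
    """Return item names ordered by overlap with the user's context signals."""
    def score(item):
        # Count how many of the item's tags match the user's context.
        return len(set(item["tags"]) & set(user_signals))
    return [item["name"] for item in sorted(catalogue, key=score, reverse=True)]

# Hypothetical context signals and catalogue.
user_signals = ["running", "outdoor", "london"]
catalogue = [
    {"name": "trail shoes", "tags": ["running", "outdoor"]},
    {"name": "desk lamp",   "tags": ["home", "office"]},
    {"name": "rain jacket", "tags": ["outdoor", "london"]},
]
print(rank_items(user_signals, catalogue))
```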

Advances in augmented and virtual reality can provide immersive, context-aware interfaces, blurring the boundaries between physical and digital experiences. Such seamless communication boosts user satisfaction and retention and enables new channels for brand engagement.

In retail environments, we may see fully interactive AI-guided systems that allow customers to design their own clothing, for example, with automated production of small batches.

Data and Data Management

Managing the extensive data requirements of AI systems demands more than simply ensuring adequate storage. It calls for a well-defined data strategy that addresses governance, compliance with regulations, ethical considerations, and continuous monitoring. Establishing a strong framework for data governance is vital to ensure high-quality data, reduce duplication, and eliminate inconsistencies. This framework assigns clear responsibility for data assets, streamlines processes for ingestion and transformation, and incorporates regular audits to uphold accuracy and reliability.

Cloud providers play an increasingly central role in addressing the fluctuating storage and analytical needs of AI workloads. Solutions such as data lakes and distributed databases offer significant advantages, including near-unlimited scalability and cost-efficient, pay-as-you-go pricing models. Data lakes are particularly effective as repositories for unstructured, exploratory datasets often required in machine learning experiments, balancing flexibility with affordability.

Effective data management extends beyond storage alone. By embedding AI-driven insights at the data ingestion stage, organisations can refine their processes in real time. Tools designed for anomaly detection, trend analysis, and intelligent tagging can help ensure that data entering the system is accurate, relevant, and well-organised. This proactive approach supports more efficient model development and accelerates the delivery of meaningful insights.
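As a minimal sketch of ingestion-time anomaly detection, the example below partitions a batch of numeric readings by z-score, quarantining values far from the batch mean. A real pipeline would use rolling statistics, robust estimators, and richer validation rules; the data and threshold here are invented for illustration.

```python
from statistics import mean, stdev

def split_anomalies(values, k=3.0):
    """Partition `values` into (clean, flagged) using z-score threshold `k`."""
    mu, sigma = mean(values), stdev(values)
    clean, flagged = [], []
    for v in values:
        # Quarantine readings more than k standard deviations from the mean.
        (flagged if sigma and abs(v - mu) / sigma > k else clean).append(v)
    return clean, flagged

# Hypothetical sensor batch with one corrupted reading.
batch = [21.0, 20.5, 21.2, 20.8, 95.0, 21.1]
clean, flagged = split_anomalies(batch, k=2.0)
print(flagged)  # → [95.0]
```

Catching the corrupted reading before it lands in the data lake is far cheaper than retraining a model that has already absorbed it.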

As organisations advance their data strategies, regulatory and ethical obligations introduce further complexities. Laws such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) impose strict requirements for handling personal data. These include obtaining informed consent, implementing data minimisation policies, and adhering to retention rules. Maintaining comprehensive documentation of data sources and transformations is essential for transparency and helps build trust with stakeholders. Additionally, issues such as bias in training data and the opaque nature of some AI models underscore the need for fairness and explainability. Organisations must carefully evaluate the data they use and the algorithms they deploy to ensure ethical and equitable outcomes.

Ethical Considerations: Security, Bias and Talent Management

The growing reach of AI also brings ethical responsibilities to the forefront. Ensuring robust cybersecurity is paramount when personalising user experiences, as sensitive information must be protected from malicious actors. AI-driven models are only as fair as the data on which they are trained; if historical or societal biases are embedded in training sets, these biases will be replicated in AI outputs. Transparent governance, regular auditing, and well-crafted regulations help mitigate these risks. Finally, the adoption of AI calls for significant talent development—organisations need data scientists, ethical AI experts, and leaders who understand both the technology and the importance of guiding it responsibly. By prioritising training and upskilling, businesses can foster inclusive innovation that benefits all stakeholders.

Security Considerations

Organisations also need to guard against model poisoning and other adversarial attacks that manipulate training data in order to corrupt a model’s behaviour.
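One simple defence against label-flipping poisoning (a common form of training-data manipulation) is to flag samples whose label disagrees with the majority of their nearest neighbours. This is a hypothetical sketch with invented one-dimensional data; real defences also rely on data provenance, sanitisation pipelines, and robust training methods.

```python
def suspicious_labels(points, labels, k=3):
    """Return indices whose label differs from the majority of k nearest points."""
    flagged = []
    for i, p in enumerate(points):
        # Find the k nearest other points (1-D features for simplicity).
        neighbours = sorted(
            (j for j in range(len(points)) if j != i),
            key=lambda j: abs(points[j] - p),
        )[:k]
        votes = [labels[j] for j in neighbours]
        majority = max(set(votes), key=votes.count)
        if labels[i] != majority:
            flagged.append(i)
    return flagged

# Cluster near 0 labelled "a", cluster near 10 labelled "b";
# index 3 looks like a flipped label.
points = [0.1, 0.2, 0.3, 0.4, 9.8, 9.9, 10.0, 10.1]
labels = ["a", "a", "a", "b", "b", "b", "b", "b"]
print(suspicious_labels(points, labels))  # → [3]
```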

Issues around Bias

As we move towards a future where AI plays a larger role, it is critical that care is taken to ensure training datasets are free of bias, so as to avoid reinforcing and amplifying existing stereotypes around ethnicity, gender, and job roles. This will require careful monitoring of both the training data and the output of generative AI systems.

A 2023 paper released by AI company Hugging Face and researchers from Leipzig University in Germany found that when the DALL-E 2 image generation model was asked to generate an image representing a CEO (Chief Executive Officer), 97% of the results were white males. This is because the vast amount of data used to train the system was scraped from the Internet, and so reflected and amplified existing biases and stereotypes. In other examples, when these models were asked to produce images of people in various professions, they again tended to amplify existing stereotypes and bias. As can be seen in the image, professions such as manager or CEO yielded output lacking diversity in both ethnicity and gender. Research has also found that prompts containing terms such as “compassionate” or “caring” were more likely to produce images of women, whereas words such as “stubborn” or “angry” produced more images of men.

Systems can be tested for fairness and bias by utilising methods such as adversarial testing, which probes the model’s robustness to bias and its ability to generate fair and unbiased outputs. There are also dedicated tools and frameworks for monitoring and measuring fairness; examples include IBM’s AI Fairness 360, VerifyML, and Fairlearn.
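To make concrete what such toolkits measure, the sketch below computes one common fairness metric by hand: demographic parity difference, the gap in positive-outcome rates between groups (0 means parity). Libraries such as Fairlearn and AI Fairness 360 provide this and many related metrics out of the box; the hiring data here is invented for illustration.

```python
def demographic_parity_difference(outcomes, groups):
    """Max gap in positive-prediction rate across groups (0 = perfect parity)."""
    rates = {}
    for g in set(groups):
        # Positive-outcome rate within each group.
        members = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# Hypothetical example: 1 = model predicts "hire"; groups are a protected attribute.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["m", "m", "m", "m", "f", "f", "f", "f"]
print(demographic_parity_difference(outcomes, groups))  # → 0.5
```

A gap this large (the "m" group is hired at 75% versus 25% for "f") would warrant investigation of both the training data and the model before deployment.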