

The evolution of AI to become a creator

In recent days we have heard the expression Generative Artificial Intelligence, or generative AI, everywhere. The credit goes, above all, to OpenAI's latest developments in its GPT model, Microsoft's improvements to its products, and systems like Claude. But few have stopped to explain in detail what Generative Artificial Intelligence implies, or its foundations. Despite this, it powers more and more systems, from ChatGPT to DALL-E.

What is Generative Artificial Intelligence?

This type of Artificial Intelligence is known as generative because it is able to create something that does not already exist. That is its main difference from discriminative Artificial Intelligence, which is dedicated to making distinctions between different types of inputs, and tries to answer questions that involve identifying or choosing between options.

For example, a discriminative AI will be able to answer a question about an image, such as whether it shows one thing or another. But it won't be able to create an image from simple instructions. Something that generative AI can do.

Despite having made a lot of noise lately, generative AI has been around for quite some time. It goes back, in fact, to the appearance of Eliza, a chatbot that was quite popular years ago and that simulated a therapist you could talk to. It was created at MIT and launched in 1966. It was a revolution then, despite being quite rudimentary. And years of work and research have evolved generative AI so much that Eliza now looks like something made by beginners.

The arrival of DALL-E, Stable Diffusion and, above all, ChatGPT has turned Artificial Intelligence upside down, as well as the perception that the general public has of it. The first two allow the generation of realistic images from simple instructions.

The third is even capable of holding a text conversation with humans, and providing certain types of information. It is likely to become multimodal soon, thanks to the evolution of its underlying model, GPT, to version 4. For now it only responds with text, but in the future it may also be able to work with multimedia elements.

We usually refer to these systems, and others like them, as models. The name is not accidental: all three represent an attempt to simulate or model some aspect of the real world based on a set of information about it, sometimes a very large one.

How does Generative Artificial Intelligence work?

This type of Artificial Intelligence uses machine learning to process large amounts of data, such as images or text. Most of this information is taken from the Internet. After processing it, the model is able to determine which things are most likely to appear alongside others that have already appeared. That is, it generates text by predicting which word is most likely to come after the words it has already produced.
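This next-word idea can be illustrated with a heavily simplified sketch: a bigram model that counts, over a toy corpus, which word tends to follow which. The corpus and every name here are invented for illustration; real models use neural networks trained on billions of documents, not raw counts.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the huge datasets real models are trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most likely to follow `word` in the corpus."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" follows "the" more often than "mat" or "fish"
```

Real generative models replace these raw counts with learned probability distributions conditioned on much longer contexts, but the core idea, pick the most likely continuation, is the same.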

Most of the programming work in generative AI is dedicated to creating algorithms that can distinguish the things that interest the creators of the AI they want to develop. In the case of ChatGPT, those are words and phrases. In the case of DALL-E, graphics and visual elements.

But above all, it must be taken into account that this type of AI generates responses and outputs based on the assessment of a huge set of data, which has been used to train it. With that data, it responds to requests and instructions with images or phrases that, based on what is in the dataset, the generative AI deems likely to be appropriate.

The autocomplete that appears when you type on your smartphone, or in Gmail, which suggests words or parts of sentences, is a low-level generative Artificial Intelligence system. ChatGPT and DALL-E are quite a bit more advanced.

Training generative AI models

The process by which models are developed to capture and process all the data they need to function is known as training. Two techniques are usually used for this, and each is more or less suitable depending on the model. For example, ChatGPT uses what is known as a Transformer (hence the T in its name).

A transformer derives meaning from large chunks of text. In this way, the model manages to understand what the different semantic components and words mean, and how they can relate to each other. In addition, it can determine how likely they are to appear next to each other. These transformers run unsupervised on a large set of natural language text, using a process called pretraining (the P in ChatGPT). Once that process is finished, the humans in charge of working with the model fine-tune it through interactions with it.
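The "how strongly do these tokens relate" step can be sketched as a minimal scaled dot-product self-attention computation in plain NumPy. Everything here (the random vectors, the tiny dimensions) is invented for demonstration, and real transformers additionally use learned query, key and value projections, which this sketch omits.

```python
import numpy as np

np.random.seed(0)
tokens = np.random.rand(4, 3)  # 4 toy "token embeddings", 3-dimensional each

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Each token scores every other token; the normalized scores say how
# strongly each pair of tokens relates.
scores = tokens @ tokens.T / np.sqrt(tokens.shape[1])
weights = softmax(scores)   # one row per token, each row sums to 1
context = weights @ tokens  # tokens re-expressed through their relations

print(weights.round(2))
```

Stacking many layers of this operation, with learned projections in between, is what lets a transformer model which words are likely to appear together.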

Another technique used to train generative Artificial Intelligence models is the Generative Adversarial Network, or GAN. With it, two algorithms are set to compete with each other. One generates text or images based on probabilities derived from a large data set. The other is a discriminative Artificial Intelligence, which has been trained by humans to assess whether the output is real or generated by Artificial Intelligence.

The generative AI repeatedly tries to fool the discriminative one, automatically adapting to produce successful responses. Once it achieves a consistent and solid win over the discriminative AI, the latter is tuned again by humans, and the process starts all over again.

In this type of training, it must be taken into account that although humans are involved, most of the learning and adaptation happens automatically. Many iterations are needed to get the models to the point where they produce interesting results. Also keep in mind that this is a process that requires a lot of computing power.
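The adversarial loop described above can be sketched in a deliberately tiny form. In this toy (all numbers and parameters invented for illustration), the "real" data are values near 5.0, the discriminator's entire knowledge is an estimate of where real data lies, and the generator is a single parameter that adjusts itself to fool that estimate. Real GANs use neural networks for both players.

```python
import random

random.seed(42)

REAL_MEAN = 5.0   # the distribution the generator must learn to imitate
gen_mean = 0.0    # the generator starts far from the real data
disc_mean = 0.0   # the discriminator's belief about where real data lives
lr = 0.05         # learning rate for both players

def real_sample():
    return REAL_MEAN + random.gauss(0, 0.1)

for step in range(2000):
    # Discriminator step: move its estimate toward fresh real samples,
    # so fakes that land far from disc_mean are easy to reject.
    disc_mean += lr * (real_sample() - disc_mean)

    # Generator step: produce a fake, then nudge the generator toward
    # whatever currently fools the discriminator.
    fake = gen_mean + random.gauss(0, 0.1)
    gen_mean += lr * (disc_mean - fake)

print(f"generator mean after training: {gen_mean:.2f}")  # ends up close to 5.0
```

Once the generator's output is statistically indistinguishable from the real data under the discriminator's criterion, the discriminator would be re-tuned by humans and the loop restarted, exactly as described above.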

Negative points and use cases

Although the results of Generative Artificial Intelligence models are impressive, not everything about them is good or beneficial. In fact, in addition to presenting limitations, they can have many negative impacts in different sectors. They are also still prone, especially the text models, to what have been called hallucinations, which can lead them to make false claims, and even to mock and insult humans.

As for its negative impacts, it is quite clear that the possibility of creating content easily and cheaply can affect the writing of content that is not particularly creative. But in many cases these systems can fool the humans who read that content. That has meant that many students have already used them in their school work, even at the university level. They are also being used by email spammers to write their messages and send them to thousands of people with minimal effort.

A whole debate has also arisen around the intellectual property of the images and texts generated by a generative AI. There are many discussions about who owns them, and the related legal issues are only beginning to be debated.


Another negative point of these AIs is that, in many cases, they may be biased, especially since their answers are completely conditioned by the type of data with which they were trained. If that data is biased, and the models work without rules or limits, we can end up with sexist, racist or classist AIs, for example. To avoid this, OpenAI, the creator of ChatGPT, equipped its model with safeguards against these biases before giving the public access to the chatbot.

But despite all this, generative AI has multiple use cases. ChatGPT, for example, can extract information from very large data sets to answer questions asked in natural language and give useful answers. Because of this, it can be very useful for search engines, which are already rushing to test its integration. Not only the general-purpose ones, which have been the first to approach it; it can also be useful for more specific, sector-focused search engines.


A look at the key NVIDIA news at GTC 2023

NVIDIA presented a lot of news yesterday taking advantage of GTC 2023, a very important event that has become a reference in the technology sector at a professional level.

As the list of novelties is very long, and it would be counterproductive to collect them all in a single huge article, we are going to focus on the four most important announcements and detail them so that you know all their key points.

In general terms, the most interesting novelties NVIDIA has presented this year focus above all on hardware, and are not limited to traditional computing but extend to quantum computing. It is precisely with this that we open this article, although we must not forget that the green giant has also confirmed an important push to bring AI to every industry, and has strengthened its collaborations with the main giants of the sector.


NVIDIA DGX Quantum

It is the first GPU-accelerated quantum computing system, and it represents a major inflection point, as it shapes a next-generation computing solution powered by NVIDIA's Grace Hopper superchip and the open source CUDA Quantum programming model, coupled with the world's most advanced quantum control platform, OPX, from Quantum Machines.

With this system it is possible to unite the best of both worlds, classical computing and quantum computing, and this will give researchers all the potential they need to create extraordinary applications without having to give up advanced features that are key in quantum computing, such as calibration, control, quantum error correction and hybrid algorithms.

At the hardware level, the NVIDIA DGX Quantum has a Grace Hopper superchip, as we said at the beginning, and is connected via a PCIe interface to a Quantum Machines OPX+ system. This reduces latency to less than a microsecond and allows near-instant communication between graphics cores and quantum processing units.

Five new GPUs for professional laptops

NVIDIA has also confirmed the release of five new professional laptop GPUs, based on the Ada Lovelace architecture and designed to shape more powerful, efficient and lighter workstations. The RTX 5000 is the most powerful of the five: it has 9,728 shaders, 76 RT cores and 304 tensor cores, comes with 16 GB of GDDR6 over a 256-bit bus, and has a configurable TGP of between 80 and 175 watts.

The RTX 4000 is a step behind with its 7,424 shaders, 56 RT cores and 232 tensor cores; it has 12 GB of GDDR6 on a 192-bit bus and its TGP can be configured between 60 and 175 watts. The RTX 3500 fits in the upper-middle range with its 5,120 shaders, 40 RT cores and 160 tensor cores; it also has 12 GB of GDDR6 on a 192-bit bus and its TGP can be configured between 60 and 140 watts.

The RTX 3000 is a mid-range model with 4,608 shaders, 36 RT cores, 144 tensor cores, a 128-bit bus, 8 GB of GDDR6, and a configurable TGP from 35 to 140 watts. Finally we have the RTX 2000, which has 3,072 shaders, 24 RT cores, 96 tensor cores, a 128-bit bus and 8 GB of GDDR6. Its TGP can be set between 35 and 140 watts. NVIDIA has also announced an RTX 4000 Ada SFF for desktops, which is nothing more than a small version equipped with 6,144 shaders.

NVIDIA H100 NVL, a dual GPU specialized in Large Language Models (LLM)

The green giant has also surprised us with a Hopper-based dual-GPU configuration specifically designed to work with Large Language Models (LLMs). The term may not sound familiar to some of you, and you may even have raised an eyebrow, but if I tell you that ChatGPT is built on one, you will surely understand the importance of this new hardware.

The acronym NVL refers to NVLink, the interconnect technology that joins the two GPUs that make up this accelerator, using a total of three Gen4 bridges. At the specification level, we are looking at an impressive solution, since it offers up to 68 TFLOPs of FP64 power, reaches 134 TFLOPs of FP64 with its tensor cores, and can offer up to 1,979 TFLOPs of TF32 with those same cores.

In FP16, the power with the tensor cores is 3,958 TFLOPs, and at INT8 we have 7,916 TOPs. The numbers speak for themselves, and yes, they are truly impressive. The NVIDIA H100 NVL has a 6,144-bit bus, features 188 GB of 5.1 GHz HBM3 memory, and offers up to 7.8 TB/s of bandwidth. LLMs need large blocks of memory and high bandwidth, so NVIDIA has not skimped here.



Intel Releases F-Tile-Enabled Intel Agilex 7 FPGAs

The chip giant has launched the Intel Agilex 7, a new generation of FPGA (Field Programmable Gate Array) solutions, with which the Santa Clara company has once again opted to maintain high versatility and a high level of performance.

According to Intel, the Agilex 7 is equipped with the fastest FPGA transceivers available on the market today, and has been designed to help different user profiles address challenges in the most bandwidth-demanding areas, which are typically found, one way or another, in data-intensive industries. The two clearest examples would be data centers and high-speed networks.

These new FPGAs have been built with cloud, network and embedded systems in mind, and are F-Tile-enabled. Taking a look at the performance figures, we find very interesting data: they are capable of reaching speeds of up to 1.6 Tbps, and they can work with 25/50G passive optical networks as well as with different broadcast standards.

Shannon Poulin, Intel corporate vice president and general manager of the Programmable Solutions Group, commented:

“Intel’s Agilex 7 with F-Tile is loaded with transceivers that offer more flexibility, bandwidth and performance data rate than any other FPGA on the market today. Coupled with Intel manufacturing and our supply chain resilience, we are offering multiple industry-leading products and capabilities that our customers and the industry require to address a wide variety of critical business needs.”

Intel Agilex 7 FPGAs have been manufactured on the Intel 10 nm node, also known as Intel 7, which means they use a state-of-the-art process and are optimized to offer good value in terms of performance and power consumption. Compared to Intel's previous generation, these new FPGAs have achieved double the bandwidth per channel while reducing power and maintaining a compact and functional form factor. This was precisely one of Intel's most important innovations at MWC 2023.



The United States says no to the human chip tests that Elon Musk wanted to carry out

Neuralink is undoubtedly one of the Elon Musk companies that has generated the most controversy in recent years. Its goal is, according to the billionaire's own words, to make the blind see again, to let paraplegics walk again, and to turn human beings into cyborgs so that they can overcome the limitations inherent to their own condition, that is, to their own humanity.

It presents itself as a medical company, which makes sense because it makes bio-implants, but these have raised many questions, and in fact its experiments with monkeys have given rise to numerous accusations of animal abuse. In the midst of all this controversy sits one of the most important milestones for Elon Musk's company: being able to test in humans.

Reuters has published an extensive and interesting article on this subject, in which it analyzes the most important points and also discusses the resounding rejection by the FDA (US Food and Drug Administration) of Neuralink's request to start its tests with human beings.

The FDA rejection occurred in 2022, but had been shrouded in significant secrecy by Elon Musk's firm and did not come to light until now. According to the source, when the FDA received the petition, it responded with several dozen problems that Neuralink must fully resolve before it can be reconsidered for the human testing phase. These mainly include safety issues, such as the risks of implanting a lithium-ion battery, the risk of wires migrating to other areas of the brain, and other issues related to the possibility of removing the implants without damaging brain tissue.

Some of these issues are so basic that, honestly, it makes me wonder what phase Neuralink is really in, and it leads me to question whether they are really prepared to make the jump to human trials even in the long term. They have a lot of work to do, and at this point it is important to remember those leaks claiming that the monkeys the company used to test its implants not only died, but did so with extreme suffering. I don't think we want to see that in humans.
