

From the invasion of Ukraine to GPT-3: reviewing the year's technology news (I)

By Thomas Grimm

The year that is about to end leaves behind plenty of technology headlines. From the hope that 2022 could be the year of recovery, we quickly moved on to the concern that the invasion of Ukraine caused in all of us. Suddenly, a world that should have been heading towards a new normality entered a deep crisis driven by rising energy prices, skyrocketing inflation and component production that has yet to recover.

Of course, there has been other news that has given the year a different shape. Elon Musk decided to buy Twitter and caused an earthquake whose tremors and aftershocks we are still feeling; AI suddenly grew up and showed us the enormous possibilities (and fears) raised by models such as DALL-E or ChatGPT; and the main hyperscalers decided that 2022 was the best year to launch their new cloud regions in Spain. Below we present the first part of the special report with which we want to review the great technology stories of this year. Tomorrow, the second part.

The war in Ukraine and the flight of the Russian tech industry

We started 2022 thinking that the worst of the pandemic was over and that the world could return to some normality. Then, suddenly, Putin began insisting that Ukraine had to return to "Mother Russia", and the ensuing war became the year's big tragic story, with tremendous consequences on all fronts: humanitarian, of course, but also energy, economic and technological.

The invasion aggravated the semiconductor and chip crisis the world has been suffering for three years. For the first time, we were able to witness what a real cyber war looks like when it is waged publicly and on a large scale. The armed conflict also led the vast majority of Western technology companies to close or sell their businesses in Russia, while a large part of the country's tech talent fled abroad for fear of being called up to the front.

For its part, Putin's government is cutting ties with any company outside its sphere of influence, has begun to look to China as its main supplier of technology components (although Beijing is publicly reluctant, given the possible consequences) and has stepped up surveillance of its citizens on the Internet, even threatening to disconnect the country from the global network entirely. In the medium term, Russia hopes to be able to produce its own chips, but no analyst sees this as a realistic option, which could condemn one of the world's major powers to becoming a technological pariah if things do not change.

From VMware to Orange: the acquisitions of the year

The fact that the expansive period that has marked recent years has come to an end for a large part of the technology industry does not mean that 2022 was not, once again, a near-record year for mergers and acquisitions. First of all, of course, there is the acquisition of Twitter by Elon Musk, incomprehensible to many, who after various threats ended up paying 44 billion dollars for the social network.

If we turn to "corporate" technology, the most interesting acquisition of the year was perhaps that of VMware, for which Broadcom agreed to pay 61 billion dollars; or that of Cerner by Oracle, the most important purchase in its history.

The two big moves Microsoft made this year have also had an impact: the acquisition of Nuance and the purchase of video game giant Activision Blizzard. In the more creative arena, designers were shocked by Adobe's acquisition of Figma. We also witnessed the end of the Citrix soap opera, saw AMD receive the green light to complete its purchase of Xilinx and, in Spain, the merger of the Orange and MásMóvil businesses.

Elon Musk and the inexplicable acquisition of Twitter

No acquisition has generated more headlines than that of Twitter by Elon Musk. First, because of how unusual the purchase offer was and the amount of money the owner of Tesla said he was willing to put on the table to take over the social network. And later because of the strange about-face Musk performed, which almost landed him in court: once the purchase agreement was signed, he refused to "pay the bill."

The most entertaining thing "from the outside", however, was watching how, in his role as the new CEO, he began making decisions that were controversial at best and completely absurd at worst. The dismissal of the board of directors was followed by the resignation of key collaborators and the layoff of more than half of the company's workers.

And the turmoil had only just begun. Hurricane Musk caused the departure of advertisers, the end of content moderation (which triggered hate speech), the suspension (later reversed) of journalists' accounts, the ban on linking to other social networks and, finally, a surreal poll that ended with Musk himself being replaced at the helm of the social network.

From DALL-E to ChatGPT: AI is starting to "scare"

Great technological promises can seem little more than castles in the air for years, only to suddenly materialize with enormous force. That is precisely what we have begun to experience this year with Artificial Intelligence.

First, we discovered how models such as DALL-E or Stable Diffusion put complex image generation systems, based on deep learning neural networks, within reach of almost anyone, making it possible to generate all kinds of illustrations and images, even artistic works, from text descriptions with minimal effort.

And before we had recovered from the impact this advance could have on designers, digital artists and other image professionals, ChatGPT arrived at the end of the year: a model that completely changes how we search for content on the Internet and that could make Google itself tremble.

The Metaverse Year That Wasn’t

If this has been the year of AI, the same cannot be said of the metaverse. The high expectations Meta placed in the development of this virtual universe were quickly frustrated when, as the months went by, we began to discover the scant technical progress and, worse still, the complete lack of interest generated by the projects that were launched.

From an event sponsored by the European Commission that only six people attended, to municipal initiatives such as a virtual Benidorm or the virtual reality hub sponsored by Telefónica, if the metaverse has shown anything, it is that it is still taking its very first steps. Not even the employees of Zuckerberg's company are convinced that its flagship project is moving forward.

The difficulty of developing this technology has contributed to Meta's collapse on the stock market, which in turn led to the dismissal of almost 15% of its employees and the departure of John Carmack, one of its key figures. Right now the technology is finding only very limited applications in the industrial sector and, in the best of cases, it will take between five and ten years to begin fulfilling any of the promises made to us a decade ago.

After Zuckerberg's failure, hope now rests with Apple, the company capable of succeeding where others have failed, which is expected to present its first device for this new world in 2023.

In the second part of this special, we will also look at how, along with the metaverse, the dreams of those who thought they could get rich by investing their savings in cryptocurrencies have begun to evaporate, and how countries are competing fiercely to convince chip producers to build new factories on their soil. Don't miss it!



Intel confirms that Emerald Rapids will arrive in the second half of this year

By Thomas Grimm

The chip giant has reaffirmed the key points of its roadmap and maintained its commitment to meeting the originally scheduled dates. This is very important for Intel; in fact, it is part of the strategy its CEO, Pat Gelsinger, described in an interview when he referred to executing on time and as planned.

It is not difficult to understand: it is useless to have a winning architecture on an advanced node with a huge transistor density if it is not viable on the wafer, whether for technical or economic reasons, and you therefore have to delay it repeatedly. Intel has run into this problem on numerous occasions. Remember, for example, what happened with the transition to 10 nm, as well as the delays suffered by the new Sapphire Rapids processors.

In fairness, it must be recognized that in all these cases Intel's sin has been ambition; that is, it had excellent ideas for designing chips that were very advanced for their time, but when making the jump to the wafer it ran into the harsh reality that it had bitten off more than it could chew. Sapphire Rapids is one of the best examples we can cite today: these are very advanced processors, with numerous specialized accelerators, that start from a truly unique approach. A pity that they suffered so many delays.

Intel has internalized this problem, and for this reason, as we said at the beginning of this article, it has reaffirmed its commitment to its latest roadmap at every level. This means that Meteor Lake, the first general-consumer architecture on the Intel 4 (7 nm) node, will arrive later this year, and that Emerald Rapids, the successor to Sapphire Rapids, will also be released in the second half of this year. Granite Rapids will move to 2024.

According to the latest information we have had access to, Emerald Rapids will be a minor evolution of Sapphire Rapids: it will keep the MCM design based on interconnected blocks, it will top out at 64 cores and 128 threads, it will also feature specialized accelerators and it will be manufactured on the Intel 7 node.

This leaves us with a truly frantic release cadence, since, as you may have noticed, we are talking about annual cycles, which undoubtedly represents a major challenge. However, if Intel manages to deliver, it is clear that it will significantly improve its position and make things very difficult for AMD.



IBM and Rapidus want to manufacture semiconductors at 2 nm in 2025

By Thomas Grimm

Competition in the world of semiconductors is fierce. TSMC is the undisputed leader in chip manufacturing (the designs are carried out by its customers), but IBM does not intend to be left behind in the nanometer race and has already defined a strategy to improve its position in this sector: an alliance with the Japanese company Rapidus.

The alliance with the Japanese semiconductor consortium Rapidus has as its main objective the establishment of a 2 nm chip production line in the first half of 2025. This first production line will work with prototypes, meaning they will not be commercial units; it is therefore a risk phase that will require a very significant investment from both companies.

If that first move goes well, IBM and Rapidus will put themselves in a prime position within the semiconductor industry and will be right up there with TSMC, since the Taiwanese company also plans to start producing chips on the 2 nm node in 2025, provided things go according to its own forecasts, of course.

Right now we are in an important transition stage. The 5 nm node is the most widely used today, but the jump to 3 nm will become a reality very soon, and this year some giants of the sector will begin adopting it. Barring a last-minute surprise, Apple will be the first to launch a smartphone SoC built on TSMC's 3 nm node, the Apple A17, which will be used in the iPhone 15 Pro and iPhone 15 Pro Max.

Leaps in the manufacturing process are important because they reduce the size of the transistors and make it possible to introduce performance and efficiency improvements. They also reduce the space occupied on the silicon wafer, which ultimately translates into a greater number of chips per wafer, with everything that entails in terms of manufacturing costs.
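To give a rough, back-of-the-envelope sense of that last point, here is a small sketch using the classic dies-per-wafer approximation; the wafer diameter and die areas are illustrative assumptions, not figures for any specific product.

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic approximation of usable dies on a round wafer (ignores defects)."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

# Hypothetical comparison: shrinking the same design from 100 mm2 to 60 mm2
# on a standard 300 mm wafer yields noticeably more candidate chips.
print(dies_per_wafer(300, 100))
print(dies_per_wafer(300, 60))
```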

However, as transistors shrink, their gates become thinner, which increases the risk of electrical leakage. It also becomes harder to achieve a good yield on the wafer, especially with complex designs, which translates into fewer functional chips. The jump to 2 nm could be very difficult, since it brings us closer to the physical limits of silicon, so it will be interesting to see how the industry fares in this new adventure.



These are the main differences between Microsoft Azure, Google Cloud and AWS

By Thomas Grimm

The top three public cloud providers are AWS, Microsoft Azure and Google Cloud. All three have quite a lot in common: the main plans they offer their customers are similar in the types of services included, their prices and billing models are quite alike, and they target the same kinds of customers, among other things.

Of course, the fact that their cloud plans are similar does not mean there are no differences. In certain respects they differ in important ways and have distinct characteristics. These are the main ones:

1 – Cloud-assisted code writing

All three major public cloud providers offer integrated development environments, or plugins, with which developers can write code by hand. But the same is not true of software development tools assisted by Artificial Intelligence. So far only one of them offers such a tool, with AI models that help developers generate code automatically: AWS, which since 2022 has offered Amazon CodeWhisperer.

With this tool, AWS customers who work in development receive AI-driven recommendations as they write code, with the aim of making it easier for them to build more efficient connections to cloud resources.
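As an illustration of the kind of cloud "plumbing" code such an assistant typically suggests, here is a minimal, hypothetical sketch using the AWS SDK for Python (boto3); the bucket name and helper function are assumptions for the example, not part of CodeWhisperer itself.

```python
import boto3

# Create an S3 client using the credentials configured in the environment.
s3 = boto3.client("s3")

def list_bucket_objects(bucket_name: str) -> list[str]:
    """Return the keys of every object stored in the given S3 bucket."""
    keys = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket_name):
        for obj in page.get("Contents", []):
            keys.append(obj["Key"])
    return keys

# Hypothetical usage with a placeholder bucket name.
print(list_bucket_objects("my-example-bucket"))
```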

Microsoft offers a tool that is in many ways similar to CodeWhisperer: Copilot. But it is not part of the Azure cloud; it is part of GitHub. This means Copilot does not integrate in any specific way with Azure, nor does it specifically address development needs related to Microsoft Azure. Of course, Microsoft may decide to launch a tool of this type in the future, since it has chosen to bet heavily on AI in its cloud, as shown by the recent launch of Azure OpenAI and the integration of tools like ChatGPT into several of its cloud products.

As for Google Cloud, it is not yet firmly committed to AI-assisted development products, and neither is Google as a whole, at least for now. The momentum Artificial Intelligence is gaining, which is pushing the big technology companies to accelerate their AI-related projects, may in the short and medium term give the cloud a boost from Artificial Intelligence in general, and from AI-powered code development for the cloud in particular. Not only in Google Cloud, but also in the other public cloud providers that currently lack tools of this type.

2 – Platform as a Service (PaaS) cloud offerings

All the major public cloud providers offer some version of Platform as a Service (PaaS), a cloud computing model in which IaaS (Infrastructure as a Service) is combined with software development and deployment tools, all with the goal of letting companies develop and run applications.

Of the three, the one with the most complete offering of PaaS solutions is Azure, through services such as Azure App Service. AWS also has a fairly notable offering, with services like Elastic Beanstalk, and Google Cloud has a service called Cloud Run.

But these two services are not as versatile or complete as Azure's PaaS offering, neither in the uses they can be put to nor in the flexibility they provide for developing and running applications. Therefore, if a developer needs PaaS services in the cloud, Azure is the most appropriate choice for them.
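To make the PaaS idea concrete, here is a minimal sketch of the sort of web application these services host; it is a generic Flask app under assumed defaults, and the port and entry point would be adapted to App Service, Elastic Beanstalk or Cloud Run rather than being specific to any of them.

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Trivial endpoint; the PaaS platform handles servers, scaling and TLS.
    return "Hello from a PaaS-hosted app"

if __name__ == "__main__":
    # Local test run only; in production the platform's web server
    # (e.g. gunicorn) would serve `app` instead.
    app.run(host="0.0.0.0", port=8080)
```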

3 – Cloud Data Loss Prevention Services

Cloud data loss prevention (DLP) solutions are intended to help companies and professionals discover and protect the sensitive data they store in the cloud. Here, AWS, Azure and Google Cloud all offer some type of system or tool designed for this purpose.

Azure's offering, however, is based on a Microsoft product that is not specifically focused on protecting Azure, so it cannot really be considered a native Azure cloud data loss prevention solution; it is a generic tool that happens to support Azure. Both AWS and Google Cloud offer a native DLP platform, so they beat Microsoft's cloud in this area.
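As an example of what a native DLP service looks like from a developer's point of view, here is a minimal sketch based on Google Cloud's DLP Python client; the project ID and sample text are placeholders, and the exact request shape should be checked against the current documentation.

```python
from google.cloud import dlp_v2

# Placeholder project ID; replace with a real GCP project.
parent = "projects/my-example-project"

client = dlp_v2.DlpServiceClient()
response = client.inspect_content(
    request={
        "parent": parent,
        "inspect_config": {
            "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
            "include_quote": True,
        },
        # Sample text to scan for sensitive data.
        "item": {"value": "Contact me at jane.doe@example.com or 555-0100."},
    }
)

for finding in response.result.findings:
    print(finding.info_type.name, finding.likelihood, finding.quote)
```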

4 – Hybrid cloud solutions

Integrating a private infrastructure with a public cloud such as those offered by the aforementioned providers gives rise to a hybrid cloud. However, the way these integrations are created and managed differs in each case. In Google Cloud, creating and managing hybrid clouds requires the Anthos platform, which is based on Kubernetes.
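To give an idea of the Kubernetes knowledge that an Anthos-based approach assumes, here is a minimal sketch using the official Kubernetes Python client to inspect deployments in a cluster; the kubeconfig and namespace are assumptions for the example.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig, which could point at an
# Anthos-managed cluster just as well as any other Kubernetes cluster.
config.load_kube_config()

apps = client.AppsV1Api()
for dep in apps.list_namespaced_deployment(namespace="default").items:
    ready = dep.status.ready_replicas or 0
    print(f"{dep.metadata.name}: {ready}/{dep.spec.replicas} replicas ready")
```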

As for AWS and Azure, their solutions for building hybrid clouds are not based on Kubernetes. AWS's is called Outposts, while Azure offers two options: Arc and Stack. No knowledge of Kubernetes is therefore required to build hybrid clouds with them.

There are also differences in the flexibility these three providers offer when it comes to building hybrid clouds. Outposts is the most restrictive, requiring customers to purchase hardware outright. Azure's solutions, meanwhile, are compatible with virtually any type of hybrid cloud infrastructure.

These are the main differences we can find in the offerings of the three leading public cloud providers. In most cases, as we have seen, they all provide some kind of solution, but not always with the same versatility, robustness and power.

Any remaining differences lie in much narrower areas, such as data storage or hosted virtual machines, where they all have similar options and very similar plans. But if you need something extra in any of the aspects we have discussed, it is worth examining closely which option is best for you.

