The Galician Supercomputing Center, CESGA, orders its quantum supercomputer from Fujitsu

CESGA (the Galicia Supercomputing Center) has commissioned its quantum supercomputer from Fujitsu, in which it will invest 14 million euros. This is a step that will contribute decisively to the development of the Galician Quantum Technologies Pole, and it is also the most determined commitment to quantum computing in Spain to date. The purchase has been made possible by funds provided by the Galician Agency for Innovation of the Xunta de Galicia, as well as by the European Union, within the framework of the REACT-EU axis of the FEDER Galicia 2014-2020 operational program, part of the EU's response to the COVID-19 pandemic.

In addition, the purchase of this quantum system is framed within the activities of the Galician Quantum Technologies Pole, which aims to place Galicia in a prominent position in quantum computing. The activity of this Pole is divided into five areas: academia; research and technology centers and singular entities; economy and business; society; and infrastructure.

When it goes into operation, this supercomputer will be one of the first available to the research community in Southern Europe. It will be installed at CESGA over the course of 2023 and will be used to support cutting-edge research by researchers, technology centers and companies.

Among other things, it will be used to develop new quantum algorithms in important areas such as the simulation of physical and chemical phenomena, data encryption, optimization, machine learning and the solution of complex problems, as well as medicine, Artificial Intelligence, robotics, materials science and cybersecurity.

This purchase covers four different elements: a quantum computer, a high-performance computer, a quantum algorithm emulator, and a storage system that keeps the results of the new algorithms for analysis and validation. Its most important element is the quantum computer itself, which is flexible and integrated with the other computing elements.
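
To give an idea of what the quantum algorithm emulator component does, here is a minimal, purely illustrative sketch (unrelated to CESGA's actual software) that emulates a tiny two-qubit circuit on a classical machine with NumPy:

    # Minimal sketch of quantum algorithm emulation: simulating a two-qubit
    # circuit (Hadamard + CNOT) classically with NumPy.
    import numpy as np

    # Single-qubit Hadamard gate and the two-qubit CNOT gate
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    # Start in |00>, apply H to the first qubit, then CNOT
    state = np.array([1, 0, 0, 0], dtype=complex)
    state = np.kron(H, np.eye(2)) @ state
    state = CNOT @ state

    # The result is the Bell state (|00> + |11>)/sqrt(2): measurement
    # probabilities concentrate on 00 and 11.
    print(np.abs(state) ** 2)  # [0.5, 0, 0, 0.5]

Real workloads scale far beyond what classical emulation can handle, which is precisely why the emulator is paired with the quantum computer itself for development and validation.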

Currently, CESGA is participating in a European R+D+i project, financed by the Quantum Flagship program for the development and deployment of quantum technologies, to develop useful algorithms in collaboration with other European entities, especially the University of A Coruña, with which it researches algorithms for Artificial Intelligence and finance. It is also designing test programs to measure the real performance of quantum computers.

CESGA is also participating in the Quantum Spain project, which aims to deploy quantum computing research capacity across the Spanish Supercomputing Network and to create a quantum research community in Spain. In addition, Galicia participates in the Complementary Plan for Quantum Communications, a collaborative initiative between the Government of Spain and several autonomous communities that seeks to improve Spain's position in quantum information technologies, within the framework of the Quantum Flagship initiative.

Intel confirms that Emerald Rapids will arrive in the second half of this year

The chip giant has reaffirmed the most important points of its roadmap and maintained its commitment to meeting the originally scheduled dates. This is very important for Intel; in fact, it is part of the strategy that its CEO, Pat Gelsinger, outlined in an interview when he alluded to executing on time and as planned.

It is not difficult to understand: it is useless to have a winning architecture on an advanced node with a huge transistor density if it is not viable on the wafer, whether for technical or economic reasons, and you therefore have to delay it repeatedly. This is a problem Intel has run into on numerous occasions. Remember, for example, what happened with the transition to 10 nm, and the delays suffered by the new Sapphire Rapids processors.

In fairness, it must be recognized that in all these cases Intel has sinned through ambition; that is, it has had excellent ideas for designing chips that were very advanced for their time, but on jumping to the wafer it has come up against the harsh reality that it had bitten off more than it could chew. Sapphire Rapids is one of the best examples we can cite today: these are very advanced processors with numerous specialized accelerators that start from a truly unique approach. A pity that they suffered so many delays.

Intel has internalized this problem, and for this reason it has reaffirmed, as we said at the beginning of this article, its commitment to its latest roadmap, and it has done so at all levels. This means that Meteor Lake, the first general-consumer architecture on the Intel 4 (7 nm) node, will arrive later this year, and that Emerald Rapids, the successor to Sapphire Rapids, will also be released in the second half of this year. Granite Rapids will move to 2024.

According to the latest information we have had access to, Emerald Rapids will be a minor evolution of Sapphire Rapids: it will maintain the MCM design based on interconnected blocks, it will have a maximum of 64 cores and 128 threads, it will also feature specialized accelerators, and it will be manufactured on the Intel 7 node.

This leaves us with a truly frantic pace of releases, since, as you can see, we are moving in annual cycles, something that undoubtedly represents a major challenge. However, if Intel manages to deliver, it is clear that it will significantly improve its position and make things very difficult for AMD.

IBM and Rapidus want to manufacture semiconductors at 2 nm in 2025

Competition in the world of semiconductors is fierce. TSMC is the undisputed leader when it comes to chip manufacturing, since the designs are carried out by its customers, but IBM is not about to be left behind in the nanometer race, and has already defined a strategy to improve its position in this sector: an alliance with the Japanese company Rapidus.

This alliance with the Japanese semiconductor consortium Rapidus has, as its main objective, establishing a 2 nm chip production line in the first half of 2025. This first production line will work with prototypes, which means they will not be commercial units; it is therefore a risk phase that will represent a very significant investment for both companies.

If that first move goes well, IBM and Rapidus will put themselves in a prime position within the semiconductor industry and will be right up there with TSMC, since the Taiwanese company also plans to start producing chips on the 2 nm node by 2025, as long as things go according to its own forecasts, obviously.

Right now we are at an important transition point. The 5 nm node is the most widespread today, but the jump to the 3 nm node will become a reality very soon, and this year its adoption by some giants of the sector will begin. Barring a last-minute surprise, Apple will be the first to launch a smartphone SoC based on TSMC's 3 nm node, the Apple A17, which will be used in the iPhone 15 Pro and iPhone 15 Pro Max.

Leaps in the manufacturing process are important because they reduce the size of the transistors and make it possible to introduce performance and efficiency improvements. They also reduce the space occupied on the silicon wafer, which in the end translates into a greater number of chips per wafer, with everything that entails in terms of manufacturing costs.
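
As a rough, purely illustrative calculation (the die sizes below are hypothetical), the common dies-per-wafer approximation shows how shrinking a die increases the number of candidate chips obtained from the same 300 mm wafer:

    # Rough illustration of chips per wafer using the common dies-per-wafer
    # approximation (accounts for edge losses, ignores defects and yield).
    import math

    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        r = wafer_diameter_mm / 2
        return math.floor(
            math.pi * r ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
        )

    # Shrinking a hypothetical 100 mm^2 die to 80 mm^2 yields noticeably
    # more candidate chips from a single 300 mm wafer.
    print(dies_per_wafer(100))  # ~640
    print(dies_per_wafer(80))   # ~809

More chips per wafer spreads the fixed cost of processing each wafer over more units, which is exactly the economic incentive behind each node shrink.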

However, reducing the size of the transistors makes the logic gates thinner, and this increases the risk of electrical leakage. It is also more difficult to achieve a good yield on the wafer, especially with complex designs, which translates into fewer functional chips. The jump to 2 nm could be very difficult, since it is a value that brings us closer to the physical limits of silicon, so it will be interesting to see how the industry fares in this new adventure.

These are the main differences between Microsoft Azure, Google Cloud and AWS

The top three public cloud providers are AWS, Microsoft Azure and Google Cloud, and their offerings have quite a few similarities. The main plans they offer their clients are very alike in terms of the types of services included, their prices and billing models are also quite similar, and they target the same kinds of customers, among other things.

Of course, the fact that their cloud plans are similar does not mean they have no differences. In certain respects they differ in important ways and have distinct characteristics. These are the main ones:

1 – Cloud-Assisted Code Writing

All three major public cloud providers offer integrated development environments, or plugins, with which developers can write code manually. But the same is not true of software development tools assisted by Artificial Intelligence. So far only one offers such a tool, with AI models that help developers generate code automatically: AWS, which since 2022 has offered Amazon CodeWhisperer.

With this tool, AWS customers who work in development get AI-driven recommendations while writing code, with the aim of making it easier for them to build more efficient connections with cloud resources.
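
As an illustration of the kind of snippet such an assistant typically suggests (this example is ours, not CodeWhisperer output; the bucket name is hypothetical), here is a short AWS SDK call that lists the objects in an S3 bucket:

    # Example of the kind of cloud-resource code an AI assistant can suggest:
    # listing the objects in an S3 bucket with boto3, the AWS SDK for Python.
    import boto3

    s3 = boto3.client("s3")

    # Paginate through the bucket so large buckets are handled correctly
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket="example-bucket"):
        for obj in page.get("Contents", []):
            print(obj["Key"], obj["Size"])

The value of the assistant lies in proposing this kind of boilerplate, with pagination and error-prone details already in place, as the developer types.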

Microsoft offers a tool similar in many respects to CodeWhisperer: Copilot. But it is not part of the Azure cloud; it is part of GitHub. This means that Copilot does not integrate in any specific way with Azure, nor does it specifically address development needs related to Microsoft Azure. Of course, Microsoft may decide to launch a tool of this type in the future, since it has decided to bet heavily on AI in its cloud, as shown by the recent launch of Azure OpenAI and the integration of tools like ChatGPT into several of its cloud products.

As for the Google Cloud platform, it is not yet firmly committed to AI-assisted development products, and neither is Google, at least for now. The momentum that Artificial Intelligence is gaining, which is leading the big technology companies to accelerate their AI-related projects, may in the short and medium term give the cloud a boost from Artificial Intelligence in general, and from AI-powered code development for the cloud in particular. Not only in Google Cloud, but also in the rest of the public cloud providers that do not yet have tools of this type.

2 – Platform as a Service (PaaS) cloud offerings

All major public cloud providers offer some version of Platform as a Service (PaaS), a cloud computing model in which IaaS (Infrastructure as a Service) is integrated with software development and deployment tools, all with the aim of allowing companies to develop and run applications.

Of the three, the one with the most complete PaaS offering is Azure, through systems like Azure App Service. AWS also has quite a notable offering, with services like Elastic Beanstalk, and Google Cloud has a service called Cloud Run.
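
To make the model concrete, here is a minimal sketch (using Python and Flask purely as an example stack) of the kind of web application these PaaS services are designed to deploy and run; the provider supplies the runtime, scaling and deployment, and the developer supplies only the code:

    # Minimal web application of the kind a PaaS (App Service, Elastic
    # Beanstalk, Cloud Run) takes care of deploying, running and scaling.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "Hello from a PaaS-hosted app"

    if __name__ == "__main__":
        # Locally this starts a development server; on a PaaS the platform's
        # own web server and scaling layer take over.
        app.run(host="0.0.0.0", port=8080)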

But these two services are not as versatile or complete as Azure's PaaS offering, neither in terms of the uses they can be put to nor in the flexibility they provide for developing and running applications. Therefore, if a developer needs PaaS services in the cloud, Azure is the most appropriate choice.

3 – Cloud Data Loss Prevention Services

Cloud data loss prevention (DLP) solutions are intended to help companies and professionals discover and protect the sensitive data they need to store in the cloud. In this case, AWS, Azure and Google Cloud all offer some type of system or tool designed for this.
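
Conceptually, and leaving the providers' actual APIs aside, a DLP scan boils down to detecting and masking sensitive patterns before data is stored. The following sketch illustrates the idea with simple regular expressions; real services use far richer detectors:

    # Conceptual sketch of what a DLP scan does: find sensitive patterns
    # (here, e-mail addresses and card-like numbers) and mask them before
    # the data reaches storage. Illustration only, not a provider API.
    import re

    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact(text):
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    print(redact("Contact jane@example.com, card 4111 1111 1111 1111"))
    # Contact [EMAIL], card [CARD_NUMBER]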

That said, Azure's offering is based on a Microsoft product that is not specifically focused on Azure, so it cannot be considered a native cloud data loss prevention solution for Azure; it is a generic tool that Azure supports. Both AWS and Google Cloud offer a native DLP platform, so they beat Microsoft's cloud in this case.

4 – Hybrid cloud solutions

Integrating a private infrastructure with a public cloud, such as those offered by the aforementioned providers, gives rise to a hybrid cloud. However, the way these integrations are created and managed differs in each case. In Google Cloud, hybrid clouds are created and managed with the Anthos platform, which is based on Kubernetes.
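
Because an Anthos cluster is managed like any other Kubernetes cluster, the standard tooling applies. As a minimal sketch (assuming the official kubernetes Python client and a kubeconfig already pointing at the cluster), inspecting what is running looks like this:

    # Minimal sketch of working with an Anthos-managed cluster through the
    # standard Kubernetes API, using the official kubernetes Python client.
    from kubernetes import client, config

    config.load_kube_config()   # read credentials from ~/.kube/config
    v1 = client.CoreV1Api()

    # List the pods running across all namespaces of the cluster
    for pod in v1.list_pod_for_all_namespaces().items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)

This is also why Kubernetes knowledge is a prerequisite for the Google Cloud approach, unlike the AWS and Azure options described below.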

As for AWS and Azure, their solutions for building hybrid clouds are not based on Kubernetes. The one that allows you to do it in AWS is called Outposts, while in Azure there are two options: Arc and Stack. So no knowledge of Kubernetes is required to build hybrid clouds with them.

In addition, there are other differences in the level of flexibility these three providers offer when it comes to creating hybrid clouds. Outposts is more restrictive, requiring customers to purchase hardware directly from AWS, while Azure's solutions are compatible with virtually any type of hybrid cloud infrastructure.

These are the main differences that we can find in the cloud offerings of the three main public cloud providers. In most cases, as we have seen, they all offer some kind of solution, but not always with the same versatility, solidity and power.

As for the rest of the differences there may be, they lie in much smaller areas, such as data storage or hosted virtual machines, where they all have similar options and very similar plans. But if you need something extra in any of the aspects we have discussed, it is worth analyzing carefully which option is best for you.
