Windows
How to make a Windows 11 PC turn off automatically without third-party apps
One of the features I miss most in Windows 11 compared to macOS is the ability to schedule an automatic shutdown as just another system option. It can be done in Windows 11, and although it takes a few steps, it means we don't have to depend on third-party applications.
Windows 11 lets us enable this, choosing the day, the shutdown time, the frequency and so on. The process may seem somewhat complex, so we are going to detail it step by step so that you can set it up.
Shut down Windows 11 automatically
The first thing is to open the Windows 11 Start menu and type "Task Scheduler". Among the results, choose the app with that name.
This tool lets you create automated tasks in Windows. Click the option "Create Basic Task" and a window will open in which the process begins.
The wizard has several steps. The first is to assign a name to the task; in my case, "Auto power off".
Next, decide how often you want the task (the automatic shutdown) to repeat: every day (Daily), once a week, once a month, and so on. You will also set the time at which the computer should turn off, the start date, and how many days apart the action should repeat.
Now it's time to tell Windows what action to take. Select the option "Start a program" and click "Next".
At that point, click the "Browse" button to open File Explorer, navigate to C:\Windows\System32 and find the shutdown.exe application. Double-click it to select it.
Back on the previous screen, check that C:\Windows\System32\shutdown.exe appears in the box. If everything is fine, click Next to confirm the step.
Finally, a summary is shown so you can verify the configuration; all that remains is to click the "Finish" button to confirm the scheduled shutdown.
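The wizard steps above can also be expressed as a single `schtasks` command. Here is a minimal sketch in Python that just builds the equivalent command line; the task name and the 23:00 time are example values, not something the article specifies.

```python
# Build the schtasks command equivalent to the Task Scheduler wizard steps.
# "Auto power off" and "23:00" are example values from this walkthrough.
cmd = [
    "schtasks", "/Create",
    "/TN", "Auto power off",                              # task name
    "/TR", r"C:\Windows\System32\shutdown.exe /s /t 0",   # action to run
    "/SC", "DAILY",                                       # frequency
    "/ST", "23:00",                                       # shutdown time
]
print(" ".join(cmd))
```

On a Windows machine, running this list with `subprocess.run(cmd, check=True)` (or pasting the printed line into a Command Prompt) should create the same scheduled task without opening the Task Scheduler GUI.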
Cover image | izzyestabroo
A look at the key NVIDIA news at GTC 2023
NVIDIA presented a long list of announcements yesterday at GTC 2023, a very important event that has become a reference point in the professional technology sector.
Since the list of new products is very long, and it would be counterproductive to collect them all in one huge article, we are going to focus on the four most important announcements and detail their key points.
In general terms, the most interesting announcements NVIDIA has made this year focus above all on hardware, and are not limited to traditional computing but extend into quantum computing. That is precisely where we open this article, although we should not forget that the green giant has also confirmed a major push to bring AI to every industry, and has strengthened its collaborations with the sector's main players.
NVIDIA DGX Quantum
This is the first GPU-accelerated quantum computing system, and it represents a major inflection point: a next-generation computing solution powered by NVIDIA's Grace Hopper superchip and the open-source CUDA Quantum programming model, coupled with OPX, the world's most advanced quantum control platform, from Quantum Machines.
This system unites the best of both worlds, classical and quantum computing, giving researchers all the potential they need to create extraordinary applications without giving up advanced features that are key in quantum computing, such as calibration, control, quantum error correction, and hybrid algorithms.
At the hardware level, the NVIDIA DGX Quantum has a Grace Hopper superchip, as we said at the beginning, connected via a PCIe interface to a Quantum Machines OPX+ system. This reduces latency to under a microsecond and enables near-instant communication between GPU cores and quantum processing units.
Five new GPUs for professional laptops
NVIDIA has also confirmed the release of five new professional laptop GPUs, based on the Ada Lovelace architecture and designed for more powerful, efficient and lighter workstations. The RTX 5000 is the most powerful of the five: it has 9,728 shaders, 76 RT cores and 304 tensor cores, comes with 16 GB of GDDR6 on a 256-bit bus, and has a configurable TGP of between 80 and 175 watts.
The RTX 4000 sits a step below with 7,424 shaders, 56 RT cores and 232 tensor cores; it has 12 GB of GDDR6 on a 192-bit bus and its TGP can be configured between 60 and 175 watts. The RTX 3500 fits in the upper-mid range with 5,120 shaders, 40 RT cores and 160 tensor cores, 12 GB of GDDR6 on a 192-bit bus, and a TGP configurable between 60 and 140 watts.
The RTX 3000 is a mid-range model with 4,608 shaders, 36 RT cores, 144 tensor cores, a 128-bit bus, 8 GB of GDDR6, and a configurable TGP from 35 to 140 watts. Finally we have the RTX 2000, which has 3,072 shaders, 24 RT cores, 96 tensor cores, a 128-bit bus, and 8 GB of GDDR6; its TGP can be set between 35 and 140 watts. NVIDIA has also announced an RTX 4000 SFF Ada for desktops, which is a compact version equipped with 6,144 shaders.
NVIDIA H100 NVL, a dual GPU specialized in Large Language Models (LLM)
The green giant has also surprised us with a Hopper-based dual-GPU accelerator specifically designed to work with large language models (LLMs). The term may not sound familiar to some of you, and you may have even raised an eyebrow, but if I tell you that ChatGPT is built on one, you will surely understand the importance of this new hardware.
The NVL acronym refers to NVLink, the interconnect technology that joins the two GPUs that make up this accelerator; a total of three Gen4 bridges are used. At the specification level, this is an impressive solution: it offers up to 68 TFLOPS in FP64, reaches 134 TFLOPS in FP64 with its tensor cores, and can deliver up to 1,979 TFLOPS in TF32 with those cores.
In FP16 the tensor cores deliver 3,958 TFLOPS, and at INT8 we get 7,916 TOPS. The numbers speak for themselves, and yes, they are truly impressive. The NVIDIA H100 NVL has a 6,144-bit bus, features 188 GB of 5.1 GHz HBM3 memory, and offers up to 7.8 TB/s of bandwidth. LLMs need large blocks of memory and high bandwidth, and NVIDIA hasn't skimped on either.
Generative AI: the evolution of AI to become a creator
In recent days we have heard the expression Generative Artificial Intelligence, or generative AI, everywhere. That is mostly thanks to OpenAI's latest advances in its GPT model, Microsoft's improvements to its products, and systems like Claude. But few have stopped to explain in detail what generative Artificial Intelligence involves, or its foundations. Even so, it powers more and more systems, from ChatGPT to DALL-E.
What is Generative Artificial Intelligence
This type of Artificial Intelligence is called generative because it is able to create something that did not exist before. That is its main difference from discriminative Artificial Intelligence, which is dedicated to distinguishing between different types of input, answering questions that involve identifying or choosing between alternatives.
For example, a discriminative AI can answer a question about an image, saying whether it shows one thing or another. But it cannot create an image from simple instructions, something generative AI can do.
Despite all the recent noise, generative AI has been around for quite some time. It goes back, in fact, to Eliza, a chatbot that was quite popular in its day and pretended to be a therapist you could talk to. It was created at MIT and launched in 1966. It was a revolution then, despite being quite rudimentary, and years of work and research have evolved generative AI so much that Eliza now looks like a beginner's project.
The arrival of DALL-E, Stable Diffusion and, above all, ChatGPT has turned Artificial Intelligence upside down, along with the general public's perception of it. The first two generate realistic images from simple instructions.
The third is even capable of holding a text conversation with humans and providing certain types of information. It is likely to become multimodal soon, thanks to the evolution of its underlying model, GPT, to version 4. For now it only responds with text, but in the future it may also work with multimedia elements.
We usually refer to these systems, and others like them, as models. That name is no accident: all three attempt to simulate, or model, some aspect of the real world based on a set of information about it, sometimes a very large one.
How does Generative Artificial Intelligence work?
This type of Artificial Intelligence uses machine learning to process large amounts of data, such as images or text, most of it taken from the Internet. After processing it, the model can determine which things are most likely to appear alongside others. That is, it generates text by predicting which word is most likely to come after the words it has already produced.
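That next-word idea can be illustrated with a toy sketch (the tiny corpus and the function name here are invented for illustration, and are vastly simpler than a real language model): count which word most often follows each word, then predict the most frequent follower.

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for "most of the Internet"
corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each word, which words follow it and how often
followers = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    followers[w1][w2] += 1

def predict_next(word):
    # Predict the word most often seen after `word` in the corpus
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

Real models like GPT do this over billions of word fragments with learned probabilities rather than raw counts, but the core idea of "most likely continuation" is the same.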
Most of the programming work in generative AI goes into creating algorithms that can distinguish the things that interest the AI's creators: in the case of ChatGPT, words and phrases; in the case of DALL-E, graphics and visual elements.
Above all, keep in mind that this type of AI generates responses and outputs by weighing a huge dataset that was used to train it. With it, the AI responds to requests and instructions with images or phrases that, based on what is in that dataset, it deems likely to be appropriate.
The autocomplete that appears when you type on your smartphone, or in Gmail, suggesting words or parts of sentences, is a low-level generative Artificial Intelligence system. ChatGPT and DALL-E are quite a bit more advanced.
Training generative AI models
The process by which models capture and process all the data they need to function is known as training. Two techniques are commonly used, each more or less suitable depending on the model. For example, ChatGPT uses what is known as a Transformer (hence the T in its name).
A transformer derives meaning from long sequences of text. In this way, the model learns what the different words and semantic components are, how they relate to each other, and how likely they are to appear next to one another. These transformers run unsupervised over a large corpus of natural-language text, in a process called pretraining (the P in ChatGPT). Once that process is finished, the humans working with the model fine-tune it through interactions with it.
Another technique used to train generative Artificial Intelligence models is the Generative Adversarial Network (GAN). Here, two algorithms compete with each other. One generates text or images based on probabilities derived from a large dataset. The other is a discriminative Artificial Intelligence, trained by humans to assess whether the output is real or AI-generated.
The generative AI repeatedly tries to fool the discriminative one, automatically adapting to produce more convincing output. Once it achieves a consistent, solid win rate over the discriminative model, the latter is tuned again by humans, and the process starts all over.
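That adversarial loop can be caricatured in a few lines. This is only an invented numeric sketch, far simpler than a real GAN (which trains two neural networks with gradient descent): a "generator" proposes values, a "discriminator" rejects values far from the real data, and every rejection nudges the generator closer.

```python
import random

random.seed(42)
real_mean = 5.0  # the "real data" the generator tries to imitate

def discriminator(x, tolerance=0.5):
    # Accept a sample as "real" if it is close to the real data
    return abs(x - real_mean) < tolerance

guess = 0.0  # the generator's current estimate
for _ in range(200):
    sample = guess + random.gauss(0, 0.3)  # generator produces a fake
    if not discriminator(sample):
        # The discriminator caught the fake: move the generator
        # a little toward the real data
        guess += 0.1 * (real_mean - guess)

# After many rounds, the generator's samples routinely fool
# the discriminator
print(round(guess, 2))
```

The structure mirrors the text above: generation, judgment, and adaptation repeat until the fakes pass, at which point a real pipeline would retrain the discriminator and continue.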
Keep in mind that although humans are involved in this training, most of the learning and adaptation happens automatically. Many iterations are needed to get models to the point where they produce interesting results, and the process requires a lot of computing power.
Negative points and use cases
Although the results of generative Artificial Intelligence models are impressive, not everything about them is good or beneficial. Besides their limitations, they can have negative impacts in several sectors. Text models in particular remain prone to so-called hallucinations, which can lead them to make false claims and even to berate and insult humans.
As for those negative impacts, it is clear that the ability to create content easily and cheaply can affect the writing of content that is not particularly creative, and in many cases the output can fool the humans who read it. Many students have already used these tools for their schoolwork, even at the university level. Email spammers are also using them to write messages and send them to thousands of people with minimal effort.
A whole debate has also arisen over the intellectual property of images or texts generated by a generative AI. There are many discussions about who owns them, and the related legal issues are only beginning to be debated.
Another negative point is that, in many cases, these AIs may be biased, since their answers are entirely conditioned by the data they were trained on. If that data is biased and the models work without rules or limits, we can end up with sexist, racist or classist AIs. To avoid this, OpenAI, the creator of ChatGPT, equipped its model with safeguards against these biases before giving the public access to the chatbot.
Despite all this, generative AI has many use cases. ChatGPT, for example, can extract information from very large datasets to give useful answers to questions asked in natural language. This makes it very attractive to search engines, which are already rushing to test its integration: not only general-purpose engines, which were the first to approach it, but also more specific, sector-focused ones.
It can also be used for code generation. Large language models can understand programming languages in the same way they understand spoken languages. And while generative AI is unlikely to replace developers in the short to medium term, it can help improve their productivity as an assistant.
The same goes for content creation. The arrival of these models has worried many of those who write texts and content for a living, such as editors and marketing professionals. But for them, these models are also an opportunity to speed up their work and shed the most repetitive tasks, such as writing campaign emails or descriptive content for a web page. As long as the texts do not require much creativity and can be created from a specific set of instructions and data, systems like ChatGPT can take care of them.
In the future, generative Artificial Intelligence is likely to transform some sectors almost completely as it advances, or even eliminate certain jobs. But for now, and it seems for the short term at least, those jobs will continue to be done by humans, even if they lean on Artificial Intelligence.
Intel Releases F-Tile-Enabled Intel Agilex 7 FPGAs
The chip giant has launched the Intel Agilex 7, a new generation of FPGA (Field Programmable Gate Array) solutions with which the Santa Clara company has once again opted for high versatility and a high level of performance.
According to Intel, the Agilex 7 chips are equipped with the fastest FPGA transceivers available on the market today, and have been designed to help different user profiles tackle the most bandwidth-demanding challenges, which are typically found, one way or another, in data-intensive industries. The two clearest examples are data centers and high-speed networks.
These new FPGAs have been built with cloud, network and embedded systems in mind, and are F-Tile-enabled. A look at the performance figures is revealing: they can reach speeds of up to 1.6 Tbps, work with 25/50G passive optical networks, and support different broadcast standards.
Shannon Poulin, Intel corporate vice president and general manager of the Programmable Solutions Group, commented:
“Intel’s Agilex 7 with F-Tile is loaded with transceivers that offer more flexibility, bandwidth and performance data rate than any other FPGA on the market today. Coupled with Intel manufacturing and our supply chain resilience, we are offering multiple industry-leading products and capabilities that our customers and the industry require to address a wide variety of critical business needs.”
Intel Agilex 7 FPGAs are manufactured on the Intel 10 nm node, also known as Intel 7, which means they use a state-of-the-art process optimized for a good balance of performance and power consumption. Compared to Intel's previous generation, these new FPGAs double the bandwidth per channel while reducing power and maintaining a compact, functional form factor. This was precisely one of Intel's most important announcements at MWC 2023.