The peculiar origin of the USB memory

Thomas Grimm

One would have to go back to the year 2000, when the Singaporean company Trek unveiled at a German trade fair a memory chip encased in plastic and fitted with a Universal Serial Bus (USB) connector. It held 8 megabytes and required no external power supply, since it drew power directly from the computer it was plugged into.

Within a short time, the USB stick attracted hundreds of sample orders. Its impact was such that in just four months Trek achieved exponential growth on the stock market and manufactured and sold more than 100,000 ThumbDrives.

Was Trek the pioneer?

In April 1999, the Israeli company M-Systems filed a patent application titled 'Architecture for a Universal Serial Bus-based PC flash disk', by scientists Dov Moran, Oron Ogdan and Amir Ban, which was granted a year later. In 2000, 8 MB storage devices from M-Systems began to be sold in the US by IBM, but under the name DiskOnKey.

IBM has claimed invention of the device based on a confidential internal report from 2000 written by one of its employees, Shimon Shmueli, while self-proclaimed inventors have also appeared in Malaysia and China.

It is true that by 1995 flash memory had become cheap and robust enough for consumers, and the circulation of data skyrocketed thanks to the World Wide Web. However, even though the idea may have been formulated in several places simultaneously, Trek presents the most compelling account of the origin of the USB stick.

Trek was not a household name in 2000, and the inventor of the USB flash drive and CEO of the company, Henn Tan, was not famous compared with hardware developers like Robert Noyce or Steve Jobs. Companies such as Toshiba and IBM licensed the device from Trek, while others copied it without permission. Even so, the drive's popularity never rubbed off on Tan.

Tan is the third of six children from a humble family in the Geylang neighborhood of Singapore. Although he was the first in his family to attend high school, he proved rebellious, until a beating for truancy finally led him to focus on his studies. In 1973 he entered the National Service as a military police instructor and later became a machinist for a German multinational.

Tan, a visionary

In the early 1970s, Singapore was home to semiconductor factories, something Tan took advantage of to join NEC as a sales executive in 1977, later moving to Sanyo. With the knowledge he acquired, he ventured into buying Trek in 1995 for a million dollars.

He wanted to create revolutionary products that the big computer companies would buy from him. Tan saw a market opportunity in the launch of flagship laptops by firms such as Apple and IBM in 1991-1992, compounded by growing demand for peripherals (displays, modems, printers, mice, hard drives, CD-ROM drives and more). Many of their electronic components were made in Asia, especially in Singapore, under the OEM system.

After seeing many doors close on him, in 1998 Toshiba named Trek its official design house to make exclusive products. Among them was an MP3 player that used a USB plug to connect to the computer and transfer music.

Of course, Tan gave Toshiba what they asked for, but he also put his researchers to work on a device that could store spreadsheets, images or any other type of file, something that set it apart from everything that had existed until then: the USB stick.

Diskette replacement

Floppy disks arrived at the start of the 1970s at the hands of IBM, first in 8-inch format and later in 5¼-inch and 3½-inch sizes. They replaced cassette tapes, but their storage capacity topped out at about 1.44 MB for double-sided, high-density 3½-inch disks.
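To put those capacities side by side, here is a back-of-the-envelope comparison sketch. It assumes the "1.44 MB" marketing figure (1440 KiB, i.e. 1,474,560 bytes per disk) and binary megabytes for the flash drives; the function name and the 256 MB example are illustrative, not figures from the article.

```python
# How many 3.5" floppies does a flash drive replace?
FLOPPY_BYTES = 1440 * 1024  # "1.44 MB" = 1440 KiB = 1,474,560 bytes

def floppies_replaced(drive_mib: int) -> float:
    """Number of high-density floppies equal to drive_mib binary megabytes."""
    return drive_mib * 1024 * 1024 / FLOPPY_BYTES

print(round(floppies_replaced(8), 1))    # Trek's original 8 MB ThumbDrive: ~5.7 disks
print(round(floppies_replaced(256), 1))  # a hypothetical 256 MB stick: ~182 disks
```

Even the very first 8 MB ThumbDrive held the contents of more than five floppies in a far smaller, faster package.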

In the 90s, alternatives began to be sought, since although computers had incorporated CD-ROM drives in the 1980s, those drives could only read pre-recorded discs, not store data.

In 1994 the Iomega Zip drive arrived, a "superfloppy" that initially stored 100 MB (later versions reached 750 MB), but it was still far less convenient to carry around. The USB memory, by contrast, covered all the needs consumers demanded, displacing its competitors.

So, hero or villain?

Neither the USB stick's flash memory nor its interface was new: flash memory was created by Toshiba engineer Fujio Masuoka in 1980, and Tan did not invent the USB port either, which had existed since 1996. But fusing the two, plus a controller and firmware, inside a plastic box was indeed Tan's idea.

Tan filed a patent application a month before the CeBIT 2000 technology fair to stop the many imitators springing up around the world, especially the Chinese company Netac, which dedicated itself to manufacturing electronic products outside the limits of intellectual property law.

Finally, Trek and Netac reached an agreement under which Trek financed part of the Chinese company's research, although in exchange Netac applied for a patent on the USB memory in its own country.

Like Netac, many copycats around the world wanted to appropriate Trek's invention, the USB memory. Beginning in 2002, Trek filed lawsuits against numerous companies, including Electec and M-Systems. Although it won many of them, a UK court stripped it of the patent there in 2008, and by then many companies were producing USB sticks without Tan's permission.


In 2010 Trek released the Flu Card, a modified USB stick that could wirelessly transmit data between devices or to the cloud. In 2014, it signed agreements with Ricoh and Mattel China to license the Flu Card design.

Trek also tried to enter new markets such as the Internet of Things, cloud technology, and medical and wearable devices.

Trek's fall

ThumbDrive and Flu Card income was not enough to keep the company profitable, and from 2006 it began falsifying its accounts, misleading auditors and shareholders. In August 2022, Tan pleaded guilty to falsifying accounts, which landed him in prison, and his son, Wayne Tan, took over the company as vice president of Trek. The history of the USB memory is an example of how difficult it can be to invent a product and place it on the market.

In 2021, global sales of these devices across all manufacturers exceeded 7 billion dollars, and they are expected to surpass 10 billion dollars by 2028. After all, the USB memory is still used to store data, transfer it and even, unfortunately, spread malware to other devices.


Intel confirms that Emerald Rapids will arrive in the second half of this year

The chip giant has reaffirmed the most important points of its roadmap and maintained its commitment to meeting the originally scheduled dates. This is very important for Intel; in fact, it is part of the strategy its CEO, Pat Gelsinger, described in an interview when alluding to on-time execution.

It is not difficult to understand: it is useless to have a winning architecture on an advanced node with a huge transistor density if it is not viable on the wafer, whether for technical or economic reasons, and you therefore have to delay it repeatedly. Intel has run into this problem on numerous occasions. Remember, for example, what happened with the transition to 10 nm, and the delays suffered by the new Sapphire Rapids processors.

In fairness, it must be recognized that in all these cases Intel's sin was ambition; that is, it had excellent ideas for designing chips that were very advanced for their moment, but on jumping to the wafer it came up against the harsh reality that it had bitten off more than it could chew. Sapphire Rapids is one of the best examples we can give today: these are very advanced processors with numerous specialized accelerators that start from a truly unique approach. A pity they suffered so many delays.

Intel has internalized this problem and, for this reason, has reaffirmed its commitment to its latest roadmap, as we said at the beginning of this article, and it has done so at all levels. This means that Meteor Lake, the first general-consumer architecture on the Intel 4 (7 nm) node, will arrive later this year, and that Emerald Rapids, the successor to Sapphire Rapids, will also launch in the second half of this year. Granite Rapids will move to 2024.

According to the latest information we have had access to, Emerald Rapids will be a minor evolution of Sapphire Rapids: it will keep the MCM design based on interconnected blocks, have a maximum of 64 cores and 128 threads, include specialized accelerators and be manufactured on the Intel 7 node.

This leaves us with a truly frantic release pace, since, as you can see, we are moving in annual cycles, which undoubtedly represents a major challenge. However, if Intel manages to deliver, it is clear that it will significantly improve its position and make things very difficult for AMD.

IBM and Rapidus want to manufacture semiconductors at 2 nm in 2025

Competition in the world of semiconductors is fierce. TSMC is the undisputed leader in chip manufacturing (the designs themselves are carried out by its customers), but IBM is not about to be left behind in the nanometer race, and it has already defined a strategy to improve its position in this sector: an alliance with the Japanese company Rapidus.

This alliance with the Japanese semiconductor consortium Rapidus will have as its main objective establishing a 2 nm chip production line in the first half of 2025. This first line will work with prototypes, meaning they will not be commercial units, so it is a risk phase that will require a very significant investment from both companies.

If that first move goes well, IBM and Rapidus will put themselves in a prime position within the semiconductor industry, right up there with TSMC, since the Taiwanese company also plans to start producing chips on the 2 nm node in 2025, as long as things go according to its own forecasts, obviously.

Right now we are at an important transition stage. The 5 nm node is the most popular today, but the jump to 3 nm will become a reality very soon, and its adoption by some giants of the sector will begin this year. Barring a last-minute surprise, Apple will be the first to launch a smartphone SoC based on TSMC's 3 nm node, the Apple A17, which will be used in the iPhone 15 Pro and iPhone 15 Pro Max.

Leaps in the manufacturing process are important because they reduce the size of the transistors, making it possible to introduce performance and efficiency improvements. They also reduce the space occupied on the silicon wafer, which ultimately translates into a greater number of chips per wafer, with all that this entails in terms of manufacturing costs.
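The "more chips per wafer" argument can be sketched with the standard first-order dies-per-wafer approximation (usable wafer area divided by die area, minus an edge-loss term). The die sizes below are illustrative assumptions, not figures from the article:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order gross dies-per-wafer estimate:
    wafer area / die area, minus a correction for dies lost at the edge."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius * radius / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Shrinking the same design from a hypothetical 100 mm^2 to 70 mm^2
# on a standard 300 mm wafer:
print(dies_per_wafer(300, 100))  # ~640 gross dies
print(dies_per_wafer(300, 70))   # ~930 gross dies
```

A roughly 30% area reduction yields around 45% more gross dies per wafer here, which is why node shrinks matter so much for cost, even before yield is taken into account.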

However, as transistors shrink, logic gates become thinner and the risk of electrical leakage increases. It is also more difficult to achieve good yields on the wafer, especially with complex designs, which translates into fewer functional chips. The jump to 2 nm could be very difficult, since it is a value that brings us closer to the physical limits of silicon, so it will be interesting to see how the industry fares in this new adventure.

These are the main differences between Microsoft Azure, Google Cloud and AWS

The top three public cloud providers are AWS, Microsoft Azure and Google Cloud, and their offerings have quite a few similarities: the main plans they offer clients cover similar types of services, their prices and billing models are alike, and they target the same kinds of customers, among other things.

Of course, the fact that their cloud plans are similar does not mean there are no differences: in certain respects they diverge in important ways and have distinct characteristics. These are the main ones:

1 – Cloud-assisted code writing

All three major public cloud providers offer integrated development environments, or plugins, with which developers can write code manually. The same is not true of software development tools assisted by artificial intelligence: so far only one of them offers such a tool, with AI models that help developers generate code automatically. That one is AWS, which since 2022 has had Amazon CodeWhisperer.

With this tool, AWS customers dedicated to development get AI-driven recommendations as they write code, with the aim of making it easier for them to build efficient connections with cloud resources.

Microsoft offers a tool similar in many respects to CodeWhisperer: Copilot. But it is not part of the Azure cloud; it is part of GitHub. This means that Copilot does not integrate in any specific way with Azure, nor does it specifically address development needs related to Microsoft Azure. Of course, Microsoft may decide to launch such a tool in the future, since it has bet heavily on AI in its cloud, as shown by the recent launch of Azure OpenAI and the integration of tools like ChatGPT into several of its cloud products.

As for the Google Cloud platform, it is not yet firmly committed to AI-assisted development products, and neither is Google itself, at least for now. However, the momentum artificial intelligence is gaining, which is leading the big technology companies to accelerate their AI projects, may in the short and medium term give the cloud a boost from AI in general, and from AI-powered code development for the cloud in particular. Not only in Google Cloud, but also in the other public cloud providers that currently lack tools of this type.

2 – Platform as a Service (PaaS) cloud offerings

All major public cloud providers offer some version of Platform as a Service (PaaS): a cloud computing model in which IaaS (Infrastructure as a Service) is integrated with software development and deployment tools, all with the aim of letting companies develop and run applications.

Of the three, the one with the most complete PaaS offering is Azure, through systems such as its Web App Service. AWS also has quite a notable offering, with services like Elastic Beanstalk, and Google Cloud has a service called Cloud Run.

But these last two services are not as versatile or complete as Azure's PaaS offering, neither in the uses they can be put to nor in the flexibility they provide for developing and running applications. Therefore, a developer who needs PaaS services in the cloud will find Azure the most appropriate choice.

3 – Cloud Data Loss Prevention Services

Cloud data loss prevention (DLP) solutions are intended to help companies and professionals discover and protect the sensitive data they store in the cloud. Here, AWS, Azure and Google Cloud all offer some type of system or tool designed for this.

Azure's offering, however, is based on a Microsoft product that is not specifically focused on protecting Azure, so it cannot be considered a native Azure cloud DLP solution: it is a generic tool that happens to support Azure. Both AWS and Google Cloud offer a native DLP platform, so they beat Microsoft's cloud in this case.

4 – Hybrid cloud solutions

Integrating a private storage infrastructure with a public cloud, such as those offered by the aforementioned providers, gives rise to a hybrid cloud. However, the way these integrations are created and managed differs in each case. In Google Cloud, hybrid clouds are created and managed with the Anthos platform, which is based on Kubernetes.

As for AWS and Azure, their solutions for building hybrid clouds are not based on Kubernetes. In AWS the service is called Outposts, while in Azure there are two options: Arc and Stack. So no knowledge of Kubernetes is required to build hybrid clouds with them.

In addition, these three providers differ in the flexibility they offer for hybrid cloud creation. Outposts is more restrictive, requiring customers to purchase hardware outright, while Azure's solutions are compatible with virtually any type of hybrid cloud infrastructure.

These are the main differences that we can find in the cloud offer of the three main public cloud providers. In most cases, as we have seen, they all offer some kind of solution, but not always with the same versatility, solidity and power.

As for any remaining differences, they lie in much smaller areas, such as data storage or hosted virtual machines, where all three have similar options and very similar plans. But if you need something extra in any of the aspects discussed here, it is worth examining closely which option is best for you.
