Future Developments in ICT Essay Example

Long gone are the days of yearly ‘massive’ performance or feature increases in the consumer space; instead, we have settled for far smaller generational improvements: slightly better graphics in games, 10 gigabytes more storage on your iPod, boot times that are 5 seconds faster. Technological development is slowing, and major breakthroughs are needed to enable larger performance leaps. What we have seen of late is evolutionary rather than revolutionary, and this is showcased nowhere better than in Apple's recent release of the iPad Mini: a smaller iPad based on the internals of the iPhone 4S.

In this essay I will avoid frivolous developments as seen by the consumer and instead focus on the hardware that powers such devices. I aim to convey and explain the incredible developments that lie on the horizon, but to do this I need to delve briefly into the past.

Back in the late 80s, computational power would double in the space of a few years. An excellent example is Intel's 486 microprocessor, which doubled the performance of the previous 386 in every respect, with an extremely short development time. The increases in instructions per cycle (IPC) were massive, owed to a period in which optimisations were plentiful and shortcomings in architecture and design were easy to spot. Slowly, as Intel filled its roadmap with faster and faster processors, the speed increases grew smaller.

Enter the age of the Pentium 4; it was here that Intel recognised that higher clock speeds could yield sizeable performance increases, and so it kept pushing them up, eventually reaching 3.8 GHz.

The downside to these higher clocks was a dramatic drop in efficiency. Power consumption and heat soared to record levels during this period, and the performance returns were again diminishing. The realisation dawned on the company that it could not keep scaling its CPUs up like this, and its hopes of reaching an easy "10 GHz by 2011" were dashed. Moore's law, which observes that the number of transistors on integrated circuits doubles roughly every two years, became extremely hard to uphold.

This was addressed two years later via a parallel approach. Rather than having a single core running at extremely high frequencies, the load could be spread across multiple power-efficient cores. It's easy to visualise this if we imagine a single fast ant trying to build a structure versus an army of slower (but still fast) ants: with more resources to throw at the problem, the work gets done much sooner.
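
To make the ant analogy concrete, here is a minimal sketch of the parallel approach: one big piece of work split across however many cores are available. It is my own illustrative example (a simple summation), not drawn from any product mentioned here, and assumes a C++11 compiler with a standard threading library:

```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Split a large summation into contiguous slices, give each slice to its own
// worker thread, then combine the per-thread partial sums at the end.
int main() {
    const std::size_t n = 10000000;               // toy workload: 10 million ones
    std::vector<int> data(n, 1);
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());

    std::vector<long long> partial(workers, 0);
    std::vector<std::thread> pool;

    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            const std::size_t begin = n * w / workers;
            const std::size_t end   = n * (w + 1) / workers;
            partial[w] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0LL);
        });
    }
    for (auto& t : pool) t.join();

    const long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
    std::cout << "sum = " << total << " using " << workers << " worker(s)\n";
}
```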

This parallelisation was made possible through smaller manufacturing processes (moving from 1 micrometre lithography to 65 nanometres), more on this later. This parallel logic has carried us through to 2013, but a brick wall is starting to appear. Software must make efficient use of the hardware available to it in order to extract maximum performance; it must evolve with the hardware, if not slightly faster! The problem that arises is: can multiple cores complete the same task faster than one? How do you split up a task so as to make the most efficient use of those plentiful resources? The more cores you have in a system, the more threads you need to keep them busy, and doing so is not easy. Before touching shared data, a thread has to acquire a lock, which may mean waiting until another thread releases it. That can lead to serious lock contention, which results in poor scaling, even to the point where more cores (and threads) cause a performance loss instead of a gain.

So where is the future development in all of this? Intel's new Transactional Synchronization Extensions (TSX).
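
The scaling problem is easy to reproduce. The sketch below is a hypothetical worst case of my own devising: every thread must take the same mutex for each tiny update, so beyond a couple of cores the extra threads mostly queue rather than work.

```cpp
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

// Every tiny piece of work requires the same global lock. With perfect scaling
// the elapsed time would stay roughly flat as threads are added (each thread
// does the same fixed amount of work, all in parallel); with one contended
// lock it grows instead, because the threads spend their time waiting.
int main() {
    std::mutex lock;
    long long counter = 0;
    const int increments_per_thread = 1000000;

    for (unsigned threads : {1u, 2u, 4u, 8u}) {
        counter = 0;
        std::vector<std::thread> pool;
        const auto start = std::chrono::steady_clock::now();
        for (unsigned t = 0; t < threads; ++t) {
            pool.emplace_back([&] {
                for (int i = 0; i < increments_per_thread; ++i) {
                    std::lock_guard<std::mutex> guard(lock);   // contended lock
                    ++counter;
                }
            });
        }
        for (auto& t : pool) t.join();
        const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                            std::chrono::steady_clock::now() - start).count();
        std::cout << threads << " thread(s): " << ms << " ms\n";
    }
}
```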

I'll use an example to better describe these new instructions. Imagine a large database table stored in the computer's memory. If a core wants to modify data in the table, the entire table is traditionally locked to prevent corruption (no other core may modify the same piece of data at the same time). Intel's TSX allows the data structure to be left unlocked and worked on concurrently by multiple cores, optimistically assuming that the accesses will not conflict; if they do, the operation is aborted and re-run using the traditional locked approach. On a small scale (8 cores or fewer) the impact is minimal and can sometimes even be a detriment to performance.
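
A minimal sketch of how this optimistic, transactional path looks in code, assuming an x86 CPU with TSX/RTM support and a compiler flag such as -mrtm on GCC or Clang; the table, indices and abort code are hypothetical stand-ins rather than Intel's own example, and real code would also check for RTM support at runtime:

```cpp
#include <atomic>
#include <immintrin.h>   // _xbegin / _xend / _xabort / _XBEGIN_STARTED (RTM intrinsics)

std::atomic<bool> fallback_lock{false};   // simple spinlock used when speculation fails
int table[1024];                          // hypothetical shared data structure

void lock_fallback()   { while (fallback_lock.exchange(true, std::memory_order_acquire)) {} }
void unlock_fallback() { fallback_lock.store(false, std::memory_order_release); }

// Try the update as a hardware transaction first; if the transaction aborts
// (for example, another core touched the same cache line), redo the update
// the traditional way, under the lock.
void update(int index, int value) {
    const unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        // Reading the lock inside the transaction adds it to the read set, so
        // if another thread grabs the lock mid-transaction we abort rather
        // than race with a locked writer.
        if (fallback_lock.load(std::memory_order_relaxed))
            _xabort(0xff);
        table[index] += value;             // speculative, lock-free update
        _xend();                           // commit: changes become visible atomically
        return;
    }
    lock_fallback();                       // fallback path: classic locked update
    table[index] += value;
    unlock_fallback();
}
```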

On a larger scale, such as that seen in supercomputers and up-and-coming many-core designs, performance can soar in many usage scenarios.

To build these massive multi-core processors, the processes by which we manufacture them must shrink, lowering cost, power consumption and heat while improving production yields. These processes are again pioneered by Intel in its fabrication labs. Most recently we have seen a drop to 22nm '3D Tri-Gate' transistors, which allow more transistors to fit in the same die area.
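
As a rough back-of-the-envelope illustration (my own approximation, not an Intel figure), planar density scales roughly with the inverse square of the feature size, which is why each process shrink buys so much extra room on the die:

```cpp
#include <iostream>

// Crude approximation: the number of transistors that fit into a fixed die
// area grows roughly as (old feature size / new feature size) squared.
int main() {
    const double old_node_nm = 32.0;   // assumed previous-generation process
    const double new_node_nm = 22.0;   // Intel's 22nm Tri-Gate process
    const double ratio = old_node_nm / new_node_nm;
    std::cout << "Approximate density gain: " << ratio * ratio << "x\n";   // ~2.1x
}
```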

These density improvements translate into smaller phones in the consumer space, with far longer battery life than previous generations. Equally, the same changes can be exploited to create more powerful phones with battery life identical to the previous generation. The next step forward in this sector is expected to arrive in 2015 in the form of Extreme Ultraviolet (EUV) lithography, which aims to use a smaller wavelength of light to form even smaller transistors. This should make it possible to produce transistors below 8 nanometres, which would allow the performance of this year's high-end desktop PC to be squeezed into a much smaller laptop or tablet. Simply amazing.

The next development in this field will come in the form of graphene in place of silicon.

Graphene is an atom-thick layer of carbon with exceptional electrical properties; electrons travel far faster through it than through silicon, which will help to reduce latencies and dramatically speed up chips with minimal changes to design. IBM's graphene transistors are said to have a cut-off frequency of 155 GHz, which would allow unparalleled performance increases (up to 30x-40x) if cooling can keep up. A graphene transistor only 1 atom thick and 10 atoms wide has recently been demonstrated, promising the same improvements detailed above.

Whilst the processors powering the next generation of devices will be extremely powerful, running billions of operations per second, the supporting infrastructure around them must be able to keep them fed with data at the same rate; the slowest part of today's computers is the storage subsystem. Whilst every other part of the computer has little or no mechanical reliance, the hard disk has remained slow and largely unchanged for over 30 years. It seems silly that instructions are ultimately read from a disk spinning at only 5400 revolutions per minute, via a magnetic head moved slowly over it.

SSDs are the way forward, and this is a rapidly advancing field. SSDs rely solely on large amounts of fast flash memory, which is written and read in parallel via a storage controller. This provides massive increases in read and write speeds, with hard-drive read speeds peaking at only around 50 MB/s versus an SSD's 600 MB/s; this change by itself is enough to allow a computer running Windows 8 to boot in under 4 seconds. SSDs also withstand daily life in a laptop far better than a hard disk thanks to the lack of any moving parts, making failures far less likely and sending reliability through the roof. Currently, research is underway to further increase read and write speeds via the use of stacked memory, meaning the signals have a shorter distance to travel.
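
Taking the peak speeds quoted above at face value, and assuming (my own illustrative figure) that an operating system reads roughly 2 GB of data during boot, the difference is stark:

```cpp
#include <iostream>

// Time to read the same amount of data at the peak rates quoted above.
int main() {
    const double boot_data_mb = 2048.0;   // assumed: ~2 GB read while booting
    const double hdd_mb_per_s = 50.0;     // peak hard-drive read speed (from the essay)
    const double ssd_mb_per_s = 600.0;    // peak SATA SSD read speed (from the essay)

    std::cout << "Hard disk: " << boot_data_mb / hdd_mb_per_s << " s\n";   // ~41 s
    std::cout << "SSD:       " << boot_data_mb / ssd_mb_per_s << " s\n";   // ~3.4 s
}
```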

Further research is improving the reliability of these drives, since NAND cells can only be written a certain number of times before they can no longer store data. Help is arriving in the form of fourth-generation storage controllers such as SandForce and Marvell, which include features such as over-provisioning: fitting more memory than is exposed to the user (for example 128GB of NAND on a 100GB drive) so that writes can be spread over a larger amount of NAND, minimising wear and performance degradation.
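
Using the capacities quoted above, the amount of spare NAND the controller keeps in reserve works out as follows (the percentage formula is the usual raw-versus-usable ratio):

```cpp
#include <iostream>

// Over-provisioning: NAND hidden from the user so the controller always has
// spare, pre-erased blocks over which to spread (and level) writes.
int main() {
    const double raw_nand_gb = 128.0;   // physical NAND on the drive (from the essay)
    const double usable_gb   = 100.0;   // capacity exposed to the user (from the essay)

    const double over_provisioning = (raw_nand_gb - usable_gb) / usable_gb * 100.0;
    std::cout << "Over-provisioning: " << over_provisioning << "%\n";   // 28%
}
```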

To bypass all the difficulties of packing large computing power into a small device, many are turning to cloud computing as the solution. This involves deploying large numbers of servers over a high-speed network to handle the processing handed to them by the user via a 'thin client'. A recent example is the Google Chromebook, which runs Chrome OS (a stripped-down Linux build). Instead of harbouring any usable applications locally, the laptop's functionality is available only when connected to Google's services, meaning that functionality is dictated by a third party.

This is why I fundamentally believe it will never take off. Until every application a user is accustomed to can be transferred to the cloud and run at a reasonable speed with no discernible lag, cloud computing is useless, and this is an issue that can only be ironed out by the application creators themselves. But "cloud storage", I hear you say! How useful is cloud storage for much larger files when the average upload speed in the UK is below 1.5 Mbps?

To mitigate issues such as lag and low upload speeds, a readily available high-speed wireless network with extremely low latency is required. Enter 5G. Whilst 4G LTE is still young, an incredible number of hardware manufacturers are already stepping up to develop the next generation of wireless technology. Samsung have just showcased a prototype with peak transmission speeds of 1Gbps, faster than most (if not all) wired networking solutions in use today, and well beyond LTE, which tops out at around 75 megabits per second (Mbps). According to Samsung, the technology will allow you to download a full-length movie over the air in less than a second.
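
To put those link speeds side by side, here is a small worked comparison; the 1 GB file size is my own example figure, while the three rates are the ones quoted above:

```cpp
#include <iostream>

// Time to transfer a 1 GB file at the quoted link rates. The rates are in
// megabits per second, so the file size is first converted to megabits.
int main() {
    const double file_megabits  = 1024.0 * 8.0;   // 1 GB expressed in megabits
    const double uk_upload_mbps = 1.5;            // average UK upload speed (from the essay)
    const double lte_peak_mbps  = 75.0;           // LTE peak (from the essay)
    const double proto_5g_mbps  = 1000.0;         // Samsung 5G prototype (from the essay)

    std::cout << "UK upload:    " << file_megabits / uk_upload_mbps / 60.0 << " min\n"; // ~91 min
    std::cout << "LTE peak:     " << file_megabits / lte_peak_mbps << " s\n";           // ~109 s
    std::cout << "5G prototype: " << file_megabits / proto_5g_mbps << " s\n";           // ~8 s
}
```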

The prototype made use of 64 antennas, though with an estimated lead time of 7 years on the project this number is bound to decrease. Consumers will not see such a solution until 2020 at the earliest, though System-on-Chip (SoC) providers such as Nvidia and Qualcomm will no doubt begin integrating it into their chipsets as early as 2017.

In the scientific realm, distributed computing across a large supercomputer is the standard way of accomplishing any meaningful research, with clusters of powerful computers working together to crunch data down into useful information. Soon we could see these mammoths replaced by smaller, single quantum computers, capable of accelerating specific calculations by orders of magnitude. By encoding information in more than just two states (0 and 1) through manipulation of an electron's spin, qubits can be created, and these can hold multiple states at the same time in a phenomenon known as superposition.
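
A rough way to see where that parallelism comes from: an n-qubit register is described by 2^n complex amplitudes, so the classical description grows exponentially. The sketch below simply counts those amplitudes and the memory a conventional machine would need to hold them, assuming 16 bytes per amplitude (a double-precision complex number); the figures are my own illustration rather than a rigorous comparison:

```cpp
#include <cstdint>
#include <iostream>

// An n-qubit state is a superposition over 2^n basis states; a classical
// simulator must track one complex amplitude per basis state.
int main() {
    for (const int qubits : {10, 20, 30, 40}) {
        const std::uint64_t amplitudes = std::uint64_t{1} << qubits;              // 2^n
        const double gigabytes = amplitudes * 16.0 / (1024.0 * 1024.0 * 1024.0);  // 16 B each
        std::cout << qubits << " qubits: " << amplitudes
                  << " amplitudes (~" << gigabytes << " GB to store classically)\n";
    }
}
```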

This high level of parallelism allows many calculations to be performed at once, dramatically increasing computational throughput: a 30-qubit quantum computer would equal the processing power of a conventional computer running at 10 teraflops. This interesting development has a significant drawback, however. Simply measuring an electron's spin will change it and force it to collapse to a single value, effectively transforming the million-dollar quantum computer back into a standard, run-of-the-mill binary system. Quantum entanglement is currently being studied and put forward as a solution to this problem: in quantum physics, applying an outside force to two atoms can cause them to become entangled, so that the second atom takes on properties correlated with the first. Left alone, an atom will spin in all directions; the instant it is disturbed it settles on one spin, or one value, and at the same time the second, entangled atom will take on the opposite spin, or value. This allows scientists to infer the value of the qubits without directly measuring them. [1]

[1] http://www.howstuffworks.com/quantum-computer.htm

Hopefully, with the introduction of these new parts over the coming years, we will begin to see major scientific developments come through to help explain the world around us. This is, of course, the most useful application of ICT in today's world (rather than the cash cow that is consumer electronics). Such advances could help to cure cancer and run simulations that may one day end world hunger, and the sooner we have this technology in our hands, the faster we can work towards a better future.

The irony of today's situation is that the supercomputers running global-warming simulations consume far more electricity in the process than their results could ever justify. I hope that changes, and soon.
