Green computing, green ICT (as used by the International Federation of Global & Green ICT, "IFGICT"), green IT, or ICT sustainability, is the study and practice of environmentally sustainable computing or IT.
The goals of green computing are similar to those of green chemistry: reduce the use of hazardous materials, maximize energy efficiency during the product's lifetime, and promote the recyclability or biodegradability of defunct products and factory waste. Green computing is important for all classes of systems, ranging from handheld devices to large-scale data centers.
Many corporate IT departments have green computing initiatives to reduce the environmental effect of their IT operations.[1]
In 1992, the U.S. Environmental Protection Agency launched Energy Star, a voluntary labeling program designed to promote and recognize energy efficiency in monitors, climate control equipment, and other technologies. This resulted in the widespread adoption of sleep mode among consumer electronics. Concurrently, the Swedish organization TCO Development launched the TCO Certified program to promote low magnetic and electrical emissions from CRT-based computer displays; this program was later expanded to include criteria on energy consumption, ergonomics, and the use of hazardous materials in construction.[2]
The Organisation for Economic Co-operation and Development (OECD) has published a survey of over 90 government and industry initiatives on "Green ICTs", i.e. information and communication technologies, the environment and climate change. The report concludes that initiatives tend to concentrate on greening ICTs themselves rather than on deploying ICTs to tackle global warming and environmental degradation. In general, only 20% of initiatives have measurable targets, with government programs tending to include targets more frequently than business associations.[3]
Many governmental agencies have continued to implement standards and regulations that encourage green computing. The Energy Star program was revised in October 2006 to include stricter efficiency requirements for computer equipment, along with a tiered ranking system for approved products.[4][5]
By 2008, 26 US states had established statewide recycling programs for obsolete computers and consumer electronics equipment.[6] The statutes either impose an "advance recovery fee" on each unit sold at retail or require the manufacturers to reclaim the equipment at disposal.
In 2009, the American Recovery and Reinvestment Act (ARRA) was signed into law by President Obama. The bill allocated over $90 billion to be invested in green initiatives (renewable energy, smart grids, energy efficiency, etc.). In January 2010, the U.S. Department of Energy granted $47 million of the ARRA money to projects aimed at improving the energy efficiency of data centers. The projects funded research to optimize data center hardware and software, improve the power supply chain, and advance data center cooling technologies.[7]
Modern IT systems rely upon a complicated mix of people, networks, and hardware; as such, a green computing initiative must cover all of these areas as well. A solution may also need to address end user satisfaction, management restructuring, regulatory compliance, and return on investment (ROI). There are also considerable fiscal motivations for companies to take control of their own power consumption; "of the power management tools available, one of the most powerful may still be simple, plain, common sense."[15]
Gartner maintains that the PC manufacturing process accounts for 70% of the natural resources used in the life cycle of a PC.[16] More recently, Fujitsu released a life-cycle assessment (LCA) of a desktop showing that manufacturing and end of life account for the majority of the desktop's ecological footprint.[17] Therefore, the biggest contribution to green computing is usually to prolong the equipment's lifetime. Another Gartner report recommends to "Look for product longevity, including upgradability and modularity."[18] For instance, manufacturing a new PC leaves a far larger ecological footprint than manufacturing a new RAM module to upgrade an existing one.
Data center facilities are heavy consumers of energy, accounting for between 1.1% and 1.5% of the world's total energy use in 2010.[1] The U.S. Department of Energy estimates that data center facilities can consume 100 to 200 times as much energy as standard office buildings.[19]
Energy efficient data center design should address all aspects of a data center's energy use: from the IT equipment to the HVAC (heating, ventilation, and air conditioning) equipment to the location, configuration, and construction of the building itself.
The U.S. Department of Energy specifies five primary areas on which to focus energy efficient data center design best practices: information technology (IT) systems, environmental conditions, air management, cooling systems, and electrical systems.[20]
Additional energy efficient design opportunities specified by the U.S. Department of Energy include on-site electrical generation and recycling of waste heat.[21]
Energy efficient data center design should help to better utilize a data center's space, and increase performance and efficiency.
In 2018, three new US patents described facility designs that simultaneously cool computing equipment and produce electrical power from internal and external waste heat. All three use a silo design in which the air cooling the silo's computing racks is recirculated: US Patent 9,510,486 uses the recirculating air for power generation; its sister patent, US Patent 9,907,213, forces the recirculation of that air; and a further sister patent, US Patent 10,020,436, exploits the resulting temperature differences to achieve negative power usage effectiveness, in which the differences in temperature are at times large enough that the facility can run on recovered heat rather than on the external power used for computing.
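For reference, power usage effectiveness (PUE) is the ratio of a facility's total energy use to the energy delivered to its IT equipment; a PUE of 1.0 would mean every watt goes to computing. A minimal calculation sketch with made-up figures:

    # Power usage effectiveness (PUE): total facility power / IT equipment power.
    # The figures below are hypothetical, for illustration only.
    it_power_kw = 800.0        # power drawn by servers, storage, and network gear
    overhead_kw = 320.0        # cooling, lighting, power-distribution losses, etc.

    total_facility_kw = it_power_kw + overhead_kw
    pue = total_facility_kw / it_power_kw
    print(f"PUE = {pue:.2f}")  # 1.40: 40% overhead on top of the IT load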
The efficiency of algorithms affects the amount of computer resources required for any given computing function and there are many efficiency trade-offs in writing programs. Algorithm changes, such as switching from a slow (e.g. linear) search algorithm to a fast (e.g. hashed or indexed) search algorithm can reduce resource usage for a given task from substantial to close to zero. In 2009, a study by a physicist at Harvard estimated that the average Google search released 7 grams of carbon dioxide (CO₂).[22] However, Google disputed this figure, arguing instead that a typical search produced only 0.2 grams of CO₂.[23]
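As a small illustration of such an algorithmic trade-off, the sketch below (with arbitrary data sizes) compares a linear membership test against a hashed one in Python:

    import time

    items = list(range(1_000_000))
    lookup_set = set(items)              # hashed index built once
    targets = [999_999] * 1_000          # worst case for the linear scan

    # Linear search: scans the list for every query.
    t0 = time.perf_counter()
    hits = sum(1 for t in targets if t in items)
    linear_s = time.perf_counter() - t0

    # Hashed search: average constant-time membership test.
    t0 = time.perf_counter()
    hits = sum(1 for t in targets if t in lookup_set)
    hashed_s = time.perf_counter() - t0

    print(f"linear: {linear_s:.3f}s  hashed: {hashed_s:.6f}s  ({hits} hits)")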
Algorithms can also be used to route data to data centers where electricity is less expensive. Researchers from MIT, Carnegie Mellon University, and Akamai have tested an energy allocation algorithm that successfully routes traffic to the location with the cheapest energy costs. The researchers project up to a 40 percent savings on energy costs if their proposed algorithm were to be deployed. However, this approach does not actually reduce the amount of energy being used; it reduces only the cost to the company using it. Nonetheless, a similar strategy could be used to direct traffic to rely on energy that is produced in a more environmentally friendly or efficient way. A similar approach has also been used to cut energy usage by routing traffic away from data centers experiencing warm weather; this allows computers to be shut down to avoid using air conditioning.[24]
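The core idea can be sketched in a few lines; the data center names and electricity prices below are hypothetical, not taken from the cited study:

    # Hypothetical electricity prices in $/kWh for each candidate data center.
    prices = {"us-east": 0.11, "us-west": 0.09, "eu-north": 0.07}

    def cheapest_datacenter(prices):
        """Return the data center with the lowest current electricity price."""
        return min(prices, key=prices.get)

    print(cheapest_datacenter(prices))  # eu-north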
Larger server centers are sometimes located where energy and land are inexpensive and readily available. Local availability of renewable energy, climate that allows outside air to be used for cooling, or locating them where the heat they produce may be used for other purposes could be factors in green siting decisions.
Approaches to actually reducing the energy consumption of network devices through proper network and device management techniques are surveyed in [25]. The authors group the approaches into four main strategies, namely (i) Adaptive Link Rate (ALR), (ii) interface proxying, (iii) energy-aware infrastructure, and (iv) energy-aware applications.
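As a rough illustration of strategy (i), Adaptive Link Rate lowers a link's operating speed when traffic is light; the rates and headroom factor in this sketch are assumptions, not values from the survey:

    # Available link rates in Mbit/s and a simple utilization-based policy.
    RATES_MBPS = [100, 1_000, 10_000]

    def select_link_rate(offered_load_mbps, headroom=1.25):
        """Pick the slowest rate that still leaves some headroom above the load."""
        for rate in RATES_MBPS:
            if rate >= offered_load_mbps * headroom:
                return rate
        return RATES_MBPS[-1]

    print(select_link_rate(60))    # 100  -> link can idle down to Fast Ethernet
    print(select_link_rate(700))   # 1000 -> gigabit needed for this load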
Computer virtualization refers to the abstraction of computer resources, such as the process of running two or more logical computer systems on one set of physical hardware. The concept originated with the IBM mainframe operating systems of the 1960s but was commercialized for x86-compatible computers only in the 1990s. With virtualization, a system administrator can combine several physical systems into virtual machines on one single, powerful system, thereby conserving resources by removing the need for the original hardware and reducing power and cooling consumption. Virtualization can assist in distributing work so that servers are either busy or put in a low-power sleep state. Several commercial companies and open-source projects now offer software packages to enable a transition to virtual computing. Intel and AMD have also built proprietary virtualization enhancements to the x86 instruction set into each of their CPU product lines in order to facilitate virtualized computing.
Newer virtualization technologies, such as operating-system-level virtualization (containers), can also be used to reduce energy consumption. These technologies make more efficient use of resources and therefore reduce energy consumption by design. Consolidation with containers is also more efficient than consolidation with virtual machines, so more services can be deployed on the same physical machine, reducing the amount of hardware needed.[26]
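The consolidation idea can be sketched as a simple bin-packing problem: pack workloads onto as few hosts as possible so idle hosts can be powered down. The host capacity and workload sizes below are invented for illustration:

    # First-fit-decreasing bin packing: place each workload on the first host
    # with enough spare CPU capacity, so unused hosts can be powered down.
    HOST_CAPACITY = 16                   # CPU cores per physical host (assumed)
    workloads = [6, 2, 9, 4, 3, 7, 1]    # cores requested by each VM/container

    hosts = []                           # cores used on each powered-on host
    for load in sorted(workloads, reverse=True):
        for i, used in enumerate(hosts):
            if used + load <= HOST_CAPACITY:
                hosts[i] = used + load
                break
        else:
            hosts.append(load)           # no room anywhere: power on another host

    print(f"{len(hosts)} hosts needed for {len(workloads)} workloads: {hosts}")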
Terminal servers have also been used in green computing. When using the system, users at a terminal connect to a central server; all of the actual computing is done on the server, but the end user experiences the operating system on the terminal. These can be combined with thin clients, which use up to 1/8 the amount of energy of a normal workstation, resulting in a decrease of energy costs and consumption.[citation needed] There has been an increase in using terminal services with thin clients to create virtual labs. Examples of terminal server software include Terminal Services for Windows and the Linux Terminal Server Project (LTSP) for the Linux operating system. Software-based remote desktop clients such as Windows Remote Desktop and RealVNC can provide similar thin-client functions when run on low power, commodity hardware that connects to a server.[27]
The Advanced Configuration and Power Interface (ACPI), an open industry standard, allows an operating system to directly control the power-saving aspects of its underlying hardware. This allows a system to automatically turn off components such as monitors and hard drives after set periods of inactivity. In addition, a system may hibernate, a state in which most components (including the CPU and the system RAM) are turned off. ACPI is a successor to an earlier Intel-Microsoft standard called Advanced Power Management, which allows a computer's BIOS to control power management functions.[citation needed]
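On Linux, the sleep states backed by ACPI can be inspected through the kernel's standard sysfs interface; a minimal sketch (availability of individual states varies by machine):

    # Read the ACPI-backed sleep states the kernel exposes (Linux only).
    # Typical output: "freeze mem disk" (suspend-to-idle, suspend-to-RAM, hibernate).
    try:
        with open("/sys/power/state") as f:
            print("Supported sleep states:", f.read().strip())
    except OSError as e:
        print("Could not read power states:", e)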
Some programs allow the user to manually adjust the voltages supplied to the CPU, which reduces both the amount of heat produced and the electricity consumed. This process is called undervolting. Some CPUs can automatically undervolt the processor depending on the workload; this technology is called "SpeedStep" on Intel processors, "PowerNow!"/"Cool'n'Quiet" on AMD chips, LongHaul on VIA CPUs, and LongRun on Transmeta processors.
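Related dynamic voltage and frequency scaling can be observed on Linux through the cpufreq sysfs interface; a small sketch using the standard kernel paths (not all systems expose them):

    # Inspect the CPU frequency governor and current frequency on Linux.
    base = "/sys/devices/system/cpu/cpu0/cpufreq"

    def read(name):
        with open(f"{base}/{name}") as f:
            return f.read().strip()

    try:
        print("Governor:      ", read("scaling_governor"))         # e.g. "powersave"
        print("Current (kHz): ", read("scaling_cur_freq"))
        print("Available govs:", read("scaling_available_governors"))
    except OSError as e:
        print("cpufreq interface not available:", e)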
Data centers, which have been criticized for their extraordinarily high energy demand, are a primary focus for proponents of green computing.[28] According to a Greenpeace study, data centers represent 21% of the electricity consumed by the IT sector, which is about 382 billion kWh a year.[29]
Data centers can potentially improve their energy and space efficiency through techniques such as storage consolidation and virtualization. Many organizations are aiming to eliminate underutilized servers, which results in lower energy usage.[30] The U.S. federal government has set a minimum 10% reduction target for data center energy usage by 2011.[28] With the aid of a self-styled ultraefficient evaporative cooling technology, Google Inc. has been able to reduce its energy consumption to 50% of that of the industry average.[28]
Microsoft Windows has included limited PC power management features since Windows 95.[31] These initially provided for stand-by (suspend-to-RAM) and a monitor low power state. Further iterations of Windows added hibernate (suspend-to-disk) and support for the ACPI standard. Windows 2000 was the first NT-based operating system to include power management. This required major changes to the underlying operating system architecture and a new hardware driver model. Windows 2000 also introduced Group Policy, a technology that allowed administrators to centrally configure most Windows features. However, power management was not one of those features. This is probably because the power management settings design relied upon a connected set of per-user and per-machine binary registry values,[32] effectively leaving it up to each user to configure their own power management settings.
This approach, which is not compatible with Windows Group Policy, was repeated in Windows XP. The reasons for this design decision by Microsoft are not known, and it has resulted in heavy criticism.[33] Microsoft significantly improved this in Windows Vista[34] by redesigning the power management system to allow basic configuration through Group Policy, although the support offered is limited to a single per-computer policy. Windows 7 retains these limitations but adds refinements in timer coalescing, processor power management,[35][36] and display panel brightness. The most significant change in Windows 7 is in the user experience: the prominence of the default High Performance power plan has been reduced with the aim of encouraging users to save power.
There is a significant market in third-party PC power management software offering features beyond those present in the Windows operating system.[37][38][39] Most products offer Active Directory integration and per-user/per-machine settings, with the more advanced offering multiple power plans, scheduled power plans, anti-insomnia features, and enterprise power usage reporting. Notable products include 1E NightWatchman,[40][41] Data Synergy PowerMAN (Software),[42][43] Faronics Power Save,[44] Verdiem SURVEYOR, and EnviProt Auto Shutdown Manager.[45]
Linux systems started to provide laptop-optimized power-management in 2005,[46] with power-management options being mainstream since 2009.[47][48][49]
Desktop computer power supplies are in general 70–75% efficient,[50] dissipating the remaining energy as heat. A certification program called 80 Plus certifies PSUs that are at least 80% efficient; typically these models are drop-in replacements for older, less efficient PSUs of the same form factor. As of July 20, 2007, all new Energy Star 4.0-certified desktop PSUs must be at least 80% efficient.[51]
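To put these percentages in perspective, wall draw equals the delivered load divided by efficiency; a quick worked comparison using an arbitrary 300 W load:

    # Wall power and waste heat for the same DC load at two PSU efficiencies.
    load_w = 300.0                      # example DC load delivered to the components

    for efficiency in (0.75, 0.80):
        wall_w = load_w / efficiency    # power drawn from the outlet
        waste_w = wall_w - load_w       # dissipated as heat inside the PSU
        print(f"{efficiency:.0%}: draws {wall_w:.0f} W, wastes {waste_w:.0f} W as heat")
    # 75%: draws 400 W, wastes 100 W;  80%: draws 375 W, wastes 75 W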
Smaller form factor (e.g., 2.5 inch) hard disk drives often consume less power per gigabyte than physically larger drives.[52][53] Unlike hard disk drives, solid-state drives store data in flash memory or DRAM. With no moving parts, power consumption may be reduced somewhat for low-capacity flash-based devices.[54][55]
As hard drive prices have fallen, storage farms have tended to increase in capacity to make more data available online. This includes archival and backup data that would formerly have been saved on tape or other offline storage. The increase in online storage has increased power consumption. Reducing the power consumed by large storage arrays, while still providing the benefits of online storage, is a subject of ongoing research.[56]
A fast GPU may be the largest power consumer in a computer.[57]
Energy-efficient display options include:
- Electronic paper, which, unlike other display technologies, does not use any power while displaying a static image.[58]
- LCD monitors, which typically use less power than CRT monitors; CRTs also contain significant amounts of lead. LCD monitors typically use a cold-cathode fluorescent bulb to provide light for the display.
- LED-backlit displays, which use an array of light-emitting diodes in place of the fluorescent bulb, reducing the amount of electricity used by the display.[59] Fluorescent back-lights also contain mercury, whereas LED back-lights do not.
A light-on-dark color scheme, also called dark mode, requires less energy to display on newer display technologies such as OLED,[60] which benefits battery life and energy consumption. While an OLED display consumes around 40% of the power of an LCD when showing an image that is primarily black, it can use more than three times as much power to display an image with a white background, such as a document or web site.[61] This can increase energy usage and reduce battery life unless a light-on-dark color scheme is used. A 2018 article in Popular Science suggests that "Dark mode is easier on the eyes and battery"[62] and that displaying white at full brightness uses roughly six times as much power as pure black on a Google Pixel, which has an OLED display.[63] In 2019, Apple announced that a light-on-dark mode would be available across all native applications in iOS 13 and iPadOS, and that third-party developers would be able to implement their own dark themes.[64] Google announced that an official dark mode would come to Android with the release of Android 10.[65]
Recycling computing equipment can keep harmful materials such as lead, mercury, and hexavalent chromium out of landfills, and can also replace equipment that otherwise would need to be manufactured, saving further energy and emissions. Computer systems that have outlived their particular function can be re-purposed, or donated to various charities and non-profit organizations.[66] However, many charities have recently imposed minimum system requirements for donated equipment.[67] Additionally, parts from outdated systems may be salvaged and recycled through certain retail outlets[68][69] and municipal or private recycling centers. Computing supplies, such as printer cartridges, paper, and batteries may be recycled as well.[70]
A drawback to many of these schemes is that computers gathered through recycling drives are often shipped to developing countries where environmental standards are less strict than in North America and Europe.[71] The Silicon Valley Toxics Coalition estimates that 80% of the post-consumer e-waste collected for recycling is shipped abroad to countries such as China and Pakistan.[72]
As of 2011, the collection rate of e-waste was still very low, even in the most environmentally conscious countries such as France, where from 2006 to 2009 only about 14% of the electronic equipment sold each year was collected as e-waste.[73]
The recycling of old computers raises an important privacy issue. Old storage devices still hold private information, such as emails, passwords, and credit card numbers, which can be recovered simply by using software freely available on the Internet, because deleting a file does not actually remove its data from the hard drive. Before recycling a computer, users should remove the hard drive (or drives, if there is more than one) and either physically destroy it or store it somewhere safe. Some authorized hardware recycling companies will accept the computer for recycling, and they typically sign a non-disclosure agreement.[74]
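For illustration, the sketch below overwrites a file's contents before deleting it, which is closer to what secure-erase tools do than ordinary deletion; it is a simplified example only, and on SSDs or copy-on-write filesystems overwriting in place is not reliable, so removing or destroying the drive remains the safer option:

    import os

    def overwrite_and_delete(path, passes=1):
        """Overwrite a file's contents with random bytes before unlinking it.
        Illustrative only: not a substitute for full-disk erasure or physical
        destruction, especially on SSDs and copy-on-write filesystems."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())
        os.remove(path)

    # overwrite_and_delete("old_tax_records.pdf")   # hypothetical file name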
Cloud computing addresses two major ICT challenges related to green computing: energy usage and resource consumption. Virtualization, dynamic provisioning, multi-tenancy, and green data center approaches enable cloud computing to lower carbon emissions and energy usage to a great extent. Large enterprises and small businesses can reduce their direct energy consumption and carbon emissions by up to 30% and 90% respectively by moving certain on-premises applications into the cloud.[75] A common example is online shopping, which lets people purchase products and services over the Internet rather than driving to a physical shop, reducing the greenhouse gas emissions related to travel.[76]
New technologies such as edge and fog computing can also reduce energy consumption. They redistribute computation closer to the point of use, thus reducing energy costs in the network.[77] Furthermore, with smaller data centers, the energy used for operations such as cooling and maintenance is greatly reduced.
Teleconferencing and telepresence technologies are often implemented in green computing initiatives. The advantages are many: increased worker satisfaction, reduced greenhouse gas emissions related to travel, and increased profit margins as a result of lower overhead costs for office space, heat, lighting, etc.[78] The savings are significant: the average annual energy consumption for U.S. office buildings is over 23 kilowatt-hours per square foot, with heat, air conditioning, and lighting accounting for 70% of all energy consumed.[79] Other related initiatives, such as hoteling, reduce the square footage per employee as workers reserve space only when they need it.[80] Many types of jobs, such as sales, consulting, and field service, integrate well with this technique.
Voice over IP (VoIP) reduces the telephony wiring infrastructure by sharing the existing Ethernet copper.[81] VoIP and phone extension mobility have also made hot desking more practical. Wi-Fi consumes 4 to 10 times less energy than 4G.[82]
The energy consumption of information and communication technologies (ICTs) has been estimated at 9.4% of the total electricity produced in the US and 5.3% worldwide.[83] The energy consumption of ICTs is today significant even compared with other industries. Some studies have tried to identify the key energy indices that allow a meaningful comparison between different devices (network elements).[84] This analysis focused on how to optimise device and network consumption for carrier telecommunications in itself, with the aim of giving an immediate sense of the relationship between network technology and environmental effect. These studies are still at an early stage; the gap to fill in this sector remains large, and further research will be necessary.
The inaugural Green500 list was announced on November 15, 2007, at SC|07. As a complement to the TOP500, the unveiling of the Green500 ushered in a new era where supercomputers can be compared by performance-per-watt.[85] As of 2019, two Japanese supercomputers topped the Green500 energy efficiency ranking with performance exceeding 16 GFLOPS/watt, and two IBM AC922 systems followed with performance exceeding 15 GFLOPS/watt.
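Performance-per-watt, the metric behind the Green500, is simply sustained performance divided by average power draw; a toy calculation with hypothetical numbers:

    # Green500-style metric: sustained GFLOPS per watt of average power draw.
    rmax_gflops = 1_200_000.0    # hypothetical sustained LINPACK performance
    avg_power_w = 80_000.0       # hypothetical average power during the run

    print(f"{rmax_gflops / avg_power_w:.1f} GFLOPS/watt")   # 15.0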
Degree and postgraduate programs provide training in a range of information technology concentrations along with sustainable strategies, educating students to build and maintain systems while reducing harm to the environment. The Australian National University (ANU) offers "ICT Sustainability" as part of its information technology and engineering masters programs.[86] Athabasca University offers a similar course, "Green ICT Strategies",[87] adapted from the ANU course notes by Tom Worthington.[88] In the UK, Leeds Beckett University offers an MSc Sustainable Computing program in both full-time and part-time access modes.[89]
Some certifications demonstrate that an individual has specific green computing knowledge, including:
There are many blogs and other user-created references that can be used to gain more insight into green computing strategies, technologies, and business benefits. Many students in management and engineering courses have helped raise awareness of green computing.[93][94]
Since 2010, Greenpeace has maintained a list of ratings of prominent technology companies in several countries based on how clean the energy used by that company is, ranging from A (the best) to F (the worst).[95]
Digitalization has brought additional energy consumption; its energy-increasing effects have been greater than its energy-reducing effects. Four effects that increase energy consumption are: