Friday, 17 June 2016

Ubuntu 16.04: A desktop for Linux diehards


Every two years, a release of Ubuntu is designated Long-Term Support (LTS). Ubuntu 16.04, code-named Xenial Xerus, is the latest in that line. LTS releases are supported for five years instead of the usual nine months, but the designation has less obvious implications too. LTS releases are usually geared toward the enterprise, which means they generally include fewer new features and more testing. Both qualities are attractive to risk-averse companies running production software on Ubuntu servers, but they offer comparatively little to the desktop user.
However, Xenial Xerus bucks this trend with a handful of new features and some welcome improvements. With the new app store, the stand-alone calendar, and the movable Launcher, Xenial might be one of the more feature-rich releases in years. In this review, I’ll start by walking through these new pieces and improvements, and end with a look at how Ubuntu stacks up -- in terms of installation, ease of use, features, and so on -- against other desktop operating systems you might be familiar with.

Cisco fires off recall on fire-prone switches

IE5000 industrial Ethernet devices have a short that could spark combustion.

Cisco is recalling Ethernet switches that pose a potential fire hazard because of damage to the source wiring that can cause a short. The company issued a field notice last week on the situation, which affects its IE5000 industrial Ethernet switches.

According to the field notice, potential damage to the source wiring can cause a short to the metal enclosure or barrier, which could pose an electrical or fire-safety hazard for the end user.
Cisco says the issue was observed in a single device that had not yet shipped; no units at customer sites were involved. The switch was found to have a short in a damaged power-harness cable during a manufacturing test.
Upon discovery, Cisco initiated a hardware upgrade program to replace any impacted units. Affected devices can be identified through serial-number validation, from the version IDs and deviation labels on the top and bottom of the switch, or from the Device Manager screen.
Cisco has already determined that IE5000 switches with version ID V02 and deviation label #D517262 are not affected by the short.


Cisco Engineers Enterprise Genome for Software

Digital Network Architecture (DNA) maps the network sequence to virtualization and ease of operations.

SAN DIEGO – Cisco this week introduced a software-driven architecture designed to extend policy throughout an enterprise’s wired and wireless network, from branch to edge to core.

Cisco’s Digital Network Architecture (DNA) is a blueprint for building an enterprise network with virtualization, automation, analytics, cloud service management and programmability for ease of operation and management. It is delivered through Cisco ONE software licensing on a variety of Cisco platforms, and is anchored by the company’s APIC-Enterprise Module SDN controller, which has been slow to emerge from development and trials.


Arista takes aim at core router market with Universal Spine

The concept of using switching infrastructure as a replacement for a core router is certainly nothing new. Years ago, vendors like Foundry Networks and Force10 tried to make the case but were unsuccessful. Although the switches were beefy and had massive port density, they were missing key features such as MPLS support, the ability to hold a full Internet routing table, and carrier-class resiliency. From an economic perspective, the cost per port on a switch is about one-tenth what it is on a router, so there is a financial argument to be made, but the products just didn’t have the technical chops to hang with big routers.
Arista Networks is now taking a shot at this market again, but with a significantly different approach. Arista is attempting to disrupt the core router market by replacing the big boxes with a distributed spine, similar to the way the company disrupted the legacy data center switching market. Spine-leaf configurations are well accepted today in big data centers and cloud providers, but that wasn’t the case just a few years ago, when there was a certain religion around big chassis deployed in multiple tiers.
Arista’s Universal Spine architecture is built on the same concept but moves the spine into the core. The solution is enabled by Arista’s new 7500R Series switch/router platform, specifically designed for cloud providers and large enterprises looking to build next-generation data centers. While switches can be very robust and loaded with router-like features, the fact is they aren’t routers, and that’s where the switch vendors failed in the past. Arista’s 7500R is the first combined switch and router that uses a switch-like architecture but has been beefed up with router features. Key attributes of the Universal Spine are:
  • Total fabric capacity of a whopping 115 Tbps
  • Up to 432 wirespeed 100 Gig ports
  • Flexible port speeds of 1/10/25/40/50/100 Gig
  • Lossless forwarding using virtual output queues (illustrated in the sketch after this list)
  • Arista FlexRoute technology that delivers over a million routes, more than enough for an Internet routing table with MPLS, Segment Routing and EVPN
  • NEBS compliance - a key certification for service providers
  • Programmable traffic engineering with support for 128,000 MPLS, GRE, VXLAN and IP-in-IP tunnels
  • Hitless upgrades
  • Millisecond network convergence
  • N+1 fabric resiliency
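Virtual output queuing, mentioned in the list above, is easiest to see in a toy model. The following Python sketch is my own minimal illustration of the concept, not Arista’s implementation; all names and numbers are invented. It contrasts a single FIFO input queue, where one congested output blocks traffic bound for idle outputs (head-of-line blocking), with per-output virtual queues that let that traffic proceed.

    from collections import deque

    # Packets arriving at one input port, tagged with their destination output port.
    arrivals = [("p1", 0), ("p2", 0), ("p3", 1), ("p4", 2)]
    busy = {0}  # output 0 is congested this cycle

    # Single FIFO input queue: p3 and p4 are stuck behind p1,
    # even though outputs 1 and 2 are free (head-of-line blocking).
    fifo = deque(arrivals)
    sent_fifo = []
    while fifo and fifo[0][1] not in busy:
        sent_fifo.append(fifo.popleft()[0])

    # Virtual output queues: one queue per output port at the input,
    # so traffic to uncongested outputs is forwarded immediately.
    voqs = {}
    for pkt, out in arrivals:
        voqs.setdefault(out, deque()).append(pkt)
    sent_voq = [voqs[out].popleft() for out in sorted(voqs) if out not in busy]

    print("FIFO forwarded:", sent_fifo)  # []
    print("VOQ forwarded:", sent_voq)    # ['p3', 'p4']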
The 7500R series comes in three form factors – 4-, 8- and 12-slot chassis. Arista also announced three new wire-speed line cards for the 7500R. These include:
  • 36 QSFP ports with a choice of 10/25/40/50/100 Gig
  • 36 40 Gig ports with flexible combinations of 10 Gig and up to 6 ports of 100 Gig
  • 48 10 Gig SFP+ and 2 100 Gig QSFP
The 7504R, 7508R and all line cards are available now; the 7512R will be available in the third quarter of 2016. The price per 100 Gig port starts at $3,000, which is less than 10% of the price of a traditional router.
Arista’s Universal Spine creates an interesting alternative to a traditional core router. In practice, I don’t expect to see tier 1 or tier 2 network operators jumping at it until it’s proven. Telcos tend to be highly risk-averse and want to see technology used in other markets first. Arista quoted Netflix in its press release, and big cloud providers should prove to be an excellent test bed for Universal Spine. It’s an interesting vision, and now it’s time for Arista to prove it works.



Review: Wave 2 Wi-Fi delivers dramatic performance boost for home networks

Mention "home Wi-Fi router" and you’ll probably think of a cheap device with cruddy performance. But dramatic changes are coming, with big boosts in bandwidth, thanks to two new Wi-Fi technologies.


Both beamforming and MU-MIMO (an acronym for the mouthful that is “multi-user, multiple input, multiple output”) are transformational technologies. We tested them in the new Linksys EA7500, the company’s first small office/home office router to support the so-called Wave 2 technologies.
(For the record, “Wave 2” is a marketing and not a technical specification. The IEEE 802.11ac standards describe how beamforming and MU-MIMO must be implemented.)
The first of these technologies, beamforming, makes more efficient use of the radios in Wi-Fi routers. Before beamforming, Wi-Fi routers worked like light bulbs, with signals radiating in all directions. Problem is, signals only need to travel where Wi-Fi devices are – and that’s typically just a small part of the total coverage area.
With beamforming, Wi-Fi routers and clients exchange information about their locations. Then, the router alters its phase and power for a better signal. The result: far more efficient use of radio signals, faster forwarding, and possibly greater range.
Beamforming comes in two flavors. With explicit beamforming, both client and Wi-Fi router share information about radio reception from their respective locations. This allows for the most efficient “steering” of signals between the Wi-Fi router and clients. Many recent devices, such as Apple smartphones and tablets made within the past two years, support explicit beamforming.
Even older clients may still benefit. With implicit beamforming, the Wi-Fi router steers signals based on its own measurements, without signal information from clients. Implicit beamforming doesn’t work as well as the explicit version, but some performance gains still are possible.
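The payoff from phase alignment can be seen with a little arithmetic. The Python sketch below is a generic two-transmitter illustration of constructive versus destructive interference, not anything from the Linksys firmware: when the two signals arrive in step at the client, their amplitudes add; a half-cycle offset cancels them.

    import numpy as np

    phase = np.linspace(0, 2 * np.pi, 1000)  # one carrier cycle

    def peak_amplitude(offset):
        """Peak of two unit-amplitude carriers summed at the client,
        where the second arrives shifted by `offset` radians."""
        return np.max(np.abs(np.sin(phase) + np.sin(phase + offset)))

    print("Aligned (0 rad):  %.2f" % peak_amplitude(0.0))        # ~2.00
    print("Quarter cycle:    %.2f" % peak_amplitude(np.pi / 2))  # ~1.41
    print("Opposed (pi rad): %.2f" % peak_amplitude(np.pi))      # ~0.00

In other words, a beamforming router that knows roughly where a client is can choose per-antenna phases so the “aligned” case happens at the client’s location, rather than radiating uniformly like a light bulb.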
Beamforming makes possible a much bigger advance in Wi-Fi: MU-MIMO.

Is the Cisco 6500 Series invincible?

The Cisco 6500 Series has proven itself time and time again to be a mainstay in the networking industry. Cisco has done a commendable job with continued enhancements to ensure that the industry’s golden child maintains relevance. If this is the case, why do IT professionals still fear its supposedly impending obsolescence and feel pressure to upgrade to newer models? Let’s just say rumors of its demise are greatly exaggerated.

As the industry moves toward 10/40Gig and higher, the need for bandwidth and port density only increases. Software-defined networking (SDN), while certainly worthy of consideration, may not be the best option for all organizations just yet. However, the need for high-speed switching connectivity and robust services remains a concern for the here and now. Enter: The Cisco 6500 Series.
Since its birth in 1999, the Cisco Catalyst 6500 Series has grown considerably in available options and bandwidth potential. The introduction of its cousin in 2013, the Cisco Catalyst 6800, is further proof of its staying power. That product line was designed to ease users into an upgrade while preserving some adaptability with existing infrastructure, including compatibility with 6500 Series blades, greater throughput potential, and options for Instant Access satellite switches and other newer technology.


Cisco Nexus 1100 Series Cloud Services Platforms

  • Dedicated hardware platform supports critical virtualization infrastructure
  • Offloads application servers from running virtual service nodes
  • Improves scalability and performance of virtualized data center
  • Separates security policy management from VMware virtualization administration

Cisco Nexus 1000V Switch for Microsoft Hyper-V

  • Includes an advanced NX-OS feature set and associated partner ecosystem
  • Innovative network services architecture supports scalable multitenant environments
  • Offers a consistent operational experience across physical and virtual environments and hypervisors
  • Tightly integrates with Microsoft System Center Virtual Machine Manager 2012 SP1 (SCVMM)

Cisco Application Virtual Switch 

  • Specifically designed for the Application Centric Infrastructure (ACI)
  • Provides high performance and throughput
  • Offers optimal traffic steering
  • Offers a consistent operational model across leading hypervisors

Cisco makes programmable routers more open

Cisco has added support for the open source tools Chef and Puppet in the IOS XR operating system for its NCS programmable routers. The company has also added a software development kit (SDK) and open source management and automation tools to the routers...


Cisco NFV technology targeted at mobile carriers

Introduced at Mobile World Congress, the latest Cisco NFV technology is an all-in-one package of computing, storage and networking infrastructure.


An Internet 100 times as fast

A new network design that avoids the need to convert optical signals into electrical ones could boost capacity while reducing power consumption.
The heart of the Internet is a network of high-capacity optical fibers that spans continents. But while optical signals transmit information much more efficiently than electrical signals, they're harder to control. The routers that direct traffic on the Internet typically convert optical signals to electrical ones for processing, then convert them back for transmission, a process that consumes time and energy.
In recent years, however, a group of MIT researchers led by Vincent Chan, the Joan and Irwin Jacobs Professor of Electrical Engineering and Computer Science, has demonstrated a new way of organizing optical networks that, in most cases, would eliminate this inefficient conversion process. As a result, it could make the Internet 100 or even 1,000 times faster while actually reducing the amount of energy it consumes.
One of the reasons that optical data transmission is so efficient is that different wavelengths of light loaded with different information can travel over the same fiber. But problems arise when optical signals coming from different directions reach a router at the same time. Converting them to electrical signals allows the router to store them in memory until it can get to them. The wait may be a matter of milliseconds, but there's no cost-effective way to hold an optical signal still for even that short a time.
Chan's approach, called "flow switching," solves this problem in a different way. Between locations that exchange large volumes of data — say, Los Angeles and New York City — flow switching would establish a dedicated path across the network. For certain wavelengths of light, routers along that path would accept signals coming in from only one direction and send them off in only one direction. Since there's no possibility of signals arriving from multiple directions, there's never a need to store them in memory.
Reaction time
To some extent, something like this already happens in today's Internet. A large Web company like Facebook or Google, for instance, might maintain huge banks of Web servers at a few different locations in the United States. The servers might exchange so much data that the company will simply lease a particular wavelength of light from one of the telecommunications companies that maintains the country's fiber-optic networks. Across a designated pathway, no other Internet traffic can use that wavelength.
In this case, however, the allotment of bandwidth between the two endpoints is fixed. If for some reason the company's servers aren't exchanging much data, the bandwidth of the dedicated wavelength is being wasted. If the servers are exchanging a lot of data, they might exceed the capacity of the link.
In a flow-switching network, the allotment of bandwidth would change constantly. As traffic between New York and Los Angeles increased, new, dedicated wavelengths would be recruited to handle it; as the traffic tailed off, the wavelengths would be relinquished. Chan and his colleagues have developed network management protocols that can perform these reallocations in a matter of seconds.
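A rough sense of how such reallocation might behave can be had from a toy simulation. The Python sketch below is a hypothetical illustration of the idea, not the MIT group’s actual protocol; capacities and traffic figures are invented. It recruits and relinquishes dedicated wavelengths as measured traffic between two endpoints rises and falls.

    WAVELENGTH_CAPACITY_GBPS = 100   # assumed capacity of one wavelength
    POOL_SIZE = 40                   # wavelengths available on this path

    def wavelengths_needed(traffic_gbps):
        """Smallest number of dedicated wavelengths covering the demand."""
        return -(-traffic_gbps // WAVELENGTH_CAPACITY_GBPS)  # ceiling division

    allocated = 0
    for traffic in [120, 380, 950, 400, 60]:   # measured Gbps over time
        needed = min(wavelengths_needed(traffic), POOL_SIZE)
        if needed > allocated:
            print(f"traffic {traffic} Gbps: recruit {needed - allocated} wavelength(s)")
        elif needed < allocated:
            print(f"traffic {traffic} Gbps: relinquish {allocated - needed} wavelength(s)")
        allocated = needed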
In a series of papers published over a span of 20 years — the latest of which will be presented at the OptoElectronics and Communications Conference in Japan next month — they've also performed mathematical analyses of flow-switched networks' capacity and reported the results of extensive computer simulations. They've even tried out their ideas on a small experimental optical network that runs along the Eastern Seaboard.
Their conclusion is that flow switching can easily increase the data rates of optical networks 100-fold and possibly 1,000-fold, with further improvements of the network management scheme. Their recent work has focused on the power savings that flow switching offers: In most applications of information technology, power can be traded for speed and vice versa, but the researchers are trying to quantify that relationship. Among other things, they've shown that even with a 100-fold increase in data rates, flow switching could still reduce the Internet's power consumption.
Growing appetite
Ori Gerstel, a principal engineer at Cisco Systems, the largest manufacturer of network routing equipment, says that several other techniques for increasing the data rate of optical networks, with names like burst switching and optical packet switching, have been proposed, but that flow switching is "much more practical." The chief obstacle to its adoption, he says, isn't technical but economic. Implementing Chan's scheme would mean replacing existing Internet routers with new ones that don't have to convert optical signals to electrical signals. But, Gerstel says, it's not clear that there's currently enough demand for a faster Internet to warrant that expense. "Flow switching works fairly well for fairly large demand — if you have users who need a lot of bandwidth and want low delay through the network," Gerstel says. "But most customers are not in that niche today."
But Chan points to the explosion of the popularity of both Internet video and high-definition television in recent years. If those two trends converge — if people begin hungering for high-definition video feeds directly to their computers — flow switching may make financial sense. Chan points at the 30-inch computer monitor atop his desk in MIT's Research Lab of Electronics. "High resolution at 120 frames per second," he says: "That's a lot of data."


Designing the hardware

Improving communication between distributed processors and managing shared data are two of the central challenges in creating tomorrow’s chips.
With the multicore chips in today’s personal computers, which might have four or six or even eight cores, splitting computational tasks hasn’t proved a huge problem. If the chip is running four programs — say, a word processor, an e-mail program, a Web browser and a media player — the operating system can assign each its own core. But in future chips, with hundreds or even thousands of cores, a single program will be split among multiple cores, which drastically complicates things. The cores will have to exchange data much more often; but in today’s chips, the connections between cores are much slower than the connections within cores. Cores executing a single program may also have to modify the same chunk of data, but the performance of the program could be radically different depending on which of them gets to it first.

At MIT, a host of researchers are exploring how to reinvent chip architecture from the ground up, to ensure that adding more cores makes chips perform better, not worse.
In August 2010, the U.S. Department of Defense’s Defense Advanced Research Projects Agency announced that it was dividing almost $80 million among four research teams as part of a “ubiquitous high-performance computing” initiative. Three of those teams are led by commercial chip manufacturers. The fourth — the Angstrom project, which includes researchers from Mercury Computer, Freescale, the University of Maryland and Lockheed Martin — is led by MIT’s Computer Science and Artificial Intelligence Lab and will concentrate on the development of multicore systems.
One way to improve communication between cores, which the Angstrom project is investigating, is optical communication — using light instead of electricity to move data. Though prototype chips with optical-communications systems have been built in the lab, they rely on exotic materials that are difficult to integrate into existing chip-manufacturing processes. Two of the Angstrom researchers are investigating optical-communications schemes that use more practical materials.
In early 2010, an MIT research group led by Lionel Kimerling, the Thomas Lord Professor of Materials Science and Engineering, demonstrated the first germanium laser. Germanium is already used in many commercial chips simply to improve the speed of electrical circuits, but it has much better optical properties than silicon. Another Angstrom member, Vladimir Stojanović of the Microsystems Technology Laboratory, is collaborating with several chip manufacturers to build prototype chips with polysilicon waveguides. Waveguides are ridges on the surface of a chip that can direct optical signals; polysilicon is a type of silicon that consists of tiny, distinct crystals of silicon clumped together. Typically used in the transistor element called the gate, polysilicon has been part of the standard chip-manufacturing process for decades.
Other Angstrom researchers, however, are working on improving electrical connections between cores. In today’s multicore chips, adjacent cores typically have two high-capacity connections between them, which carry data in opposite directions, like the lanes of a two-lane highway. But in future chips, cores’ bandwidth requirements could fluctuate wildly. A core performing a calculation that requires information updates from dozens of other cores would need much more receiving capacity than sending. But once it completes its calculation, it might have to broadcast the results, so its requirements would invert. Srini Devadas, a professor in the Computer Science and Artificial Intelligence Lab, is researching chip designs in which cores are connected by eight or maybe 16 lower-capacity connections, each of which can carry data in either direction. As the bandwidth requirements of the chip change, so can the number of connections carrying data in each direction. Devadas has demonstrated that small circuits connected to the cores can calculate the allotment of bandwidth and switch the direction of the connections in a single clock cycle.
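Devadas’s scheme lends itself to a simple sketch. The Python fragment below is my own illustration of the idea as described, with invented numbers, not the actual allocation circuit: a fixed set of bidirectional links is split between the send and receive directions in proportion to a core’s current demand.

    def allot_links(total_links, send_demand, recv_demand):
        """Split `total_links` between directions in proportion to demand,
        keeping at least one link in each direction."""
        total = send_demand + recv_demand
        send = round(total_links * send_demand / total) if total else total_links // 2
        send = max(1, min(total_links - 1, send))
        return send, total_links - send

    # A core gathering updates from many neighbors needs mostly receive capacity...
    print(allot_links(8, send_demand=1, recv_demand=15))   # (1, 7)
    # ...then inverts its allotment when it broadcasts the result.
    print(allot_links(8, send_demand=15, recv_demand=1))   # (7, 1)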
In theory, a computer chip has two main components: a processor and a memory circuit. The processor retrieves data from the memory, performs an operation on it, then returns it to memory. But in practice, chips have for decades featured an additional, smaller memory circuit called a cache, which is closer to the processor, can be accessed much more rapidly than main memory, and stores frequently used data. The processor might perform dozens or hundreds of operations on a chunk of data in the cache before relinquishing it to memory.
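In software terms, a cache behaves like a small, fast map that serves repeated reads and evicts its least-recently-used entry when full. The Python sketch below is a generic least-recently-used illustration of that pattern, not a model of any particular chip’s cache.

    from collections import OrderedDict

    class Cache:
        """Tiny LRU cache: hits are served from fast local storage; misses
        fetch from 'main memory' and evict the least-recently-used line."""
        def __init__(self, capacity, main_memory):
            self.capacity = capacity
            self.memory = main_memory
            self.lines = OrderedDict()

        def read(self, addr):
            if addr in self.lines:                # cache hit
                self.lines.move_to_end(addr)
                return self.lines[addr]
            value = self.memory[addr]             # cache miss: slow fetch
            self.lines[addr] = value
            if len(self.lines) > self.capacity:   # evict least-recently-used
                self.lines.popitem(last=False)
            return value

    memory = {addr: addr * 2 for addr in range(1024)}
    cache = Cache(capacity=2, main_memory=memory)
    cache.read(1); cache.read(2); cache.read(1); cache.read(3)  # evicts addr 2
    print(2 in cache.lines)  # False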

Explained: Ad hoc networks

Decentralized wireless networks could have applications in distributed sensing and robotics and maybe even personal communications.


In the Internet, the responsibility for directing data traffic lies with special-purpose devices called routers. Internet service providers monitor the flow of traffic across their networks and, if they spot congestion, revise the routers’ instructions accordingly. With the cell network, two people a block apart could be having a phone conversation, but they aren’t directly exchanging data. Rather, they’re sending data to a cell tower that determines what to do with it — as it does for thousands of other cell-phone users in the vicinity. “If everything could be run by some node that’s on the Internet, that’s maybe a solved problem, kind of boring,” says Nancy Lynch, an MIT professor of electrical engineering and computer science. “The base station just computes everything and tells everybody what to do.”
In an ad hoc network, there are no base stations, and there are no supervisors monitoring network performance as a whole. A sensor dropped on the side of a volcano powers on and tries to determine how many other active sensors are within communication range. Together, the sensors then piece together whatever information they need to perform their collective task.
Another common feature of ad hoc networks is that they’re constantly changing. The wind blows — or the lava flows — and suddenly some of the volcano sensors are farther away from their neighbors, with lower-bandwidth data connections than they had before; or perhaps some of the connections have been broken entirely, while new ones have been formed; or perhaps some of the sensors have been destroyed outright. The problem of changing network topology is even more acute for, say, robots crawling all over an underwater oil rig looking for leaks, or sensor-laden cars exchanging data about traffic conditions as they weave among each other on a busy state highway.
If the devices in an ad hoc network had unlimited power, it would be relatively easy for them to accommodate changing topologies: any one device could send as much data as it needed to any other, regardless of the distance separating them. But for many of the envisioned applications of ad hoc networking, power is at a premium. The oil-rig robots might need to operate for hours between battery charges, the volcano sensors for years. The need to maximize the efficiency of data exchange — in order to minimize energy consumption — makes designing communications protocols for ad hoc networks even more challenging.
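The first step — each node discovering which peers are in range, with no base station to ask — can be sketched in a few lines. The Python fragment below uses hypothetical positions and a made-up radio range purely to illustrate the idea; real protocols must also contend with radio interference and power budgets.

    import math

    # Hypothetical sensor positions (meters) and a shared radio range.
    nodes = {"a": (0, 0), "b": (40, 10), "c": (90, 5), "d": (95, 60)}
    RADIO_RANGE = 60.0

    def neighbors(node):
        """Peers within communication range of `node`."""
        x, y = nodes[node]
        return [other for other, (ox, oy) in nodes.items()
                if other != node and math.hypot(ox - x, oy - y) <= RADIO_RANGE]

    # Each node builds its own local view of the network.
    topology = {n: neighbors(n) for n in nodes}
    print(topology)  # {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}

    # If the wind moves a sensor, affected nodes must rediscover their
    # neighbor sets -- the topology is never fixed.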
As handheld devices become more and more powerful, the prospect that they could arrange themselves into ad hoc networks also becomes more intriguing. MIT professor of electrical engineering Muriel Médard has investigated whether ad hoc networking could abet the dissemination of information among large localized groups. Médard imagines, for instance, that the cell phones of fans at a sporting event could organize into ad hoc networks to enable very efficient distribution of video data, so that thousands of people could simultaneously watch high-quality replays of entirely different plays without overburdening the local data networks. Lynch says that her group had toyed with the idea of a “HikerNet,” which would allow hikers without cell service to exchange information about trail conditions, and that other researchers have investigated multiplayer games that would use direct connections between cell phones. She also points to the failure of the cellular network in New Orleans after Hurricane Katrina as an instance in which ad hoc networking could have been useful.

Simple security for wireless

Researchers demonstrate the first wireless security scheme that can protect against “man-in-the-middle” attacks — but doesn’t require a password.
In early August, at the Def Con conference — a major annual gathering of computer hackers — someone apparently hacked into many of the attendees’ cell phones, in what may have been the first successful breach of a 4G cellular network. If early reports are correct, the incident was a man-in-the-middle (MITM) attack, so called because the attacker interposes himself between two other wireless devices.
Coincidentally, a week later, at the 20th Usenix Security Symposium, MIT researchers presented the first security scheme that can automatically create connections between wireless devices and still defend against MITM attacks. Previously, thwarting the attacks required password protection or some additional communication mechanism, such as an infrared transmitter.
Showcasing novel ways to breach security is something of a tradition at Def Con. In previous years, MITM attacks had been launched against attendees’ Wi-Fi devices; indeed, the MIT researchers demonstrated the effectiveness of their new scheme on a Wi-Fi network. But in principle, MITM attacks can target any type of wireless connection, not only between devices (phones or laptops) and base stations (cell towers or Wi-Fi routers), but also between a phone and a wireless headset, a medical implant and a wrist-mounted monitor, or a computer and a wireless speaker system.

New router enhances the precision of woodworking

Handheld device precisely follows a digital plan with minimal guidance from a user.



Anyone who has tried to build a piece of furniture from scratch knows the frustration of painstakingly cutting pieces of wood, only to discover that they won’t fit together because the cutting was not quite accurate enough.
That’s exactly what happened to Alec Rivers, a PhD student in the Department of Electrical Engineering and Computer Science (EECS), when he attempted to build a simple picture frame using woodworking equipment he had inherited from his grandfather. Despite measuring and aligning his tools as best he could by hand, Rivers found that he could not produce shapes with enough precision to make them all fit together. “I was getting incredibly frustrated, because just as with any home project I would cut things out and they would look about right, but none of the pieces would line up,” Rivers says.
But rather than simply throwing the pieces of wood into the trash and settling for a store-bought picture frame, Rivers decided there had to be a better way. So he and colleagues Frédo Durand, an EECS associate professor and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Ilan Moyer, a graduate student in the Department of Mechanical Engineering, began developing a new kind of woodworking router — a drill-like cutting tool — that could automatically cut out accurate shapes from a piece of material by following a digital design. The result is a handheld device that can adjust its position to precisely follow a digital plan when the user moves the router roughly around the shape to be cut.
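The correction loop at the heart of such a device can be sketched abstractly: track where the tool actually is, find the nearest point on the planned outline, and nudge the cutter by the difference. The Python fragment below is a simplified illustration of that idea with invented geometry, not the team’s actual software.

    import math

    def nearest_point_on_segment(p, a, b):
        """Closest point to p on the segment from a to b."""
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))
        return (ax + t * dx, ay + t * dy)

    def cutter_correction(tool_pos, plan):
        """Small offset that moves the cutter from the roughly positioned
        tool onto the planned outline."""
        candidates = [nearest_point_on_segment(tool_pos, a, b)
                      for a, b in zip(plan, plan[1:])]
        target = min(candidates, key=lambda q: math.dist(q, tool_pos))
        return target[0] - tool_pos[0], target[1] - tool_pos[1]

    # A square picture-frame outline; the user has drifted 3 mm above the
    # bottom edge, so the actuator pulls the cutter back down onto it.
    plan = [(0, 0), (100, 0), (100, 100), (0, 100), (0, 0)]
    print(cutter_correction((50.0, 3.0), plan))  # ~(0.0, -3.0)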


Wireless tech means safer drones, smarter homes and password-free WiFi

System from MIT’s Computer Science and Artificial Intelligence Lab enables a single WiFi access point to locate users within tens of centimeters.

A new wireless technology developed by MIT researchers could mean safer drones, smarter homes, and password-free WiFi. The team developed a system that enables a single WiFi access point to locate users to within tens of centimeters, without any external sensors. They demonstrated the system in an apartment and a cafe, while also showing off a drone that maintains a safe distance from its user with a margin of error of about 4 centimeters.