Saturday 27 December 2014

MOBILE WiMAX

Mobile WiMAX is a broadband wireless solution that enables convergence of mobile and fixed broadband networks through a common wide-area broadband radio access technology and a flexible network architecture. The Mobile WiMAX air interface adopts Orthogonal Frequency Division Multiple Access (OFDMA) for improved multipath performance in non-line-of-sight environments. Scalable OFDMA (SOFDMA) is introduced in the IEEE 802.16e amendment to support scalable channel bandwidths from 1.25 to 20 MHz.
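To make the scalability idea concrete, here is a small Python sketch showing how SOFDMA grows the FFT size with the channel bandwidth so that the subcarrier spacing stays roughly constant; the 28/25 sampling factor and the bandwidth-to-FFT-size pairs are commonly quoted Mobile WiMAX figures, used here purely as illustrative assumptions.

# Illustrative SOFDMA scaling: the FFT size grows with the channel bandwidth
# so that the subcarrier spacing stays (roughly) constant.
# The sampling factor and profile values below are assumptions for this sketch.

SAMPLING_FACTOR = 28 / 25        # oversampling factor assumed for these bandwidths
PROFILE = {                      # channel bandwidth (MHz) -> FFT size
    1.25: 128,
    5:    512,
    10:   1024,
    20:   2048,
}

for bw_mhz, n_fft in PROFILE.items():
    fs_hz = SAMPLING_FACTOR * bw_mhz * 1e6     # sampling frequency (Hz)
    spacing_khz = fs_hz / n_fft / 1e3          # subcarrier spacing (kHz)
    print(f"{bw_mhz:>5} MHz -> FFT size {n_fft:>4}, subcarrier spacing ~ {spacing_khz:.2f} kHz")

Every row comes out at roughly the same spacing (about 10.94 kHz), which is the property that makes the "scalable" in SOFDMA possible.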
The Mobile Technical Group (MTG) in the WiMAX Forum is developing the Mobile WiMAX system profiles that define the mandatory and optional features of the IEEE standard needed to build a Mobile WiMAX-compliant air interface that can be certified by the WiMAX Forum. The Mobile WiMAX system profile enables mobile systems to be configured from a common base feature set, ensuring baseline functionality for terminals and base stations that are fully interoperable. Some elements of the base station profiles are specified as optional to provide additional flexibility for deployment scenarios that may require different configurations, either capacity-optimized or coverage-optimized.
Introduction to Mobile WiMAX
Release-1 Mobile WiMAX profiles will cover 5, 7, 8.75, and 10 MHz channel bandwidths for licensed worldwide spectrum allocations in the 2.3 GHz, 2.5 GHz, and 3.5 GHz frequency bands.
Mobile WiMAX systems offer scalability in both radio access technology and network architecture, thus providing a great deal of flexibility in network deployment options and service offerings. Some of the salient features supported by Mobile WiMAX are:
•  High Data Rates. The inclusion of MIMO (Multiple Input Multiple Output) antenna techniques, along with flexible sub-channelization schemes and advanced coding and modulation, enables Mobile WiMAX technology to support peak DL data rates up to 63 Mbps per sector and peak UL data rates up to 28 Mbps per sector in a 10 MHz channel (a rough back-of-the-envelope calculation follows this list).
•  Quality of Service (QoS). The fundamental premise of the IEEE 802.16 MAC architecture is QoS. It defines service flows that can map to DiffServ code points to enable end-to-end IP-based QoS (an illustrative mapping is sketched after this list). Additionally, sub-channelization schemes provide a flexible mechanism for optimal scheduling of space, frequency, and time resources over the air interface on a frame-by-frame basis.
•  Scalability. Despite an increasingly globalized economy, spectrum resources for wireless broadband worldwide are still quite disparate in their allocations. Mobile WiMAX technology is therefore designed to scale across channelizations from 1.25 to 20 MHz to comply with varied worldwide requirements as efforts proceed to achieve spectrum harmonization in the longer term. This also allows diverse economies to realize the multifaceted benefits of Mobile WiMAX technology for their specific geographic needs, such as providing affordable Internet access in rural settings versus enhancing the capacity of mobile broadband access in metro and suburban areas.
•  Security. Support for a diverse set of user credentials exists, including SIM/USIM cards, smart cards, digital certificates, and username/password schemes.
•  Mobility. Mobile WiMAX supports optimized handover schemes with latencies of less than 50 milliseconds to ensure that real-time applications such as VoIP perform without service degradation. Flexible key management schemes ensure that security is maintained during handover.
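As a rough check on the headline numbers in the High Data Rates bullet, the back-of-the-envelope Python sketch below estimates the downlink peak rate for a 10 MHz channel with 2x2 spatial multiplexing and 64-QAM rate-5/6 coding. The subcarrier count, cyclic prefix, and MIMO assumptions are illustrative, and the result deliberately ignores framing and TDD overhead, which is why it lands somewhat above the quoted 63 Mbps.

# Rough peak-rate estimate for a 10 MHz Mobile WiMAX downlink (illustrative assumptions).
N_FFT            = 1024          # FFT size for a 10 MHz channel
FS_HZ            = 11.2e6        # sampling frequency (28/25 oversampling assumed)
CP_FRACTION      = 1 / 8         # cyclic prefix length as a fraction of the symbol
DATA_SUBCARRIERS = 720           # downlink data subcarriers (assumed)
BITS_PER_SYMBOL  = 6             # 64-QAM
CODE_RATE        = 5 / 6         # coding rate (assumed)
MIMO_STREAMS     = 2             # 2x2 spatial multiplexing

symbol_time = (N_FFT / FS_HZ) * (1 + CP_FRACTION)    # OFDM symbol duration (s)
bits_per_ofdm_symbol = DATA_SUBCARRIERS * BITS_PER_SYMBOL * CODE_RATE * MIMO_STREAMS
peak_rate_bps = bits_per_ofdm_symbol / symbol_time

print(f"OFDM symbol time: {symbol_time * 1e6:.1f} us")
print(f"Peak PHY rate (no framing/TDD overhead): {peak_rate_bps / 1e6:.1f} Mbps")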
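The QoS bullet mentions mapping service flows to DiffServ code points; the sketch below shows one illustrative mapping from the 802.16e scheduling service types to standard DSCP values. The service types come from the standard, but the specific DSCP choices here are an example, not a normative mapping.

# Illustrative mapping of 802.16e scheduling service types to DiffServ code points.
# The DSCP choices below are an example mapping, not mandated by the standard.
SERVICE_FLOW_TO_DSCP = {
    "UGS":   46,   # Unsolicited Grant Service (e.g. VoIP without silence suppression) -> EF
    "ertPS": 46,   # Extended real-time Polling Service (e.g. VoIP with silence suppression) -> EF
    "rtPS":  34,   # real-time Polling Service (e.g. streaming video) -> AF41
    "nrtPS": 18,   # non-real-time Polling Service (e.g. FTP) -> AF21
    "BE":     0,   # Best Effort (e.g. web browsing) -> default
}

def dscp_for_flow(service_type):
    """Return the DSCP value to stamp on IP packets of a given service flow."""
    return SERVICE_FLOW_TO_DSCP.get(service_type, 0)

print(dscp_for_flow("rtPS"))   # 34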
Physical Layer Description
WiMAX must be able to provide a reliable service over long distances to customers using indoor terminals or PC cards (like today's WLAN cards). These requirements, together with transmit power limits imposed to comply with health requirements, constrain the link budget. Sub-channelization in the uplink and smart antennas at the base station have to overcome these constraints. The WiMAX system relies on a new radio physical (PHY) layer and an appropriate MAC (Medium Access Control) layer to support all demands driven by the target applications.
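To give a feel for what "limiting the link budget" means, here is a toy downlink budget in Python; every value (powers, gains, losses, sensitivity) is an assumption chosen only to show the arithmetic, not a measured or standardized figure.

# Toy downlink link-budget check (all values are illustrative assumptions).
tx_power_dbm            = 43.0    # base-station transmit power
tx_antenna_gain_db      = 15.0
rx_antenna_gain_db      = 0.0     # indoor terminal / PC-card style device
path_loss_db            = 135.0   # NLOS path loss at the cell edge (assumed)
building_penetration_db = 10.0
rx_sensitivity_dbm      = -90.0   # required receive level for the chosen MCS (assumed)

rx_level_dbm = (tx_power_dbm + tx_antenna_gain_db + rx_antenna_gain_db
                - path_loss_db - building_penetration_db)
margin_db = rx_level_dbm - rx_sensitivity_dbm

print(f"Received level: {rx_level_dbm:.1f} dBm, margin: {margin_db:.1f} dB")
# A negative margin would mean the link does not close; uplink sub-channelization
# (concentrating power on fewer subcarriers) and smart antennas add gain back.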
The PHY layer modulation is based on OFDMA, in combination with a centralized MAC layer for optimized resource allocation and support of QoS for different types of services (VoIP, real-time and non-real-time services, best effort). The OFDMA PHY layer is well adapted to the NLOS propagation environment in the 2 - 11 GHz frequency range.
It is inherently robust when it comes to handling the significant delay spread caused by typical NLOS reflections. Together with adaptive modulation, which is applied to each subscriber individually according to the radio channel capability, OFDMA can provide a high spectral efficiency of about 3 - 4 bit/s/Hz. However, in contrast to single-carrier modulation, the OFDMA signal has an increased peak-to-average power ratio and tighter frequency-accuracy requirements. Therefore, the selection of appropriate power amplifiers and frequency recovery concepts is crucial. WiMAX provides flexibility in terms of channelization, carrier frequency, and duplex mode (TDD and FDD) to meet a variety of requirements for available spectrum resources and targeted services.
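The per-subscriber adaptive modulation mentioned above can be illustrated with a simple selection rule: pick the highest-order modulation and coding scheme whose SNR threshold the link currently meets. The thresholds and table entries below are invented for illustration; real deployments tune them per vendor and per cell.

# Toy adaptive modulation and coding (AMC) selection.
# Thresholds are illustrative only; real systems tune them per deployment.
MCS_TABLE = [
    # (min SNR in dB, modulation, code rate, coded bits per subcarrier)
    (22.0, "64-QAM", 5 / 6, 5.0),
    (18.0, "64-QAM", 3 / 4, 4.5),
    (14.0, "16-QAM", 3 / 4, 3.0),
    (10.0, "QPSK",   3 / 4, 1.5),
    ( 6.0, "QPSK",   1 / 2, 1.0),
]

def select_mcs(snr_db):
    """Return the highest-rate entry whose SNR threshold the subscriber meets."""
    for min_snr, modulation, rate, bits in MCS_TABLE:
        if snr_db >= min_snr:
            return modulation, rate, bits
    return "BPSK", 1 / 2, 0.5      # fall back to the most robust scheme

print(select_mcs(16.5))            # ('16-QAM', 0.75, 3.0)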

Tuesday 23 December 2014



XBOX 360 System


Virtual reality (VR) is the creation of a highly interactive, computer-based multimedia environment in which the user becomes a participant with the computer in what is known as a “synthetic environment.” Virtual reality uses computers to immerse one inside a three-dimensional program rather than simulate it in two dimensions on a monitor. Utilizing the concept of virtual reality, the computer engineer integrates video technology, high-resolution image processing, and sensor technology into the data processor so that a person can enter into and react with three-dimensional spaces generated by computer graphics. The goal of computer engineers is to create an artificial world that feels genuine and responds to every movement one makes, just as the real world does. Naming discrepancies aside, the concept remains the same: using computer technology to create a simulated, three-dimensional world that a user can manipulate and explore while feeling as if he were in that world. Scientists, theorists, and engineers have designed dozens of devices and applications to achieve this goal.
Opinions differ on what exactly constitutes a true VR experience, but in general it should include:
•  Three-dimensional images that appear to be life-sized from the perspective of the user
•  The ability to track a user's motions, particularly his head and eye movements, and correspondingly adjust the images on the user's display to reflect the change in perspective

Virtual realities are a set of emerging electronic technologies with applications in a wide range of fields, including education, training, athletics, industrial design, architecture and landscape architecture, urban planning, space exploration, medicine and rehabilitation, entertainment, and model building and research in many fields of science.
Virtual reality (VR) can be defined as a class of computer-controlled multisensory communication technologies that allow more intuitive interactions with data and involve human senses in new ways. Virtual reality can also be defined as an environment created by the computer in which the user feels present. This technology was devised to enable people to deal with information more easily. Virtual Reality provides a different way to see and experience information, one that is dynamic and immediate. It is also a tool for model building and problem solving. Virtual Reality is potentially a tool for experiential learning.
The virtual world is interactive; it responds to the user’s actions. Virtual Reality is defined as a highly interactive, computer-based multimedia environment in which the user becomes a participant in a computer-generated world. It is the simulation of a real or imagined environment that can be experienced visually in the three dimensions of width, height, and depth, and that may additionally provide an interactive experience visually in full real-time motion with sound and possibly with tactile and other forms of feedback. VR incorporates 3D technologies that give a real-life illusion, creating a simulation of a real-life situation. The emergence of augmented reality technology in the form of interactive games has produced a valuable tool for education. One of the emerging strengths of VR is that it enables objects and their behaviour to be more accessible and understandable to the human user.

KINECT
Microsoft's Xbox 360 Kinect has revolutionized gaming in that you are able to use your entire body as the controller. Conventional controllers are not required because the Kinect sensor picks up on natural body movements as inputs for the game. Three major components play a part in making the Kinect function as it does: the movement tracking, the speech recognition, and the motorized tilt of the sensor itself. The name “Kinect” is a combination of two words: kinetic and connect. The Kinect was first announced on June 1, 2009 at E3 (Electronic Entertainment Expo) as “Project Natal”; the name stems from Natal, Brazil, the hometown of one of the key project leaders. The software that makes Kinect function was by and large developed by Rare, a Microsoft subsidiary.
The Kinect Sensor
A company based in Israel known as PrimeSense developed the 3D sensing technology, and Microsoft purchased the rights to use the technology for its gaming system. In the first 60 days on the market, Microsoft shipped 8 million units to retailers around the globe. The Bill of Materials cost for the Kinect is estimated to be $56, which does not include research and development or marketing costs, merely the cost of the hardware.
Sensing Technology
Behind the scenes of PrimeSense's 3D sensing technology there are three main parts that make it work: an infrared laser projector, an infrared camera, and the RGB color camera. The depth projector simply floods the room with IR laser beams, creating a depth field that can be seen only by the IR camera. Due to infrared's insensitivity to ambient light, the Kinect can be played in any lighting conditions. However, because the face recognition system depends on the RGB camera along with the depth sensor, light is needed for the Kinect to recognize a calibrated player accurately. The following image shows a generalized concept of how the Kinect's depth sensing works.
How the sensor sees in 3D
In more detail, the IR depth sensor is a monochrome complementary metal-oxide-semiconductor (CMOS) camera. This means that it sees only in black and white, which is all that’s needed to create a "depth map" of any room. The IR camera used in the Kinect is VGA resolution (640x480), refreshing at a rate of 30 Hz. Each camera pixel has a photodiode connected to it, which receives the IR light beams being bounced off objects in the room. The corresponding voltage level of each photodiode depends on how far the object is from the camera: an object that is closer to the camera appears brighter than an object that is farther away, so the voltage produced by each photodiode varies with the distance of the object. Each voltage is then amplified and sent to an image processor for further processing. With this process being updated 30 times per second, the Kinect has no problem detecting full-body human movements very accurately, provided the player is within the recommended distance.
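To make the idea of a per-pixel depth map concrete, the short Python sketch below walks a hypothetical 640x480 frame of raw readings and converts each one to an approximate distance. The linear raw-to-metres mapping and the 10-bit reading range are placeholder assumptions; the real PrimeSense processing is proprietary and considerably more involved.

# Hypothetical conversion of a 640x480 frame of raw IR readings into a depth map.
# The linear raw-to-metres mapping below is a placeholder assumption.
WIDTH, HEIGHT = 640, 480            # VGA resolution, refreshed 30 times per second

def raw_to_metres(raw):
    """Placeholder mapping: brighter (higher) readings mean a closer object."""
    return 4.0 - 3.5 * (raw / 1023)  # assume 10-bit readings and a 0.5 - 4 m range

def depth_map(frame):
    """frame: HEIGHT rows, each a list of WIDTH raw readings."""
    return [[raw_to_metres(raw) for raw in row] for row in frame]

# One synthetic frame: a distant background with one bright 'close' pixel in the centre.
frame = [[100] * WIDTH for _ in range(HEIGHT)]
frame[240][320] = 900
depths = depth_map(frame)
print(f"centre pixel ~ {depths[240][320]:.2f} m, corner ~ {depths[0][0]:.2f} m")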
Infrared Beams in the Room
Although the hardware is the basis for creating an image that the processor can interpret, the software behind the Kinect is what makes everything possible. Using statistics, probability, and hours of testing different natural human movements, the programmers developed software to track the movements of 20 main joints on a human body. This software is how the Kinect can differentiate a player from, say, a dog that happens to run in front of the IR projector, or from other players that are playing a game together. The Kinect has the capability of tracking up to six different players at a time, but as of now the software can only track up to two active players.
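Here is a minimal sketch of the kind of data the skeletal tracker produces: each tracked player carries 20 named joints with 3-D positions, and only two players at a time are promoted to fully "active" tracking. The joint names and the pick-the-closest selection rule are assumptions made for illustration.

from dataclasses import dataclass, field

# 20 joints of the kind the Kinect skeletal pipeline tracks (names assumed here).
JOINT_NAMES = [
    "head", "shoulder_center", "spine", "hip_center",
    "shoulder_left", "elbow_left", "wrist_left", "hand_left",
    "shoulder_right", "elbow_right", "wrist_right", "hand_right",
    "hip_left", "knee_left", "ankle_left", "foot_left",
    "hip_right", "knee_right", "ankle_right", "foot_right",
]

@dataclass
class Skeleton:
    player_id: int
    joints: dict = field(default_factory=dict)   # joint name -> (x, y, z) in metres

def active_players(skeletons, max_active=2):
    """Pick the players to track fully; here simply the closest ones (assumed rule)."""
    by_distance = sorted(skeletons, key=lambda s: s.joints["hip_center"][2])
    return by_distance[:max_active]

players = [Skeleton(i, {"hip_center": (0.0, 0.0, 1.5 + i)}) for i in range(6)]
print([p.player_id for p in active_players(players)])   # [0, 1]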
One of the main features of the Kinect is that it can recognize you individually. When calibrating yourself with the Kinect, the depth sensor and the color camera work together to develop an accurate digital image of how your face looks. The 8-bit color camera, also VGA resolution, detects and stores the skin tone of the person it is calibrating. The depth sensor helps make the facial recognition more accurate by creating a 3-D shape of your face. Storing these images of your face and your skin tone is how the Kinect can recognize you when you step in front of the projected IR beams. As mentioned earlier, for the facial recognition to work accurately there needs to be a certain amount of light. Another added feature of the color camera is that it takes videos or snapshots at key moments during game play so you can see how you look while playing.
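As a loose illustration of how a stored calibration profile might be matched, the sketch below compares a newly captured average skin tone and a coarse face-depth signature against saved profiles. The features, names, and threshold are all made up for illustration; this is not Microsoft's actual recognition algorithm.

import math

# Hypothetical player profile: an average skin tone (RGB) plus a coarse
# "face depth signature" (a few depth samples across the face).
profiles = {
    "Alice": {"skin_rgb": (205, 160, 140), "face_depth": [0.02, 0.05, 0.01]},
    "Bob":   {"skin_rgb": (150, 110,  95), "face_depth": [0.03, 0.06, 0.02]},
}

def distance(capture, profile):
    """Euclidean distance over the concatenated features (illustrative metric)."""
    a = list(capture["skin_rgb"]) + capture["face_depth"]
    b = list(profile["skin_rgb"]) + profile["face_depth"]
    return math.dist(a, b)

def recognise(capture, threshold=30.0):
    """Return the best-matching calibrated player, or None if nothing is close enough."""
    name, best = min(((n, distance(capture, p)) for n, p in profiles.items()),
                     key=lambda item: item[1])
    return name if best < threshold else None

print(recognise({"skin_rgb": (200, 158, 142), "face_depth": [0.02, 0.05, 0.01]}))  # Alice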