Monday, 22 September 2014


E Ink ( Electrophoretic Ink ) Technology

               E Ink is the creator of electronic ink. Its displays appear in the Amazon Kindle, Barnes & Noble Nook, Novatel MiFi and many other devices. E Ink calls these displays electronic paper, or ePaper. ePaper takes the best elements of the printed page and merges them with electronics to create a new generation of paper: low power, instantly updateable and, just like a favorite book, readable even in the brightest sunlight.
                                     
 

Thursday, 18 September 2014

AMOLED DISPLAY

AMOLED (active-matrix organic light-emitting diode) is a display technology for use in mobile devices and televisions. OLED describes a specific type of thin-film-display technology in which organic components form the electroluminescent material, and active matrix refers to the technology behind the addressing of pixels.
As of 2012, AMOLED technology is used in mobile phones, media players and digital cameras, and continues to make progress toward low-power, low-cost and large-size (for example, 40-inch) applications.



An AMOLED display consists of an active matrix of OLED pixels that generate light (luminescence) upon electrical activation, deposited or integrated onto a thin-film-transistor (TFT) array that functions as a series of switches to control the current flowing to each individual pixel.

Typically, this continuous current flow is controlled by at least two TFTs at each pixel (to trigger the luminescence): one TFT starts and stops the charging of a storage capacitor, and the second provides a voltage source at the level needed to create a constant current to the pixel. This eliminates the need for the very high currents required for passive-matrix OLED operation.

TFT backplane technology is crucial in the fabrication of AMOLED displays. The two primary TFT backplane technologies used in AMOLEDs today are polycrystalline silicon (poly-Si) and amorphous silicon (a-Si). Both offer the potential to fabricate active-matrix backplanes at low temperatures (below 150 °C) directly onto flexible plastic substrates, enabling flexible AMOLED displays.

Wednesday, 17 September 2014


Zettabyte File System




 

ZFS is a 128-bit file system developed by Sun Microsystems and first released in 2005 as part of OpenSolaris.

The maximum size of a single ZFS volume is 2 to the power of 64 bytes, or 16 exbibytes (about 18.4 exabytes). The maximum number of files in a directory is 2 to the power of 48, or 281,474,976,710,656 files. A filename can be at most 255 characters long.
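The limits above are just powers of two, so they are easy to verify with a little arithmetic (illustrative only; the figures come from the ZFS on-disk format):

```python
# Quick arithmetic check of the ZFS limits quoted above.
max_volume_bytes = 2 ** 64        # maximum volume size in bytes
max_files_per_dir = 2 ** 48       # maximum files per directory

print(max_volume_bytes)               # 18446744073709551616 bytes
print(max_volume_bytes // 2 ** 60)    # 16 exbibytes (EiB)
print(max_files_per_dir)              # 281474976710656
```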

ZFS supports deduplication.
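Deduplication in ZFS works at the block level: each block is hashed, and a block whose hash has been seen before is stored only once, with a reference count. A minimal sketch of the idea (a toy model, not the real ZFS implementation):

```python
import hashlib

class DedupStore:
    """Toy block store: identical blocks are stored only once, keyed by hash."""
    def __init__(self):
        self.blocks = {}      # hash -> block data
        self.refcounts = {}   # hash -> number of references

    def write(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key not in self.blocks:
            self.blocks[key] = data              # first copy: actually stored
        self.refcounts[key] = self.refcounts.get(key, 0) + 1
        return key                               # caller keeps the hash as a pointer

    def read(self, key: str) -> bytes:
        return self.blocks[key]

store = DedupStore()
a = store.write(b"same block")
b = store.write(b"same block")   # duplicate: no new storage consumed
assert a == b and len(store.blocks) == 1
```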

RAID is supported by ZFS. ZFS supports RAID-1 mirroring, with the extension that more than two disks can be mirrored. Its other RAID levels are not the standard RAID types but RAID-Z; specifically, ZFS supports RAID-Z levels 1, 2 and 3. RAID-Z1 uses single parity, so one disk can fail without data loss (blocks too small to stripe are stored as mirrored copies instead of parity). RAID-Z2 uses double parity across disks, allowing a maximum of two disks to fail while the data on the volume remains accessible. RAID-Z3 uses triple parity, allowing a maximum of three disks to fail before the volume becomes inaccessible. When a large disk fails in a RAID system, reconstructing its data from parity takes a long time; with disks in the high-terabyte range, reconstruction can take weeks. A higher RAID-Z level keeps the pool responsive during repair, since the extra parity allows disk access and data repair at the same time.
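Parity-based reconstruction rests on XOR: the parity block is the XOR of the data blocks, so any one missing block can be recomputed from the survivors. A simplified single-parity (RAID-Z1-style) sketch, ignoring ZFS's variable stripe widths:

```python
from functools import reduce

def parity(blocks):
    """XOR equal-sized blocks together byte by byte to produce a parity block."""
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*blocks))

data = [b"disk0...", b"disk1...", b"disk2..."]   # equal-sized data blocks
p = parity(data)

# Simulate losing disk1 and rebuilding it from the survivors plus parity:
# d0 ^ d2 ^ (d0 ^ d1 ^ d2) == d1.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```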

For increased size, ZFS supports resizing, and the file system can span multiple block devices. Multiple drives are joined in a ZFS storage pool (zpool), which is built from virtual devices (vdevs); a vdev can be a single disk or a group of disks such as a mirror or RAID-Z set. If a top-level vdev fails, the whole zpool goes offline, so vdevs are usually built with mirroring or RAID-Z to provide redundancy and keep the pool online after a failure. ZFS supports up to 18,446,744,073,709,551,616 (2 to the power of 64) vdevs in a zpool, and the same number of zpools on a system.

When a volume uses striping (RAID 0), it can be grown by resizing: when a new drive is added, the stripes are dynamically resized to include the new drive in the RAID set.

Snapshots provide a point-in-time image of the file system that can readily be used for backups without requiring files to be locked. Without snapshots, a file that is open and being modified at backup time may be skipped. For writable copies, clones can be used.

A zpool supports quotas to limit the space available to a user or group. Without quotas, certain users or groups could fill the drives to capacity.

To compensate for slow disk access, ZFS uses a caching algorithm called the ARC (Adaptive Replacement Cache). Data that is accessed often is kept in RAM, which is far faster than a hard disk; data that is no longer accessed as much is dropped from the cache. On a system with little RAM, little or no data is cached and most reads go to disk. ZFS works on low-memory systems, but it works better with larger amounts of RAM.
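The real ARC adaptively balances recently-used and frequently-used data, but the core idea of keeping hot blocks in RAM and evicting cold ones can be sketched with a plain LRU cache (a deliberate simplification):

```python
from collections import OrderedDict

class SimpleCache:
    """Tiny LRU cache standing in for the idea behind ZFS's ARC
    (the real ARC also tracks access frequency, not just recency)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)       # mark as recently used
            return self.data[key]
        return None                          # cache miss: would read from disk

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)    # evict the least recently used block

cache = SimpleCache(capacity=2)
cache.put("blk1", b"hot data")
cache.put("blk2", b"warm data")
cache.get("blk1")                 # touch blk1 so it stays hot
cache.put("blk3", b"new data")    # evicts blk2, the least recently used
assert cache.get("blk2") is None and cache.get("blk1") == b"hot data"
```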

Every block pointer carries a 256-bit checksum to provide data integrity. Writes use copy-on-write (COW): data is written to new blocks before the pointer is changed to the new block location, and only then are the old blocks marked as unused. Live blocks are never overwritten in place.
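The combination of checksummed pointers and copy-on-write can be sketched as follows. This is a toy model, not the real ZFS on-disk format; SHA-256 stands in for the 256-bit checksum:

```python
import hashlib

storage = {}     # block address -> data
next_addr = 0

def alloc(data: bytes):
    """Write data to a fresh block; return an (address, checksum) pointer."""
    global next_addr
    addr = next_addr
    next_addr += 1
    storage[addr] = data
    return addr, hashlib.sha256(data).digest()   # pointer carries the checksum

def read(ptr):
    addr, checksum = ptr
    data = storage[addr]
    # Verify the block against the checksum stored in its pointer.
    assert hashlib.sha256(data).digest() == checksum, "corruption detected"
    return data

# Copy-on-write update: allocate the new block first, then retarget the
# pointer, then free the old block. The old block is never overwritten.
ptr = alloc(b"version 1")
old_addr = ptr[0]
ptr = alloc(b"version 2")
del storage[old_addr]     # old block marked unused only after the switch
assert read(ptr) == b"version 2"
```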

Blocks are variable in size, up to a maximum of 1,024 KB. When compression is enabled, variable block sizes allow a smaller block to be used when a file shrinks.

ZFS supports compression, which reduces the size of data before it is stored on disk. Compression saves space on the drives and produces faster reads, since there is less data to read from the disk. Write times can also improve, at the cost of a little CPU overhead to compress before a write and decompress after a read. The available compression methods are LZJB and gzip.
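The space-versus-CPU trade-off is easy to see with Python's zlib, which implements the same DEFLATE algorithm used by gzip (illustrating the principle, not ZFS's own code path):

```python
import zlib

# gzip-style (DEFLATE) compression of a block before writing it to disk.
block = b"highly compressible data " * 100
compressed = zlib.compress(block, level=6)

print(len(block), len(compressed))   # far fewer bytes to store and read back

# Decompression on read restores the original data exactly.
assert zlib.decompress(compressed) == block
assert len(compressed) < len(block)
```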

Saturday, 13 September 2014



NVIDIA Tesla Personal Supercomputer


            The Tesla Personal Supercomputer is a desktop computer backed by Nvidia and built by Dell, Lenovo and other companies. It is meant to demonstrate the capabilities of Nvidia's Tesla GPGPU brand: it uses Nvidia's CUDA parallel computing architecture and is powered by up to 960 parallel processing cores, which, according to Nvidia, allows it to achieve performance up to 250 times that of a standard PC. At the heart of the machine are three or four Nvidia Tesla C1060 computing processors, which resemble a high-performance Nvidia graphics card but have no video output ports. Each Tesla C1060 has 240 streaming processor cores running at 1.296 GHz, 4 GB of 800 MHz 512-bit GDDR3 memory and a PCI Express x16 system interface. While typically drawing only 160 watts, each card is capable of 933 GFLOPS of single-precision or 78 GFLOPS of double-precision floating-point performance.
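The single-precision figure follows directly from the card's specifications: 240 cores, a 1.296 GHz shader clock, and 3 floating-point operations per core per cycle (a multiply-add plus a multiply on that GPU generation):

```python
# Peak single-precision throughput of one Tesla C1060, from its spec sheet.
cores = 240
clock_ghz = 1.296
flops_per_core_per_cycle = 3   # multiply-add + multiply per cycle on this GPU

peak_gflops = cores * clock_ghz * flops_per_core_per_cycle
print(peak_gflops)   # ~933 GFLOPS single precision
```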


Tuesday, 9 September 2014

5G TECHNOLOGY
            
             5G (5th generation mobile networks or 5th generation wireless systems) denotes the next major phase of mobile telecommunications standards beyond the current 4G standards. 5G is also referred to as beyond 2020 mobile communications technologies. 5G does not describe any particular specification in any official document published by any telecommunication standardization body.
Although updated standards that define capabilities beyond those of the current 4G standards are under consideration, those new capabilities are still being grouped under the current ITU-R 4G standards.




                    A new mobile generation has appeared approximately every tenth year since the first 1G system, Nordic Mobile Telephone, was introduced in 1981. The first 2G system started to roll out in 1991, the first 3G system appeared in 2001, and 4G systems fully compliant with IMT-Advanced were standardized in 2012. Development of the 2G (GSM) and 3G (IMT-2000 and UMTS) standards took about 10 years from the official start of the R&D projects, and development of 4G systems started in 2001 or 2002. Predecessor technologies have appeared on the market a few years before each new generation, for example the pre-3G system CDMA One/IS-95 in the US in 1995, the pre-4G system Mobile WiMAX in South Korea in 2006, and first-release LTE in Scandinavia in 2009.
Mobile generations typically refer to non–backwards-compatible cellular standards following requirements stated by ITU-R, such as IMT-2000 for 3G and IMT-Advanced for 4G. In parallel with the development of the ITU-R mobile generations, IEEE and other standardisation bodies also develop wireless communication technologies, often for higher data rates and higher frequencies but shorter transmission ranges.