Technologies

Post-4 Microsoft’s Hyper-V technology:

The Hyper-V role in Windows Server lets you create a virtualized computing environment where you can create and manage virtual machines. You can run multiple operating systems on one physical computer and isolate the operating systems from each other. With this technology, you can improve the efficiency of your computing resources and free up your hardware resources.

Post-3 Advanced technologies invented last year (2016):


  1. The first 1TB SD card
  2. First Amazon package delivered by drone
  3. A breakthrough in lithium-metal batteries could double efficiency
  3. Hyperloop One begins actual high-speed test
  5. Carbon nanotube transistors outperform silicon for the first time
  6. Dust-sized sensors that can be implanted within the body
  7. SolarCity opens Buffalo facility, Tesla opens Gigafactory 1
  8. Eternal nano-structured data recording
  9. SpaceX landed a rocket vertically in the ocean
  10. Google’s Tensor Processing Unit Pushes Machine Learning, Beats Go Champion
For more details, visit this link:
http://mashable.com/2016/12/17/best-tech-2016/
Post-2. What is the difference between a 32-bit and 64-bit processor?


32-bit processor:-

Computers, operating systems, or software programs capable of transferring data 32 bits at a time. Early computer processors (e.g., the 80386, 80486, and Pentium) were 32-bit processors, which means they could work with 32-bit binary numbers (decimal values up to 4,294,967,295). Anything larger and the computer would need to break the number into smaller pieces.
Good examples of early 32-bit operating systems are OS/2 and Windows NT; 32-bit versions of Windows are sometimes referred to as WoW32. Today, 32-bit computers and operating systems are being replaced by 64-bit computers and operating systems, such as the 64-bit versions of Windows 7.
32-bit can also refer to the number of colors a video card displays: 32-bit color is the same as 16.7 million colors (24-bit color with an 8-bit alpha channel).
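To make that 32-bit limit concrete, here is a minimal Python sketch (Python is used purely for illustration; the limit itself is a property of 32-bit hardware and data formats):

    import struct

    # The largest unsigned value that fits in 32 bits.
    max_32bit = 2**32 - 1
    print(max_32bit)                      # 4294967295

    # Packing it into a single 4-byte (32-bit) field works fine...
    struct.pack("<I", max_32bit)

    # ...but a value one larger no longer fits in one 32-bit word,
    # so it has to be split up or handled some other way.
    try:
        struct.pack("<I", max_32bit + 1)
    except struct.error as err:
        print("does not fit in 32 bits:", err)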
64-bit processor:-
The 64-bit computer has been around since 1961, when IBM created the IBM 7030 Stretch supercomputer. However, it was not put into use in home computers until the early 2000s, when Microsoft released a 64-bit version of Windows XP for computers with a 64-bit processor. Windows Vista, Windows 7, and Windows 8 also come in 64-bit versions. Other software designed to run on a 64-bit computer is 64-bit based as well, in that it works with data units that are 64 bits wide.

Note: A computer with a 64-bit processor can have a 64-bit or 32-bit version of an operating system installed. However, with a 32-bit operating system, the 64-bit processor will not run at its full capability.
Note: On a 64-bit version of Windows you cannot run a 16-bit legacy program. Many 32-bit programs will work with a 64-bit processor and operating system, but some older 32-bit programs may not function properly, or at all, due to limited or no compatibility.
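If you want to check what your own machine reports, one quick way (shown here as a small Python sketch, which is just one of many ways to look this up) is to compare the pointer size of the running program with what the operating system says about the hardware:

    import platform
    import struct

    # Pointer size of the running interpreter:
    # 4 bytes (32-bit) or 8 bytes (64-bit).
    print("program:", struct.calcsize("P") * 8, "bit")

    # What the OS reports for the processor, e.g. 'AMD64', 'x86_64', 'i386'.
    print("machine:", platform.machine())

    # On a 64-bit CPU running a 32-bit OS or a 32-bit program,
    # the first number can be 32 even though the hardware is 64-bit.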

Differences between a 32-bit and 64-bit CPU

A big difference between 32-bit processors and 64-bit processors is the number of calculations per second they can perform, which affects the speed at which they can complete tasks. 64-bit processors come in dual-core, quad-core, six-core, and eight-core versions for home computing. Multiple cores allow more calculations to be performed per second, which increases processing power and helps make a computer run faster. Software that requires many calculations to function smoothly generally runs faster and more efficiently on multi-core 64-bit processors.
Another big difference between 32-bit processors and 64-bit processors is the maximum amount of memory (RAM) that is supported. 32-bit computers support a maximum of 3-4 GB of memory, whereas a 64-bit computer can support memory amounts well over 4 GB. This matters for software used in graphic design, engineering, and video editing, as these programs have to perform many calculations to render their images.
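The roughly 4 GB ceiling follows directly from the address width: 32-bit addresses can only name 2^32 distinct bytes. A one-line check (again just an illustrative Python calculation):

    # 32-bit address space: 2**32 distinct byte addresses.
    print(2**32)                  # 4294967296 bytes
    print(2**32 / 1024**3)        # 4.0 GiB
    # 64-bit addresses raise the theoretical ceiling to 2**64 bytes,
    # far more than any current home computer actually installs.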
One thing to note is that 3D graphic programs and games do not benefit much, if at all, from switching to a 64-bit computer, unless the program is a 64-bit program. A 32-bit processor is adequate for any program written for a 32-bit processor. In the case of computer games, you'll get a lot more performance by upgrading the video card instead of getting a 64-bit processor.
In the end, 64-bit processors are becoming more and more commonplace in home computers. Most manufacturers build computers with 64-bit processors due to cheaper prices and because more users are now using 64-bit operating systems and programs. Computer parts retailers are offering fewer and fewer 32-bit processors and soon may not offer any at all.

Post-1. What's the difference between ASCII and Unicode?

ASCII:

ASCII has 128 code points, 0 through 127, so it fits in a single 8-bit byte. The values 128 through 255 tended to be used for other characters, and the incompatible choices caused the code-page disaster: text encoded in one code page cannot be read correctly by a program that assumes, or guesses at, another code page.
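A small Python sketch of that code-page problem (the two code pages here are just examples): bytes written under one legacy code page decode into different characters under another.

    # 'é' encoded under the Western European code page (cp1252)...
    data = "résumé".encode("cp1252")

    # ...read back by a program that assumes the Cyrillic code page (cp1251):
    print(data.decode("cp1251"))      # 'rйsumй' -- classic mojibake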
Unicode came about to solve this disaster. Version 1 started out with 65,536 code points, commonly encoded in 16 bits. Version 2 later extended that to about 1.1 million code points. The current version is 6.3, using 110,187 of the available 1.1 million code points, and that no longer fits in 16 bits.
Encoding in 16 bits was still common when version 2 came around, used by the Microsoft and Apple operating systems, for example, and by language runtimes like Java. The v2 spec therefore came up with a way to map those 1.1 million code points onto 16-bit units: UTF-16, a variable-length encoding in which one code point takes either 2 or 4 bytes. The original v1 code points take 2 bytes; the ones added later take 4.
Another very common variable-length encoding, used in *nix operating systems and tools, is UTF-8: a code point takes between 1 and 4 bytes, with the original ASCII codes taking 1 byte and the rest taking more. The only fixed-length encoding is UTF-32, which takes 4 bytes per code point; it is not used often because it is rather wasteful. There are others, like UTF-1 and UTF-7, which are widely ignored.
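The variable-length behaviour is easy to see in practice; here is a short Python sketch (the sample characters are arbitrary):

    # Bytes needed per character in the three common Unicode encodings.
    for ch in ["A",        # ASCII, U+0041
               "é",        # Latin range, U+00E9
               "€",        # Basic Multilingual Plane, U+20AC
               "😀"]:      # supplementary plane, U+1F600
        print(ch,
              len(ch.encode("utf-8")),       # 1, 2, 3, 4 bytes
              len(ch.encode("utf-16-le")),   # 2 or 4 bytes
              len(ch.encode("utf-32-le")))   # always 4 bytes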
An issue with the UTF-16/32 encodings is that the order of the bytes depends on the endianness of the machine that created the text stream, so add UTF-16BE, UTF-16LE, UTF-32BE, and UTF-32LE to the mix.
Having these different encoding choices brings back the code-page disaster to some degree, along with heated debates among programmers about which UTF choice is "best". Their association with operating-system defaults pretty much draws the lines. One counter-measure is the BOM, the byte order mark, a special code point (U+FEFF, zero-width no-break space) at the beginning of a text stream that indicates how the rest of the stream is encoded. It indicates both the UTF encoding and the endianness, and it is harmless to a text-rendering engine. Unfortunately it is optional, and many programmers claim their right to omit it, so accidents are still fairly common.
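In Python, for example, the generic UTF-16 codec writes the BOM automatically, while the endian-specific codecs do not (a small illustrative sketch):

    text = "hi"

    with_bom = text.encode("utf-16")      # generic codec: BOM + native byte order
    no_bom_le = text.encode("utf-16-le")  # explicit order: no BOM
    no_bom_be = text.encode("utf-16-be")

    print(with_bom)     # starts with b'\xff\xfe' on a little-endian machine
    print(no_bom_le)    # b'h\x00i\x00'
    print(no_bom_be)    # b'\x00h\x00i'

    # Decoding with the generic codec consumes the BOM and picks the right order;
    # the endian-specific codecs would treat those two bytes as a character.
    print(with_bom.decode("utf-16"))      # 'hi'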

Some points to remember:


1. Unicode is not tied to a fixed byte size: it assigns every character a code point, and an encoding such as UTF-8, UTF-16, or UTF-32 decides how many bytes that code point takes when stored (it is often quoted as "2 bytes" only because UTF-16 uses 2-byte units).
2. ASCII takes 1 byte per character (in fact only 7 of the 8 bits are used), so it can represent at most 128 characters and carries no information about which language or code page the text belongs to.
3. If an application has to show several languages at the same time (for example English and Japanese records side by side), Unicode can handle it, because every character of every supported script has its own code point.
4. ASCII cannot handle that case: its 128 codes cover only basic Latin letters, digits, and punctuation, so there is simply no code for Japanese characters.
5. If each installation of the application only ever uses one language, a single-byte encoding (ASCII or a regional code page) can get by, but Unicode still avoids any guessing about which code page a file uses.
6. Unicode can store far more characters than ASCII precisely because it is not limited to 1 byte per character; the short sketch below shows the byte counts for the same idea.
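A minimal Python sketch of that last point (the sample text is arbitrary):

    text = "abc"                          # plain ASCII
    print(text.encode("ascii"))           # 3 bytes, one per character

    mixed = "a語"                         # ASCII letter plus a Japanese character (U+8A9E)
    print(mixed.encode("utf-8"))          # b'a\xe8\xaa\x9e' -- 1 byte + 3 bytes

    # mixed.encode("ascii") would raise UnicodeEncodeError:
    # ASCII simply has no code for characters outside its 128-entry table.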

