Wednesday, November 4, 2009
ANNA UNIVERSITY :: CHENNAI – 600 025
MODEL QUESTION PAPER
V SEMESTER
B.TECH. INFORMATION TECHNOLOGY
IF356 – INFORMATION CODING TECHNIQUES
Time : 3 Hours                                  Max. Marks : 100
Answer all Questions
PART – A (10 X 2 = 20 MARKS)
1. Calculate the bit rate for 16-bit-per-sample stereophonic music whose sampling rate is 44.1 kSPS.
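A quick worked check: bit rate = sampling rate x bits per sample x channels = 44,100 x 16 x 2 = 1,411,200 bits/s ≈ 1.41 Mbps, the familiar CD-audio rate.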
2. Draw the Huffman code tree and find the codewords for the given data: AAAABBCDAB
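A minimal Huffman sketch in Python (an editor's illustration, not part of the original paper; symbol counts are taken directly from the string):

import heapq
from collections import Counter

def huffman_codes(data):
    # One weighted leaf per symbol, merged lowest-weight-first.
    heap = [(count, [(sym, "")]) for sym, count in Counter(data).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, left = heapq.heappop(heap)
        w2, right = heapq.heappop(heap)
        # Prefix 0 to the left subtree's codes and 1 to the right's.
        merged = [(s, "0" + c) for s, c in left] + [(s, "1" + c) for s, c in right]
        heapq.heappush(heap, (w1 + w2, merged))
    return dict(heap[0][1])

print(huffman_codes("AAAABBCDAB"))
# e.g. {'A': '0', 'B': '11', 'C': '100', 'D': '101'}; exact codes vary with
# tie-breaking, but the average length is (5*1 + 3*2 + 1*3 + 1*3)/10 = 1.7 bits/symbol.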
3. What type of encoding technique is applied to the AC and DC coefficients in JPEG?
4. How is the given frame sequence decoded using the MPEG coding technique? IPBBPBBI
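A hint on the frame types: I-frames are intra-coded and decodable on their own, P-frames are predicted from the preceding I- or P-frame, and B-frames are predicted bidirectionally from the reference frames on either side; consequently each B-frame can only be decoded after both of its reference frames, so the transmitted (decode) order differs from the display order.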
5. Draw the block diagram of a DPCM signal encoder.
6. A source emits symbols with probabilities 0.25, 0.20, 0.15, 0.15, 0.10 and 0.05. For the given data, code the symbols using the Shannon–Fano technique and calculate the entropy.
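A minimal entropy check in Python (an editor's sketch; it simply applies H = -Σ p·log2(p) to the listed probabilities):

import math

probs = [0.25, 0.20, 0.15, 0.15, 0.10, 0.05]
entropy = -sum(p * math.log2(p) for p in probs)
print(round(entropy, 3))  # ~2.334 bits/symbol over the listed values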
7. Compare Huffman coding and Shannon–Fano coding.
8. Give the various pulse code modulation techniques available. How do they differ from each other?
9. What is a generator polynomial? Give some standard generator polynomials.
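For reference, widely used standard generator polynomials include CRC-16, g(x) = x^16 + x^15 + x^2 + 1, and CRC-CCITT, g(x) = x^16 + x^12 + x^5 + 1.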
10. Write any two features of discrete memoryless channels.
PART – B (5 X 16 = 80 MARKS)
11.i) Discuss the various stages in the JPEG standard. (9)
ii) Differentiate lossless and lossy compression techniques and give one example of each. (4)
iii) State the prefix property of Huffman codes. (3)
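An illustration of the prefix property for part iii): in the code {0, 10, 110, 111} no codeword is a prefix of any other, so a bit stream such as 0110111 parses unambiguously as 0 | 110 | 111.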
12.a)i) How will you calculate channel capacity? (2)
ii) State the channel coding theorem and the channel capacity theorem. (5)
iii) Calculate the entropy for the given sample data: AAABBBCCD (3)
iv) Prove Shannon's information capacity theorem. (6)
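A worked check for part iii): the counts in AAABBBCCD are A = 3, B = 3, C = 2, D = 1 out of 9 symbols, so H = -(3/9)log2(3/9) - (3/9)log2(3/9) - (2/9)log2(2/9) - (1/9)log2(1/9) ≈ 1.89 bits/symbol.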
(OR)
12.b)i) Use differential entropy to compare the randomness of random variables. (4)
ii) A four-symbol alphabet has the following probabilities:
Pr(a0) = 1/2
Pr(a1) = 1/4
Pr(a2) = 1/8
Pr(a3) = 1/8
and an entropy of 1.75 bits. Find a codebook for this four-letter alphabet that satisfies the source coding theorem. (4)
iii) Write the entropy for a binary symmetric source. (4)
iv) Write down the channel capacity for a binary channel. (4)
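A worked sketch for part ii): the prefix code a0 -> 0, a1 -> 10, a2 -> 110, a3 -> 111 has average length (1/2)(1) + (1/4)(2) + (1/8)(3) + (1/8)(3) = 1.75 bits, which equals the entropy, so it meets the source coding theorem with equality. For parts iii) and iv), recall that a binary source with symbol probability p has entropy H(p) = -p log2(p) - (1-p) log2(1-p), and a binary symmetric channel with crossover probability p has capacity C = 1 - H(p).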
13.a)i) Compare and contrast DPCM and ADPCM. (6)
ii) Define pitch, period and loudness. (6)
iii) What is a decibel? (2)
iv) What is the purpose of the DFT? (2)
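For part iv): the N-point DFT, X(k) = Σ n=0..N-1 x(n) e^(-j2πnk/N), converts N time-domain samples into N frequency-domain coefficients, exposing the spectral structure that transform and perceptual coders quantise.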
(OR)
13.b)i) Explain Delta Modulation with examples. (6)
ii) Explain sub-band adaptive differential pulse code modulation. (6)
iii) What will happen if speech is coded at low bit rates? (4)
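A minimal delta-modulation sketch in Python (an editor's illustration with an assumed fixed step size; practical coders adapt the step):

def delta_modulate(samples, step=0.1):
    # 1-bit quantiser: send 1 if the input is above the current
    # staircase approximation, else 0, then update the staircase.
    bits, approx = [], 0.0
    for x in samples:
        bit = 1 if x > approx else 0
        approx += step if bit else -step
        bits.append(bit)
    return bits

def delta_demodulate(bits, step=0.1):
    # Rebuild the staircase approximation from the bit stream.
    out, approx = [], 0.0
    for bit in bits:
        approx += step if bit else -step
        out.append(approx)
    return out

Too small a step causes slope overload on steep segments of the waveform; too large a step causes granular noise on flat segments, which is the trade-off adaptive delta modulation addresses.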
14.a) Consider a Hamming code C which is determined by the parity check matrix
H = [the 3 x 7 parity-check matrix given in the original paper; not reproduced here]
i) Show that the two vectors C1 = (0010011) and C2 = (0001111) are codewords of C and calculate the Hamming distance between them. (4)
ii) Assume that a codeword c was transmitted and that a vector r = c + e is received. Show that the syndrome s = r·H^T depends only on the error vector e. (4)
iii) Calculate the syndromes for all possible error vectors e with Hamming weight ≤ 1 and list them in a table. How can this be used to correct a single-bit error in an arbitrary position? (4)
iv) What are the length n and the dimension k of the code? Why can the minimum Hamming distance dmin not be larger than three? (4)
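A Python sketch of part iii), assuming (since the matrix did not survive transcription) the systematic (7,4) Hamming parity-check matrix below; both codewords from the question do satisfy H·c^T = 0 for it, but with a different H the table entries change while the method stays the same:

import numpy as np

# Assumed systematic (7,4) Hamming parity-check matrix H = [P^T | I3].
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def syndrome(r):
    # s = r . H^T over GF(2); since H.c^T = 0 for every codeword c,
    # s = (c + e).H^T = e.H^T depends only on the error e (part ii).
    return tuple(np.dot(r, H.T) % 2)

# Syndromes for all error vectors of Hamming weight <= 1.
table = {syndrome(np.zeros(7, dtype=int)): None}  # zero-error case
for i in range(7):
    e = np.zeros(7, dtype=int)
    e[i] = 1
    table[syndrome(e)] = i          # syndrome -> error position

# The 8 syndromes are distinct (the columns of H are distinct and
# nonzero), so any single-bit error maps to a unique syndrome and
# is corrected by flipping the indicated bit.
for s, pos in table.items():
    print(s, "-> error at bit", pos)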
(OR)
14.b)i) Define a linear block code. (2)
ii) How do you find the parity check matrix? (4)
iii) Give the syndrome decoding algorithm. (4)
iv) Design a linear block code with dmin = 3 for some block length n = 2^m - 1. (6)
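A hint for part iv): for every m >= 2 the binary Hamming code has n = 2^m - 1, k = 2^m - 1 - m and dmin = 3; its parity-check matrix simply lists all 2^m - 1 distinct nonzero binary columns of length m, so every single-bit error yields a distinct nonzero syndrome.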
15.a)i) What are macroblocks and GOBs? (4)
ii) On what factors does the quantisation threshold depend in the H.261 standard? (3)
iii) Discuss the MPEG compression techniques. (9)
(OR)
15.b)i) Discuss the various Dolby audio coders. (6)
ii) Discuss any two audio coding techniques used in MPEG. (6)
iii) Write down the principle behind video compression. (4)