IELTS|Upper-Intermediate|Exam Part 1


Read the information to get ready for the exam simulation

  • Today you will do the Reading and Speaking tasks in the exam format.
  • The speaking tasks will be assessed separately.
  • Read the instructions carefully before doing each task. Feedback will be given later.
  • Remember that you don’t have to answer all the questions correctly to get a good result.
  • Try to relax and deal with easier questions first. Then come back and think about the difficult ones.

The IELTS 9-band scale

You’ll receive a score from 1 to 9 for each section of your test – Listening, Reading, Writing and Speaking.

Your overall band score is the average of the scores you achieve in each section of the test. You can score whole (e.g., 5.0, 6.0, 7.0) or half (e.g., 5.5, 6.5, 7.5) bands in each section.

pic1_IELTS|Upper-Int|Exam Part 1

Read the passage and do the tasks below

Dino discoveries

When news breaks of the discovery of a new species of dinosaur, you would be forgiven for thinking that the scientists who set out in search of the fossils are the ones who made the find. The reality tells a different story, as Cavan Scott explains.

The BBC series Planet Dinosaur used state-of-the-art computer graphics to bring to life the most impressive of those dinosaurs whose remains have been discovered in the past decade. One of these is Gigantoraptor erlianensis. Discovered in 2005, it stands more than three metres high at the hip and is the biggest bird-like dinosaur ever unearthed. Yet its discoverer, Xu Xing of Beijing’s Institute of Vertebrate Palaeontology and Paleoanthropology, was not even looking for it at the time. He was recording a documentary in the Gobi Desert, Inner Mongolia.

«The production team were filming me and a geologist digging out what we thought were sauropod bones,» says Xu, «when I realised the fossils were something else entirely.» Gigantoraptor, as it later became known, turned out to be an oviraptorid, a theropod with a bird-like beak. Its size was staggering. The largest oviraptorid previously discovered had been comparable in size to an emu; the majority were about as big as a turkey. Here was a creature that was probably about eight metres long, if the bone analysis was anything to go by.

Sometimes it is sheer opportunism that plays a part in the discovery of a new species. In 1999, the National Geographic Society announced that the missing link between dinosaurs and modern birds had finally been found. Named Archaeoraptor liaoningensis, the fossil in question appeared to have the head and body of a bird, with the hind legs and tail of a 124-million-year-old dromaeosaur — a family of small theropods that includes the bird-like Velociraptor made famous by the Jurassic Park films.

There was a good reason why the fossil looked half-bird, half-dinosaur. CT scans almost immediately proved the specimen was bogus and had been created by an industrious Chinese farmer who had glued two separate fossils together to create a profitable hoax.

But while the palaeontologists behind the announcement were wiping egg off their faces, others, including Xu, were taking note. The head and body of the fake composite belonged to Yanornis martini, a primitive fish-eating bird from around 120 million years ago. The dromaeosaur tail and hind legs, however, were covered in what looked like fine proto-feathers. That fossil turned out to be something special. In 2000, Xu named it Microraptor and revealed that it had probably lived in the treetops. Although it couldn’t fly, its curved claws provided the first real evidence that dinosaurs could have climbed trees. Three years later, Xu and his team discovered a closely related Microraptor species which changed everything. «Microraptor had two salient features,» Xu explains, «long feathers were attached not just to its forearms but to its legs and claws. Then we noticed that these long feathers had asymmetrical vanes, a feature often associated with flight capability. This meant that we might have found a flying dinosaur.»

Some extraordinary fossils have remained hidden in a collection and almost forgotten. For the majority of the 20th century, the palaeontology community had ignored the frozen tundra of north Alaska. There was no way, scientists believed, that cold-blooded dinosaurs could survive in such bleak, frigid conditions. But according to Alaskan dinosaur expert Tony Fiorillo, they eventually realised they were missing a trick.

«The first discovery of dinosaurs in Alaska was actually made by a geologist called Robert Liscomb in 1961,» says Fiorillo. «Unfortunately, Robert was killed in a rockslide the following year, so his discoveries languished in a warehouse for the next two decades.» In the mid-1980s, managers at the warehouse stumbled upon the box containing Liscomb’s fossils during a spring clean. The bones were sent to the United States Geological Survey, where they were identified as belonging to Edmontosaurus, a duck-billed hadrosaur. Today, palaeontologists roam this frozen treasure trove searching for remains locked away in the permafrost.

The rewards are worth the effort. While studying teeth belonging to the relatively intelligent Troodon theropod, Fiorillo discovered the teeth of the Alaskan Troodon were double the size of those of its southern counterpart. «Even though the morphology of individual teeth resembled that of Troodon, the size was significantly larger than the Troodon found in warmer climates.» Fiorillo says that the reason lies in the Troodon’s large eyes, which allowed it to hunt at dawn and at dusk — times when other dinosaurs would have struggled to see. In the polar conditions of Cretaceous Alaska, where the Sun would all but disappear for months on end, this proved a useful talent. «Troodon adapted for life in the extraordinary light regimes of the polar world. With this advantage, it took over as Alaska’s dominant theropod,» explains Fiorillo. Finding itself at the top of the food chain, the dinosaur evolved to giant proportions.

It is true that some of the most staggering of recent developments have come from palaeontologists being in the right place at the right time, but this is no reflection on their knowledge or expertise. After all, not everyone knows when they’ve stumbled upon something remarkable. When Argentine sheep farmer Guillermo Heredia uncovered what he believed was a petrified tree trunk on his Patagonian farm in 1988, he had no way of realising that he’d found a 1.5-metre-long tibia of the largest sauropod ever known to walk the Earth. Argentinosaurus was 24 metres long and weighed 75 tonnes. The titanosaur was brought to the attention of the scientific community in 1993 by Rodolfo Coria and Jose Bonaparte of the National Museum of Natural Sciences in Buenos Aires. Coria points out that most breakthroughs are not made by scientists but by ordinary folk. «But the real scientific discovery is not the finding; it’s what we learn from that finding.» While any one of us can unearth a fossil, it takes dedicated scientists to see beyond the rock.


Decide whether the following statements agree with the information in Reading Passage 1

Select:

True: if the statement agrees with the information

False: if the statement contradicts the information

Not Given: if there is no information on this

Look at the diagrams and label their parts. Write no more than two words for each answer



Read the article and do the tasks below

pic2_IELTS|Upper-Int|Exam Part 1

Art to the aid of technology

What caricatures can teach us about facial recognition, by Ben Austen

A. Our brains are incredibly agile machines, and it is hard to think of anything they do more efficiently than recognise faces. Just hours after birth, the eyes of newborns are drawn to facelike patterns. An adult brain knows it is seeing a face within 100 milliseconds, and it takes just over a second to realise that two different pictures of a face, even if they are lit or rotated in very different ways, belong to the same person.

B. Perhaps the most vivid illustration of our gift for recognition is the magic of caricature — the fact that the sparest cartoon of a familiar face, even a single line dashed off in two seconds, can be identified by our brains in an instant. It is often said that a good caricature looks more like a person than the person themselves. As it happens, this notion, counterintuitive though it may sound, is actually supported by research. In the field of vision science, there is even a term for this seeming paradox — the caricature effect — a phrase that hints at how our brains misperceive faces as much as perceive them.

C. Human faces are all built pretty much the same: two eyes above a nose that’s above a mouth, the features varying from person to person generally by mere millimetres. So what our brains look for, according to vision scientists, are the outlying features — those characteristics that deviate most from the ideal face we carry around in our heads, the running average of every «visage» we have ever seen. We code each new face we encounter not in absolute terms but in the several ways it differs markedly from the mean. In other words, we accentuate what is most important for recognition and largely ignore what is not. Our perception fixates on the upturned nose, the sunken eyes or the fleshy cheeks, making them loom larger. To better identify and remember people, we turn them into caricatures.

D. Ten years ago, we all imagined that as soon as surveillance cameras had been equipped with the appropriate software, the face of a crime suspect would stand out in a crowd. Like a thumbprint, its unique features and configuration would offer a biometric key that could be immediately checked against any database of suspects. But now a decade has passed, and face recognition systems still perform miserably in real world conditions. Just recently, a couple who accidentally swapped passports at an airport in England sailed through electronic gates that were supposed to match their faces to file photos.

E. All this leads to an interesting question. What if, to secure our airports and national landmarks, we need to learn more about caricature? After all, it’s the skill of the caricaturist — the uncanny ability to quickly distil faces down to their most salient features — that our computers most desperately need to acquire. Clearly, better cameras and faster computers simply aren’t going to be enough.

F. At the University of Central Lancashire in England, Charlie Frowd, a senior lecturer in psychology, has used insights from caricature to develop a better police-composite generator. His system, called EvoFIT, produces animated caricatures, with each successive frame showing facial features that are more exaggerated than the last. Frowd’s research supports the idea that we all store memories as caricatures, but with our own personal degree of amplification. So, as an animated composite depicts faces at varying stages of caricature, viewers respond to the stage that is most recognisable to them. In tests, Frowd’s technique has increased positive identifications from as low as 3 percent to upwards of 30 percent.

G. To achieve similar results in computer face recognition, scientists would need to model the artist’s genius even more closely — a feat that might seem impossible if you listen to some of the artists describe their nearly mystical acquisition of skills. Jason Seiler recounts how he trained his mind for years, beginning in middle school, until he gained what he regards as nothing less than a second sight. «A lot of people think that caricature is about picking out someone’s worst feature and exaggerating it as far as you can,» Seiler says. «That’s wrong. Caricature is basically finding the truth. And then you push the truth.» Capturing a likeness, it seems, has less to do with the depiction of individual features than with their placement in relationship to one another. «It’s how the human brain recognises a face. When the ratios between the features are correct, you see that face instantly.»

H. Pawan Sinha, director of MIT’s Sinha Laboratory for Vision Research, and one of the nation’s most innovative computer vision researchers, contends that these simple, exaggerated drawings can be objectively and systematically studied and that such work will lead to breakthroughs in our understanding of both human and machine-based vision. His lab at MIT is preparing to computationally analyse hundreds of caricatures this year, from dozens of different artists, with the hope of tapping their intuitive knowledge of what is and isn’t crucial for recognition. He has named this endeavour the Hirschfeld Project, after the famous New York Times caricaturist Al Hirschfeld.

I. Quite simply, by analysing sketches, Sinha hopes to pinpoint the recurring exaggerations in the caricatures that most strongly correlate to particular ways that the original faces deviate from the norm. The results, he believes, will ultimately produce a rank-ordered list of the 20 or so facial attributes that are most important for recognition. «It’s a recipe for how to encode the face,» he says. In preliminary tests, the lab has already isolated important areas — for example, the ratio of the height of the forehead to the distance between the top of the nose and the mouth.

J. On a given face, four of 20 such Hirschfeld attributes, as Sinha plans to call them, will be several standard deviations greater than the mean; on another face, a different handful of attributes might exceed the norm. But in all cases, it’s the exaggerated areas of the face that hold the key. As matters stand today, an automated system must compare its target faces against the millions of continually altering faces it encounters. But so far, the software doesn’t know what to look for amid this onslaught of variables. Armed with the Hirschfeld attributes, Sinha hopes that computers can be trained to focus on the features most salient for recognition, tuning out the others. «Then,» Sinha says, «the sky is the limit».


Match the statements with the appropriate paragraphs

Reading Passage 2 has ten paragraphs, A-J.

Choose the paragraphs that contain the following information. You may use any letter more than once.

Match the names with the appropriate statements

pic3_IELTS|Upper-Int|Exam Part 1


Complete the summary with no more than two words



pic4_IELTS|Upper-Int|Exam Part 1

Read the passage and do the tasks below

Mind readers

«It may one day be possible to eavesdrop on another person’s inner voice,» Duncan Graham-Rowe explains.

As you begin to read this article and your eyes follow the words across the page, you may be aware of a voice in your head silently muttering along. The very same thing happens when we write: a private, internal narrative shapes the words before we commit them to text.

What if it were possible to tap into this inner voice? Thinking of words does, after all, create characteristic electrical signals in our brains, and decoding them could make it possible to piece together someone’s thoughts. Such an ability would have phenomenal prospects, not least for people unable to communicate as a result of brain damage. But it would also carry profoundly worrisome implications for the future of privacy.

The first scribbled records of electrical activity in the human brain were made in 1924 by a German doctor called Hans Berger using his new invention — the electroencephalogram (EEG). This uses electrodes placed on the skull to read the output of the brain’s billions of nerve cells or neurons. By the mid-1990s, the ability to translate the brain’s activity into readable signals had advanced so far that people could move computer cursors using only the electrical fields created by their thoughts.

The electrical impulses that such innovations tap into are produced in a part of the brain called the motor cortex, which is responsible for muscle movement. To move a cursor on a screen, you do not think «move left» in natural language. Instead, you imagine a specific motion like hitting a ball with a tennis racket. Training the machine to realise which electrical signals correspond to your imagined movements, however, is time consuming and difficult. And while this method works well for directing objects on a screen, its drawbacks become apparent when you try using it to communicate. At best, you can use the cursor to select letters displayed on an on-screen keyboard. Even a practised mind would be lucky to write 15 words per minute with that approach. Speaking, we can manage 150.

Matching the speed at which we can think and talk would lead to devices that could instantly translate the electrical signals of someone’s inner voice into sound produced by a speech synthesiser. To do this, it is necessary to focus only on the signals coming from the brain areas that govern speech. However, real mind reading requires some way to intercept those signals before they hit the motor cortex.

The translation of thoughts to language in the brain is an incredibly complex and largely mysterious process, but this much is known: before they end up in the motor cortex, thoughts destined to become spoken words pass through two «staging areas» associated with the perception and expression of speech.

The first is called Wernicke’s area, which deals with semantics — in this case, ideas based in meaning, which can include images, smells or emotional memories. Damage to Wernicke’s area can result in the loss of semantic associations: words can’t make sense when they are decoupled from their meaning. Suffer a stroke in that region, for example, and you will have trouble understanding not just what others are telling you, but what you yourself are thinking.

The second is called Broca’s area, agreed to be the brain’s speech-processing centre. Here, semantics are translated into phonetics and, ultimately, word components. From here, the assembled sentences take a quick trip to the motor cortex, which activates the muscles that will turn the desired words into speech.

Injure Broca’s area, and though you might know what you want to say, you just can’t send those impulses.

When you listen to your inner voice, two things are happening. You «hear» yourself producing language in Wernicke’s area as you construct it in Broca’s area. The key to mind reading seems to lie in these two areas.

The work of Bradley Greger in 2010 broke new ground by marking the first-ever excursion beyond the motor cortex into the brain’s language centres. His team used electrodes placed inside the skull to detect the electrical signatures of whole words, such as «yes», «no», «hot», «cold», «thirsty», «hungry», etc. Promising as it is, this approach requires a new signal to be learned for each new word. English contains a quarter of a million distinct words. And though this was the first instance of monitoring Wernicke’s area, it still relied largely on the facial motor cortex.

Greger decided there might be another way. The building blocks of language are called phonemes, and the English language has about 40 of them — for example, the «kuh» sound in «school» and the «sh» in «shy». Every English word contains some subset of these components. Decode the brain signals that correspond to the phonemes, and you would have a system to unlock any word at the moment someone thinks it.

In 2011, Eric Leuthardt and his colleague Gerwin Schalk positioned electrodes over the language regions of four fully conscious people and were able to detect the phonemes «oo», «ah», «eh» and «ee». What they also discovered was that spoken phonemes activated both the language areas and the motor cortex, while imagined speech — that inner voice — boosted the activity of neurons in Wernicke’s area. Leuthardt had effectively read his subjects’ minds. «I would call it brain reading,» he says. To arrive at whole words, Leuthardt’s next step is to expand his library of sounds and to find out how the production of phonemes translates across different languages.

For now, the research is primarily aimed at improving the lives of people with locked-in syndrome, but the ability to explore the brain’s language centres could revolutionise other fields. The consequences of these findings could ripple out to more general audiences who might like to use extreme hands-free mobile communication technologies that can be manipulated by inner voice alone. For linguists, it could provide previously unobtainable insight into the neural origins and structures of language. Knowing what someone is thinking without needing words at all would be functionally indistinguishable from telepathy.


Decide whether the following statements agree with the writer’s opinion in Reading Passage 3

Select:

Yes: if the statement agrees with the claims of the writer

No: if the statement contradicts the claims of the writer

Not Given: if it is impossible to say what the writer thinks about this

pic5_IELTS|Upper-Int|Exam Part 1

The list of sentence endings

A. receive impulses from the motor cortex.

B. pass directly to the motor cortex.

C. are processed into language.

D. require a listener.

E. consist of decoded phonemes.

F. are largely non-verbal.

G. match the sound they make.


Choose the correct options to answer Questions 37-40



Read the task and answer the 5 questions you have chosen. Speak for no longer than 3 minutes

pic9_IELTS|Upper-Int|L29

Speaking Part 1

Do you work or study?

Depending on your answer, choose the appropriate card below (the first is for those who study, the second for those who work), then tick 5 questions (1-2 questions in each section) and record your answers.

1. Tell me about yourself.

🔹Why did you choose the subjects/course you are studying?

🔹What do you like about your university or college?

🔹How much time do you spend on campus a week?

🔹What would you like to change about your studies?

2. Let’s talk about healthy lifestyles now.

🔹How often do you find time to relax?

🔹What’s your ideal form of relaxation?

🔹What activities did you do as a child to stay healthy?

3. I’d like to talk about outer space now.

🔹What aspects of space and space travel did you study at school?

🔹Would you rather watch a documentary about space or a science fiction film? Why?

🔹Will you ever take a holiday in space? Why/Why not?


1. Tell me about yourself.

🔹Why did you choose your current job?

🔹What do you like about your work?

🔹How much time do you spend at work a week?

🔹What would you like to change about your job?

2. Let’s talk about healthy lifestyles now.

🔹How often do you find time to relax?

🔹What’s your ideal form of relaxation?

🔹What activities did you do as a child to stay healthy?

3. I’d like to talk about outer space now.

🔹What aspects of space and space travel did you study at school?

🔹Would you rather watch a documentary about space or a science fiction film? Why?

🔹Will you ever take a holiday in space? Why/Why not?


Allow your browser to access your microphone, press the «Record» button and record the speech you have prepared

Read the task and deliver a two-minute speech

Before you start talking, you can plan your answer for one minute. You can take notes in the text area below.

Besides answering the points of the Exam Task, you should also choose and answer one of the additional questions given below.


Speaking Part 2

Exam Task

Describe an area of your country which is known for its natural beauty.

You should say:

🔹where this area is,

🔹what people can see and do there,

🔹how you can get there,

and explain why this area is considered to be so beautiful.

Additional questions:

🔹How can children be encouraged to take an interest in areas of natural beauty?

🔹Is it ever appropriate to charge visitors to enter areas of natural beauty? When?


Allow your browser to access your microphone, press the «Record» button and record the speech you have prepared

Read the questions, choose 4 of them and answer them. Speak for about 3 minutes

pic3_IELTS|Upper-Int|L19

Speaking Part 3

🔹Why do countries value their beautiful landscapes and wildlife?

🔹What disadvantages does tourism bring to these places?

🔹How do adults and children differ in the way they experience places of natural beauty?

🔹What can individuals do to help protect areas of natural beauty?

🔹Why is it sometimes difficult for governments to make decisions about protecting these places?

🔹When are authorities justified in banning people from visiting areas of natural beauty?


Allow your browser to access your microphone, press the «Record» button and record the speech you have prepared
