Saturday, January 25, 2020

Novelty Preferences in Infants: Effects on Infant Cognition

Discuss the method of ‘familiarisation/novelty preference’ and consider its contribution to psychologists’ understanding of infant cognition.

One of the keystones in an infant’s development is the ability to group similar items and experiences together. On the surface this may seem a trivial skill, but it forms the basis for much of the infant’s cognitive development in the first months of life. Once similar things are identified and grouped, structure and order can form around them. This process is referred to as ‘categorisation’. In fact, the development of the process itself provides a useful insight into an infant’s developmental progression more generally.

One of the major tools psychologists have used to study this phenomenon is the ‘familiarisation/novelty preference’ technique. Fantz (1963) noted that infants show a strong tendency to pay attention to novel objects, compared with those they have previously encountered. If an infant is presented with an object for an extended period of time, the infant will gradually reduce the attention it bestows upon the object. The baby will begin to look away, until eventually it pays the object no attention at all. This process is known as familiarisation (or ‘habituation’). Subsequently, if the infant is presented with the same object alongside a new object (with which it has had no previous experience), vastly more attention will be paid to the novel item. This is called ‘novelty preference’. Presumably this pairing of phenomena (familiarisation and novelty preference) arises from a genetically hard-wired biological tendency which ensures an infant experiences as much of its environment as possible, in order to learn at an optimum rate.

Psychologists have produced a lab-based version of the ‘familiarisation/novelty preference’ phenomenon in order to examine infant cognitive development. The technique has two stages. In stage 1 an infant is shown a number of different objects belonging to the same category (e.g. Siamese cat, Persian cat, tabby cat). In stage 2 the infant is presented with a pair of novel stimuli: one belongs to the category the baby has just encountered (e.g. Manx cat), the other to an entirely new category (e.g. Labrador dog). The infant is then graded on the preference it shows for each stimulus. The infant normally shows a greater preference for the stimulus from the novel category. This is because it has formed a representation of the familiar category (i.e. cats) and become habituated to it, so further examples of that category hold less attention; when a new category (i.e. dogs) is encountered, more attention is due to this novel item since nothing like it has been seen before.

This procedure is used to examine many aspects of infant development which relate to categorisation, for example: how do infants form categories? How are these categories remembered? How are they organised? Also, since categorisation and language formation are so strongly linked, the ‘familiarisation/novelty preference’ technique is used to understand the development of language in infants.

In order to examine this process simply, some of the first studies of infant cognition used very basic stimuli. Younger and Gotlieb (1988; see also Bomba and Siqueland, 1983; Quinn, 1987) used simple dot patterns, known to be effective in examining adult categorisation.
Infants were familiarised with six pairs of distorted dot patterns, which had been derived from a single (undistorted) prototype (and hence were considered to belong to the same category). The infants were then shown a test pair which included the prototype of the exposed category and a prototype of an entirely different dot category. The amount of time the infants spent looking at the novel dot pattern was recorded. When the prototypes were very simple patterns, the infants (aged 3–7 months) spent a significantly larger amount of time observing the novel prototype than the familiar one. This indicated that they had all formed a representation of the dot category without ever seeing the pattern which defined the category (the prototype). As the prototype patterns became more complex, only older infants (5 months and above) showed this significant trend. So older infants appeared to be better at forming a prototype from the series of distorted examples, although all infants showed evidence of category formation.

Younger and Gotlieb (1988) went on to use this finding to examine how infants actually store their category representations. They hypothesised two possibilities for category storage: (1) all encountered exemplars are stored in memory and are available for comparison with new instances (‘exemplar memory’); (2) an average of all observed exemplars is stored as a prototype (‘prototype memory’). Initially it would seem plausible that prototype memory is the more likely, as it is the more efficient form of storage and retrieval; comparing a new example with all previous examples would be very time-consuming.

Once another cohort of Younger and Gotlieb’s (1988) infants had been exposed to the distorted dot-pattern exemplars (see above), they were shown the prototype paired with one of the previously seen distorted exemplars. If an infant had produced a prototype when it was exposed to the exemplars earlier (by averaging the features of the distorted patterns), then the prototype the infant formed should look much like the actual prototype. In this case the infant should perceive the distorted pattern as less familiar (and thus attend to it for more time) than the prototype. If the infant was in fact just remembering each and every pattern it was presented with, then the distorted exemplar should be more familiar (and attended to less) than the prototype, which had not been seen until this point. It seems that infants use both of these category storage mechanisms, depending on the exact parameters of the experiment (i.e. if there are a few simple exemplars it is more efficient to encode each one; when there are many complex exemplars a prototype is more appropriate). More importantly, infants use the same mechanism as adults performing the equivalent test. This not only indicates that infants are able to form prototypes (an essential mechanism for category formation), but that they are capable of adult-like cognitive tasks from a very early age (ED209 Course Team, 2008).

Experiments like those described above have been criticised for their lack of ecological validity. In order to address whether or not infants can actually categorise items that are relevant to their surroundings, a number of authors have used the ‘familiarisation/novelty preference’ technique with naturalistic stimuli. Quinn, Eimas and Rosenkrantz (1993; see also Eimas and Quinn, 1994; Quinn and Eimas, 1996) gave infants exposure to pictures of domestic cats from different breeds and in different orientations.
Subsequently, the infants spent less time viewing novel cat pictures (which they treated as familiar) than pictures of animals from other species (which belonged to novel categories). These experiments show that infant categorisation extends beyond artificial laboratory stimuli to items from the infant’s real environment. Moreover, they indicate that infants can produce categories that are both ecologically valid and useful, without the assistance of a vocabulary.

Knowing that similar things go together is the first stepping stone towards useful categorical knowledge. The next step that an infant makes is to organise its categories into hierarchical structures. This step brings the child closer to forming a strong basis for a lexical framework (i.e. towards speech). To illustrate: a Siamese cat belongs to the superordinate category of ‘cats’, which in turn belongs to ‘animals’. Construction of this categorical framework is commonly investigated using the ‘familiarisation/novelty preference’ technique.

Behl-Chadha (1996) set out to discover whether infants were truly able to form hierarchical structures. Infants aged between 3 and 4 months were familiarised with a set of twelve photos of chairs (which included subordinate categories such as desk chairs and rocking chairs). Following this, the infants were shown pictures of novel chairs along with other items of furniture. The infants paid more attention to the novel items than to the chair-related items. This standard ‘familiarisation/novelty preference’ effect showed that the babies had successfully formed the category ‘chair’. However, when the infants were familiarised with a set of ‘couch’ pictures, they subsequently treated new ‘couch’ pictures as familiar, but pictures of other chair types were treated as novel and attracted more attention (indicating the babies knew couches were a category of their own, whilst at the same time knowing that chairs were a category also). This experiment showed that infants are in fact able to ‘nest’ categorical information into a hierarchical structure, which is needed for the formation of a vocabulary.

Another aspect of categorical grouping that is a prerequisite of early speech formation is spatial relation. This form of categorisation is more abstract than the types summarised above, as it cannot rely on perceptual features alone. Quinn (1994; see also Quinn et al., 2003) showed that infants can categorise abstract spatial relations, grouping objects that are ‘above’ or ‘below’. If an infant was familiarised with stimuli that all shared the same spatial relation, it would subsequently show a preference for stimuli in another spatial relation.

These kinds of experiments show that infants produce seemingly complicated categorical information without the a priori powers of speech and vocabulary. Infants therefore have the cognitive ability to form many complex representations of their environment. In fact, many authors believe this forms the basis for communication and language development. Waxman and Markow (1995) suggest that language acquisition is promoted by the ability categorisation gives the infant to refer to objects. Indeed, the onset of speech and the so-called ‘vocabulary spurt’ have both been attributed to categorisation. Gopnik and Meltzoff (1992), for example, note that children who are better at categorisation on the ‘familiarisation/novelty preference’ test also use more words and names for items in their first months of speech production.
Goldfield and Reznick (1990) note that half of all early words spoken by infants are object names, further strengthening the link between object categorisation and cognitive development, and language in particular. The ‘familiarisation/novelty preference’ method is therefore key to understanding the building blocks of infant cognition and speech.

Bibliography

Bomba, P. C. and Siqueland, E. R. (1983) ‘The nature and structure of infant form categories’, Journal of Experimental Child Psychology, vol. 35, pp. 294–328.
ED209 Course Team (2008) Cognitive and Language Development in Children, Milton Keynes: The Open University.
Eimas, P. D. and Quinn, P. C. (1994) ‘Studies on the formation of perceptually based basic-level categories in young infants’, Child Development, vol. 65, pp. 903–17.
Fantz, R. L. (1963) ‘Pattern vision in newborn infants’, Science, vol. 140, pp. 296–7.
Gopnik, A. and Meltzoff, A. N. (1992) ‘Categorization and naming: basic-level sorting in eighteen-month-olds and its relation to language’, Child Development, vol. 63, pp. 1091–103.
Quinn, P. C. (1987) ‘The categorical representation of visual pattern information by young infants’, Cognition, vol. 27, pp. 145–79.
Quinn, P. C. (1994) ‘The categorization of above and below spatial relations by young infants’, Child Development, vol. 65, pp. 58–69.
Quinn, P. C. and Eimas, P. D. (1996) ‘Perceptual organization and categorization in young infants’, Advances in Infancy Research, vol. 10, pp. 1–36.
Quinn, P. C., Eimas, P. D. and Rosenkrantz, S. L. (1993) ‘Evidence for representations of perceptually similar natural categories by 3-month-old and 4-month-old infants’, Perception, vol. 22, pp. 463–75.
Quinn, P. C., Adams, A., Kennedy, E. et al. (2003) ‘Development of an abstract category representation for the spatial relation “between” in 6- to 10-month-old infants’, Developmental Psychology, vol. 39, pp. 151–63.
Waxman, S. R. and Markow, D. B. (1995) ‘Words as invitations to form categories: evidence from 12- to 13-month-old infants’, Cognitive Psychology, vol. 29, pp. 257–302.
Younger, B. A. and Gotlieb, S. (1988) ‘Development of categorization skills: changes in the nature or structure of infant form categories?’, Developmental Psychology, vol. 24, pp. 611–19.

Friday, January 17, 2020

Deontological Ethics and Immanuel Kant Essay

Describe Kant’s theory of Duty as the basis of morality (33 marks).

Immanuel Kant was a German philosopher who lived in the late 18th century and was arguably one of the greatest thinkers of all time. He came up with a guide to morals in direct opposition to teleological, or consequentialist, theories. Many people use his ethics as a guide to living a moral life, but what exactly is Kant’s ethics? How did he believe we should face moral problems, and how can we apply his theory in our everyday lives?

Instead of a situation-based theory, Kant’s is a deontological ethics. This is a very absolute and objective form of ethics, which has been worked out using a rational thinking process. Kant believed that an ethical theory should be universalisable to be morally correct. This means it must be able to be applied to everyone all over the world, regardless of situations or circumstances. Kant believed that for this to be possible it must contain something that is ‘unconditionally and universally good’. This must be something that is ‘intrinsically good’: good in itself, the highest good ‘without qualification’. The thing that determines the moral worth of our actions cannot be instrumentally good (something that only becomes good pending the results of the action), nor can it be something like happiness, which is capable of making a situation morally worse.

Kant believed that there is only one thing that is the right thing for us to do in any situation to make us morally correct. He said that ‘a morally good man is a man of good will’, and that it is ‘impossible to conceive anything in the world as good without qualification, except good will’. For something to be of good will, its goodness must not depend on what it effects or accomplishes. If it did, then it could not be considered to be of unconditional value and intrinsic goodness, for it would become a ‘means to an end, not an end in itself’. This leads us to conclude that the consequences of any moral action are irrelevant. Kant describes the most important thing as being ‘not what the act accomplishes but the motive behind the act’ (Palmer, Moral Problems).

However, we may ask what exactly the right motive is. Kant simply states that ‘a good will’s only motive is to act for the sake of duty’. For an act to be universally and intrinsically good in itself, it must not be done because of its consequences, nor from self-interest, fear or as a means to an end, but only because it is our sole duty to do it. We should always act for duty’s sake, simply because it is the right thing to do.

We need to be very clear as to what this specifically entails. Kant is saying that we cannot do a moral act out of self-interest. This is understandable, because if we are doing it merely because we get something good out of it, i.e. a reward or a good name, then we are not doing it because we simply know it is the right thing to do. However, we also need to be aware that this includes the idea that we cannot do a moral act because it comes naturally to us. We cannot do it because we derive pleasure or enjoyment from doing something we know is right, or because we will feel good about ourselves if we help other people. This is because we are then doing it indirectly for self-pleasure, and this again is wrong; it does not involve the presence of good will.
Even if duty does coincide with what we naturally do, this does not make the act intrinsically good, because we are doing it for another reason besides knowing it is our duty. The fact that we happen to be doing what duty prescribes is just luck, and the moment anything duty says we should do becomes something we no longer enjoy, we won’t do it. We cannot, for example, be honest only as long as it pleases us to be so. Kant therefore concludes that ‘this will fails to be good will, just as if they had acted from self-interest’.

So far Kant has told us that a morally good person is a ‘man of good will’, and that a man of good will is one who follows where his duty lies. This is done for the very reason that it is the right thing to do and we have a responsibility to do it. It does not come from self-interest, calculating consequences, looking at specific circumstances, or taking pleasure in doing something for someone else. However, we still need to know ‘where our duty lies’ and what exactly we are supposed to do to become a man of good will who does what duty prescribes. We can be sure, because this is a deontological argument, that we have an absolute principle to follow which does not look at the consequences of particular actions or changes in particular situations. It is absolute and definite, and we can be sure that there are no exceptions to the rule. We also know that it has to be universally applicable ‘to everyone irrespective of their situation’ (Palmer, Moral Problems). It must therefore rest on something that all humans have in common, so that we can all know where our duty lies in different situations, and Kant believed that this was reason, or rationality. He said that humans are rational beings: we are all capable of resolving problems using reason. We all have an innate intellectual power, which we are born with, that we can use to work out rationally where our duty lies.

Kant believed it was unacceptable to look at the consequences of a particular action and then decide whether we should do it, because there is not enough evidence for us to make a proper decision from. Rather, we need to look at the actual experience of moral obligation, which is the feeling of what we think we ‘ought to do’. Following what our duty prescribes involves the idea that what we feel we ‘ought’ to do is what is right. We all have a feeling of moral obligation; we all know the good and right thing to do, so therefore we should do it. Our duty therefore becomes to obey our rational thinking, which prescribes the morally correct thing we ought to do.

However, we still have not established what the ‘supreme principle of morality’ is. This one rule, which we must all follow by means of our rational thinking, is something Kant calls the categorical imperative. By ‘imperative’ we mean something that tells us what actions would be good, in the form of a command, usually using the words ‘I ought’. A categorical imperative, therefore, commands an act that is solely good in itself, or intrinsically good. The act is done because of the very ‘nature’ of the act itself and not to achieve something else by means of it. It is done only ‘for its own sake’ and is free from ulterior beneficial motives. Hypothetical imperatives, by contrast, are acts done from a desire to achieve something else: for example, if I exercise more I will become fitter. A hypothetical imperative tells us which acts are good as a means to something else.
Palmer uses the example of telling the truth to illustrate the difference between the two. A categorical imperative would be ‘tell the truth’, because truth-telling is good in itself and is always the right thing to do. The hypothetical imperative would be ‘if you want to be trusted, tell the truth’, because we are gaining something for ourselves by doing the right thing, i.e. being trusted.

Once we know the distinctive feature of the principle of morality, we can analyse it more deeply to know exactly what it is that defines a moral act as good. Kant said that a morally good act has intrinsic value: it is good and valuable in itself, its very nature making it valuable regardless of anything else. For example, Kant believed that humans are of intrinsic value and therefore should be treated as an ‘end in themselves’. The opposite of this is instrumental value, which is when something is good only because of what it can achieve and is therefore treated as a ‘means to an end’. Kant said this is not how we should treat other humans, i.e. we should not use them to gain something for ourselves. He is saying that all humans should be treated equally and the same; we should treat everyone as we would treat ourselves. So, for example, racism would always be wrong in the eyes of Kant. This links to the Christian idea of the Golden Rule, to ‘love thy neighbour as thyself’, which Jesus, the ultimate example of human goodness, instructed his people to follow.

The final and key feature that Kant placed emphasis on concerning the categorical imperative was an act’s ability to be universalised. A key quote he used was ‘I ought never to act except in such a way that I can also will that my maxim should become a universal law’. By this he is implying a method we can use to see exactly which laws are good because they have ‘moral worth’. Kant stated that if a law can be applied to everyone in the world without being contradicted, then it is good. For example, we can universalise the maxim ‘do not murder’ to all of society, regardless of any situation, without there being contradictions.

By contradiction, Kant means one of two things: a contradiction in the will, or a contradiction in nature. If we cannot universalise an act because of either one of these contradictions, then we must conclude that it is morally wrong. By a contradiction in the law of nature, Kant is referring to rules that cannot be applied because they are ‘straightforwardly self contradictory’ (Palmer, Moral Problems). The maxim or rule cannot be applied universally because it contradicts the laws of nature, meaning it is physically impossible to carry out. For example, the maxim ‘never speak until you are spoken to first’ is not possible to keep, because if everyone applied it then no one would talk at all: we would all be waiting to be spoken to. From this we can see that following this maxim would not be the good thing to do. A contradiction in the will is not when something contradicts itself, but rather when a maxim is one that the person involved ‘could not possibly want to see universalised’ (Palmer). We may find that if a maxim were applied universally, we would be in a situation where we would not want everyone to apply it, because it would help us if they didn’t. Take, for example, the maxim ‘do not give money to the poor’: we may find ourselves one day, through no fault of our own, poor and homeless, and then we would want people to give money to us to help us survive.
Kant gave one simple rule for testing universalisability: ‘Act only on a maxim through which you can at the same time will that it be a universal law’. With this he prescribed a formula which we can all follow to see if a maxim is universalisable. Before acting, we ask what rule we would be following if we carried out the act; this is the maxim. Then we ask ourselves whether it would be possible, and whether we would be willing, for it to be followed by everyone at all times in all places. If not, there is a contradiction either in the law of nature or in the will. Then, quite simply, if the maxim can be universalised, do it; if not, don’t.

In conclusion, we can see that to follow Kant’s deontological ethics we must ‘act solely in accordance with duty and for the sake of duty only’ (Palmer, Moral Problems). It has been a very popular theory, which many people follow, sometimes without being aware of it. However, we do need to ask whether it is of practical use in our lives today. Can we honestly say that it is useful, practical and realistic when making moral decisions? In my next section I shall look at these questions in a little more depth to see if we can logically come up with an answer.

Thursday, January 9, 2020

Social Work Origins and Values

Social work origins came from several contributions, including the Charity Organization Society, the Settlement House Movement, Mary Richmond, and the National Association of Social Workers (NASW) (Ambrosino et al., 2015). The Charity Organization Society (COS) was the scientific charity that introduced scientific philanthropy (Ambrosino et al., 2015). The COS was the first organization whose services relied on paid investigators who visited clients seeking assistance.

In the late 1800s, the Settlement House Movement turned its attention to the city and created settlement houses. Settlement house workers used social group work to help socialize new immigrants to the city, offering adult education for their urban neighbors and providing help and advice (Ambrosino et al., 2015). They focused on community problems together with the other residents of low-income urban neighborhoods, and many of them were social scientists who worked with university-based academic social scientists (Ambrosino et al., 2015). Jane Addams, whose settlement work began in 1889, is well known for beginning the social work profession in the United States (Ambrosino et al., 2015). One of the most important contributions to the origin of social work was Mary Richmond’s Social Diagnosis, in which Richmond presented her observations on the nature of social casework (Ambrosino et al., 2015). NASW seeks to promote quality in the practice of social work.

Social work has six underpinnings: values, ethics, a liberal arts base, knowledge that builds on the liberal arts base, practice skills, and planned change. Values are a set of beliefs that shape the way we view others and the world. Values are also the basis by which social workers live; they guide how social workers view their clients and the decisions they make (Ambrosino et al., 2015). Values create the awareness in social workers not to judge, condemn, or demean people with problems. Ethics are the set of rules by which a society, community, or organization functions; they are a product of values, relate to the moral principles of practice, and define what social workers should do in specific situations. Social workers are expected to follow the Code of Ethics (Ambrosino et al., 2015); those who decide not to will receive professional sanctions. The liberal arts base requires that social workers at all levels have a strong grounding in the liberal arts as they gain knowledge of human behavior, social welfare policy, research, and practice (Ambrosino et al., 2015). Knowledge that builds on the liberal arts base draws social work practice from theories of human behavior.
Social work students are expected to understand the life cycle, personality development, group and organizational dynamics, social justice and the effects of discrimination, social policy formulation, research methods, community environments, developmental processes, and social functioning (Ambrosino et al., 2015). Practice skills ensure that social workers are acquainted with techniques related to direct practice with individuals, groups, and communities. Planned change is a process based on professional social work intervention: assessment, knowledge of clients’ capacity for change, and focused intervention (Ambrosino et al., 2015).

Wednesday, January 1, 2020

Overview of HIV/AIDS

Discovery

Records show that Acquired Immunodeficiency Syndrome (AIDS) was first observed in the United States in the early 1980s among previously healthy young intravenous drug users and gay men, who came down with Pneumocystis jiroveci pneumonia (PCP), opportunistic cryptococcal or cytomegalovirus infections, and some rare malignancies, such as Kaposi’s sarcoma, that are known to occur in patients with compromised immune systems (1). The rising incidence of PCP infections and Kaposi’s sarcoma in an unusual population prompted the task force formed at the US Centers for Disease Control and Prevention (CDC) to monitor the outbreak (2-4). The term Acquired Immune Deficiency Syndrome (AIDS) was coined to name the fatal disease in 1982 (5). Françoise Barré-Sinoussi and Robert Gallo, in apparently independent research, both isolated the etiologic agent of AIDS and published their findings in the same issue of the journal Science in 1983 (6,7). While Barré-Sinoussi’s group believed lymphadenopathy-associated virus (LAV) was the causative agent of AIDS, Gallo’s group insisted on HTLV-III (human T-cell lymphotropic virus, type 3) (8). A consensus was reached in May 1986 by the International Committee on the Taxonomy of Viruses, and the name HIV (Human Immunodeficiency Virus) was adopted (9). Outbreaks of wasting and severe infections were also observed in rhesus macaques (Macaca mulatta) by researchers at the California Regional Primate Research Center.