Saturday, August 31, 2019

Mendocino Triple Junction

The deformational history of the region is examined in terms of movement on the numerous faults bounded by the structural topography and environment, and in terms of the fault kinematics associated with the recently developed San Andreas transform margin. Accretion and subduction continue beneath the continental margin north of the Mendocino fracture zone, linking with the San Andreas fault system to the south. The zone of ongoing deformation produced by the interactions among the Gorda, Pacific, and North American plates is collectively called the Mendocino Triple Junction. The Mendocino Triple Junction marks the site of an abrupt change in plate tectonic processes, from convergence in the Cascadia subduction zone (north of the junction) to translation in the San Andreas system (south of the junction). These abrupt changes in tectonic process have produced a dramatic change in the thermal regime of western North America and in the evolution of the lithosphere in the region (Dickinson and Snyder, 561; Lachenbruch and Sass, 195-197; Zandt and Furlong, 377-378). The Mendocino Triple Junction is an unstable triple junction located off the coast of northern California, where the Pacific Plate, the North American Plate, and the Gorda Plate intersect (Oppenheimer et al., 1666). The junction has a complex history of plate activity, with destructive earthquakes above magnitude 6.5 occurring over the past 100 years (Dengler et al., 48-50). The junction has migrated northward from its original position at the latitude of Los Angeles to Cape Mendocino over millions of years (Jachens and Griscom, 9380-9385).
One of its boundaries, the San Andreas Fault, trends northwest along the margin between North America and the Pacific. The Pacific Plate moves northward as the North American Plate moves toward and collides with it (Henstock, doi:10.1029/2001JB000902). The precise location of the San Andreas Fault is not well established because of the structural complexity of the area (Simila, 1402). Since 1800, records show that seven earthquakes of at least magnitude four have taken place (Kelsey and Carver, 4799-4811). Another boundary in the Mendocino system is the Cascadia subduction zone, located north of the Mendocino Triple Junction. Its roughly 15-degree subduction angle steepens as the depth of the subducting plate increases (Dengler et al., 45). Earthquakes in this zone recur every 300 to 560 years, depending on which segments of the zone rupture (Clarke and Carver, 190). The last plate in the Mendocino Triple Junction is the Gorda Plate. The eastward movement of the Gorda Plate caused a clockwise rotation of the Juan de Fuca Plate, of which the Gorda region was formerly a part. The Mendocino zone remained fixed while the Gorda Plate repositioned itself and interacted with the Pacific Plate, increasing its internal deformation over time (Stoddard, 11527-11529). The Mendocino Fault is the zone between the Pacific Plate to the south and the Gorda Plate to the north. The lateral movement of this fault has contributed to seismic activity at Cape Mendocino, extending along the edge of the Gorda Plate. The most recent large earthquake reported along the Mendocino Fault, in 1994, had a magnitude of 7.4 (Dengler et al., 45-50).
In this actively deforming area, imbricate thrusts and the axes of folds overlying blind thrusts in the offshore accretionary prism are oriented west-northwest and project southeastward to align with several moderately to steeply dipping faults onshore. In the onshore region, the rocks bounded by these faults correspond to increasingly distal parts of the uplifted accretionary margin from northeast to southwest. This faulting appears to root in the steeply dipping Pacific-North American plate boundary and to be related to the ongoing northward propagation of San Andreas motion. The area south of Cape Mendocino is thus closely associated with Pacific-North American faulting and the transformation of the plate boundary. The future of the Mendocino Triple Junction: the history of activity at the Mendocino Triple Junction can serve as a benchmark for anticipating the cause and intensity of future earthquakes, fault ruptures, and ground failure. Rupture along the plates is the immediate effect of activity at the Mendocino Triple Junction. Ground shaking can severely damage structures, and earthquake size is measured on a logarithmic magnitude scale. Ground shaking is responsible for most loss of life and destruction of property during an earthquake, so historical evaluation and earthquake assessment can help improve building and architectural designs and standards. The shaking intensity during an earthquake depends on the location of the affected area, the type of soil and the slope of the vicinity, and the distance from the earthquake's epicenter. Seismologists have observed that some regions tend to experience strong seismic shaking repeatedly, more than other zones, because the ground beneath these regions is comparatively softer than in other parts of the district.
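The logarithmic magnitude scale mentioned above has a concrete consequence for comparing the earthquakes discussed in this section: each whole step in magnitude corresponds to roughly a 32-fold jump in released energy. A minimal sketch, assuming the standard Gutenberg-Richter energy relation log10(E) = 1.5M + const (the function name is illustrative, not from any cited source):

```python
def energy_ratio(m1, m2):
    """Ratio of seismic energy released by a magnitude-m2 event
    relative to a magnitude-m1 event, from log10(E) = 1.5*M + const.
    The constant cancels, so only the magnitude difference matters."""
    return 10 ** (1.5 * (m2 - m1))

# One magnitude unit corresponds to about 31.6x the energy
print(round(energy_ratio(6.0, 7.0), 1))   # -> 31.6

# A M7.2 event releases roughly 250x the energy of a M5.6 event
print(round(energy_ratio(5.6, 7.2)))      # -> 251
```

This is why even sparse historical magnitude records are informative about hazard: a modest difference in magnitude implies a very large difference in the energy a structure must withstand.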
Soft soils amplify ground motion and intensify shear waves, creating more hazardous and damaging effects on any structure built on them. People living in areas that have experienced strong earthquakes are likely to experience strong earthquakes again, depending on the region's distance from the epicenter (source) of the shaking. Amplification occurs when a seismic wave moves through soft subsurface materials and produces relatively larger horizontal and vertical motion. In contrast, bedrock tends to dampen seismic waves and therefore reduce ground motion. About one-quarter of all the earthquake energy released in California during historic times has occurred along the Humboldt County coast. The size, location, and frequency of past earthquakes give an indication of what to expect in the future. Strong earthquakes with epicenters onshore have recurred about every 20 years. Since the 1870s, the largest historic earthquake in the area of the Mendocino Triple Junction reached a magnitude of 7.2, in 1923. The affected areas on the northern coast have been the focal point of Gorda Plate activity, where earthquakes recur about every two years. Communities along the coast from Cape Mendocino to Eureka have experienced more frequent ground shaking than the rest of Humboldt County. Earthquakes have rarely affected northern Humboldt County in recorded history; however, because the historic record is comparatively brief, regions not affected in the past may still be at risk. Large earthquakes have long been part of life on the north coast, so residents ought to take measures to prepare for any earthquakes to come. Recent earthquake activity includes several large events in the Cape Mendocino area.
Three powerful earthquakes hit the Cape Mendocino area in 1992, with magnitudes of 7.1, 6.6, and 6.7, respectively. A strong earthquake with a magnitude of 5.6 struck the north coast in the Cape Mendocino area in January 1997; it was located on the Mendocino fault extremely close to the Mendocino Triple Junction. The United States Geological Survey (USGS) set up new building codes in preparation for severe earthquakes in the Mendocino Triple Junction belt. "Building codes provide the first line of defense against future earthquake damage and help to ensure public safety," said the USGS. According to USGS records, historical accounts provided a "firm basis for revising building codes to more fully reflect the need for extra strength in structures built on soft ground." "Designing and building large structures is always a challenge, and that challenge is compounded when they are built in earthquake-prone areas. More than 60 deaths and about six billion dollars in property damage resulted from the Loma Prieta earthquake (caused by the San Andreas Fault). As earth scientists learn more about ground motion during earthquakes and structural engineers use this information to design stronger buildings, such loss of life and property can be reduced." Earthquake-resistant design and construction are essential to reducing earthquake losses. These code amendments are a major step toward better earthquake safety; the capacity of structures to withstand large earthquakes can be further improved with foundation bolts, shear walls, and additional strengthening strategies.

References

Clarke, S. H. Jr. and Carver, G. A., Late Holocene tectonics and paleoseismicity, southern Cascadia subduction zone, Science, 255, 188-192, 1992.

Dengler, L., G. Carver, and R. McPherson, Sources of north coast seismicity, Calif. Geol., 48, 43-53, 1992.
Dengler, L., Moley, K., McPherson, R., Pasyanos, M., Dewey, J., and Murray, M., The September 1, 1994 Mendocino fault earthquake, California Geology, v. 48, p. 43-53, 1995.

Dickinson, W. R. and W. S. Snyder, Geometry of triple junctions related to San Andreas transform, J. Geophys. Res., 84(B2), 561-572, 1979.

Henstock, T. J., and A. Levander, Structure and seismotectonics of the Mendocino Triple Junction, California, J. Geophys. Res., 108(B5), 2260, doi:10.1029/2001JB000902.

Jachens, R. C. and A. Griscom, Three-dimensional geometry of the Gorda plate beneath northern California, J. Geophys. Res., 88, 9375-9392, 1983.

Kelsey, H. M. and G. A. Carver, Late Neogene and Quaternary tectonics associated with northward growth of the San Andreas fault, northern California, J. Geophys. Res., 93, 4797-4819, 1988.

Lachenbruch, A. H. and Sass, J. H., Thermo-mechanical aspects of the San Andreas, in Proc. Conf. on the Tectonic Problems of the San Andreas Fault System, edited by R. Kovach and A. Nur, 192-205, Stanford University Press, Palo Alto, Calif., 1973.

Oppenheimer, D., G. Beroza, G. Carver, L. Dengler, J. Eaton, L. Gee, F. Gonzalez, A. Jayko, W. H. Li, M. Lisowski, M. Magee, G. Marshall, M. Murray, R. McPherson, B. Randall, G. R. Ammon, C. J., and Owens, T. J., Moment tensor estimation using regional seismograms from a Tibetan Plateau portable network deployment, Geophys. Res. Lett., 22, 1665-1668, 1995.

Simila, G. W., Peppin, W. A., and McEvilly, T. V., Seismotectonics of the Cape Mendocino, California, area, Geol. Soc. Am. Bull., 86, 1399-1406, 1976.

Stoddard, P. R., A kinematic model for the evolution of the Gorda plate, J. Geophys. Res., 92, 11524-11532, 1987.

Zandt, G. and K. Furlong, Evolution and thickness of the lithosphere beneath coastal California, Geology, v. 10, 376-3

Friday, August 30, 2019

Positive psychology Essay

Positive psychology is literally all about the positive energy within a person. It is an organized attempt to make the most of it. Psychologists who practice it believe it is preventative therapy rather than post-illness therapy. Most psychologists who believe in this approach see the glass as half full. It is optimism and happiness that can lead a person to live a better and fuller life. I believe that action must be taken before it is too late. The effort of professionals in trying to make the world a happy place, and to make a real go at it, is commendable. It is a very effective method of bringing out positive energy. In the fast-changing world of today, materialism and consumerism rule the roost, and it is precisely this that is making a lot of people unnecessarily unhappy. It is the competition and the need to strive for more. Positive psychology teaches one to be satisfied with what one has. Once a person is happy, he will emit happiness. Happiness, laughter, and smiles are all contagious. One smile goes a long way. Many a time it takes just a smile to light up a person's day. Small changes in one's lifestyle on a daily basis can go a long way in maintaining a healthy outlook on life. It is important to appreciate the small things in life. One needs to take time out to stop and stare. To be able to smell the flowers. To be able to see the ants in the garden. On a more personal level, being more organized can save a person from unnecessary frustration and anger while looking for something. Knowing where to find what one needs adds to one's confidence, and that's a small step towards having a good day. In the long term, it is important to remind oneself of the need for patience and tolerance. A cool mind can handle any problem more effectively and be affected by it much less. At the end of the day, positive psychology is all about the ripple effect one person can have on the community.
It is the chain reaction caused by the positive energy radiating from one person to the other. It is all about happiness, and sharing. It is truly about being human.

Thursday, August 29, 2019

Creating the Decision Environment Essay Example | Topics and Well Written Essays - 750 words - 2

Creating the Decision Environment - Essay Example Groupthink occurs when a group is highly cohesive and under pressure to reach a decision; it arises when the pressure toward unanimity becomes the dominant influence. This group pressure leads to carelessness and irrational thinking, since groups experiencing considerable groupthink fail to consider alternatives as they seek to maintain unanimity. Most decisions shaped by groupthink have a lower probability of achieving successful results. Groupthink has become a consequential phenomenon because the group's desire for consensus overrides people's common sense, along with their willingness to present alternatives, assess positions critically, and express views unpopular with the group. The group's drive toward cohesion works against an effective decision-making strategy and an appropriate style for solving problems as they arise. When pressures for unanimity seem overwhelming, members are less motivated to reasonably consider the alternative courses of action available to them. In fact, groupthink is not commonly recognized, as it is a behavior found in particular situations and across several types of groups in team settings. It is therefore important to look out for the key symptoms of groupthink, which include the following: after a number of successes, the group starts feeling that any decision it makes is the best and most correct one because of the absence of disagreement from any other source. The brainstorming technique has helped in allowing ideas to flow freely without disapproval.
Peer pressure expresses itself in an opposing opinion of the team members and questions the rationale behind their

Wednesday, August 28, 2019

Business in its cultural Environment Essay Example | Topics and Well Written Essays - 3000 words

Business in its cultural Environment - Essay Example The material and human resources at its command are considerable, and their various orientations and expertise are a veritable potential source of wealth. However, the same is also a source of difficulty for a company that has to cope with the imperfections of all human endeavor. Usually, when the size of the undertaking is bigger, the imperfections become magnified, as appears to be the case here. In this case, Airbus had difficulty integrating and coordinating among its multinational components. The fact that the various components of the Airbus A380 were manufactured in centers located in different countries already presented logistical difficulties. However, it was the cultural divide among the groups of workers in the collaborating countries that posed the more serious challenge. There appear to be two general levels of cross-cultural inconsistencies in the Airbus organization. Neither of the types of problems Airbus is encountering has any bearing on racial discrimination so much as on technical mismatch. For instance, the case cited the wiring problems that materialized due to the difference in the versions of the CAD software used by technical personnel in France and in Germany. Such a problem does not bear on deep-seated cultural traits, and may be addressed by adopting standard technical systems (not just measurements or verbal language) that are consistent throughout the organization. The case study shows that these are lapses caused by company attention being focused on power plays in internal company politics ("jostling for position"), and by human resources diverting its efforts to resolving legal issues rather than operational problems. Simply put, the discrepancies were the result of negligence. The second problem is more inherent to the way Airbus SAS is organized, and was highlighted by the failure of communication and coordination besetting production and design.
Because of inefficiencies in operations, the

Tuesday, August 27, 2019

Public Administration Assignment Example | Topics and Well Written Essays - 500 words

Public Administration - Assignment Example Wilson's theory of public administration was put forward by Woodrow Wilson, who is considered one of the pioneers of public administration. In his 1887 article titled "The Study of Administration," Wilson formally recognized public administration and the role it plays in society. In the article, he argued that undertaking a course in public administration is essential to discovering what a government can successfully and properly do, and how it can do those things efficiently and with minimal cost and energy. The main theme of Wilson's theory of public administration was that public administration should be founded on a science of management and on the separation of administration from traditional politics. He also advocated that public administration should involve a comparative analysis of private and political organizations. Besides, he advocated the need to enhance the effectiveness of the public service through the training of civil servants, as well as through merit-based assessment. In addition, he advocated improving the efficiency of public administration through the adoption of business-like attitudes and practices in daily operations. Like other theories, Wilson's theory of public administration has had key points of contention. The main point of contention has been the view that politics and administration should be separated. On the one hand, there are those who argue that effective and efficient public administration can only be achieved through a distinction between politics and administration. On the other hand, there are those who argue that politics and administration are inextricably linked and, therefore, cannot be separated.

Monday, August 26, 2019

Writing Style Research Paper Example | Topics and Well Written Essays - 500 words

Writing Style - Research Paper Example For me, the writing process involves several challenges. The first is organizing ideas. I was lucky to have a teacher in my primary years who emphasized the importance of organization in writing. That teacher taught me that in every composition I make, I should take the initial step of organizing my ideas. I can still remember the exercises we had on outlining, with the main topics and subtopics organized under Roman numerals and letters. This training has been very valuable to me because it makes writing easy. An organized essay is easier to read and understand. In addition, teachers appreciate an organized essay with the main ideas carefully laid out in separate paragraphs. As I moved along in school, I learned about the challenge of composing a thesis statement. I believe that training students to make plausible thesis statements should be an important aim of writing teachers. Teaching students to make thesis statements will enable them to achieve emphasis and organization. Identifying the thesis statement goes along with outlining. The thesis statement serves as the guide in expounding a topic and composing the paragraphs that make up the essay. In my experience, establishing the thesis statement allows me to expound on the topic without going beyond it. Nevertheless, I still have to improve the way I construct a thesis statement and introduce it in the first part of my composition. The next challenge is expounding ideas. Some people find it easy to expound ideas on any topic; maybe they are gifted with a lot of information to write about. In my case, I find expounding ideas easy when I am interested in the topic. Personal experience, home decorating, gardening, and entrepreneurship are some topics I find easier to write about, whereas sports, statistics, and environmental issues are some of my waterloos. Academic writing, unlike other forms of writing, also challenges me to use outside sources.

Sunday, August 25, 2019

Comparable Destinations, Different Paths Essay Example | Topics and Well Written Essays - 1500 words

Comparable Destinations, Different Paths - Essay Example The letters also explain why nobody else has been to Utopia. The reason is so ridiculous that it lends believability to the statement--someone had coughed and the exact longitude and latitude had not been heard, but Raphael was being sought to disclose the information. The first book tells of the traveler Raphael Hythlodaeus, to whom More is introduced in Antwerp, and also explores the subject of how best to counsel a prince, a popular topic at the time. "Plato doubtless did well foresee, unless kings themselves would apply their minds to the study of philosophy, that else they would never thoroughly allow the council of philosophers, being themselves before, even from their tender age, infected and corrupt with perverse and evil opinions" (More 8). More tries to persuade Raphael to enter the lucrative service of royalty, acting as an advisor to kings or lords. Raphael, however, seems to be saying that his views would not be listened to because such men are too corrupt to understand so pure a study as philosophy. Raphael speaks of Plato, whose thoughts and ideals he seems to emulate. The above statement screams of Platonic thought, suggesting that kings must be philosophers as well in order to rule fairly and wisely. More seems to consider it a philosopher's duty to ... While Raphael has chosen to embrace the idea of starting again, the character More desires to fix things as they are. "The difference is only a matter of one syllable," Raphael says as a way to illustrate the short distance from service to servitude (More 7). One syllable indeed. This simple sentence may offer a clue that can answer a question of much debate in regard to More's Utopia. The question is: as a reader, can one accept this imaginary society as a realistic blueprint for a working nation? On the other hand, should one assume that More had no intention of his book being taken seriously? One could arguably maintain either position.
More's Utopia most definitely works if viewed as a satire. Taking into account More's use of these puns, one can deduce that his intention in writing Utopia was to reveal more about the state of politics in 16th-century England than to offer a model for an ideal society. Such an argument can be supported by simply looking at the title of the book. The word "Utopia" has two possible roots, and depending on which root is chosen, the meaning of the word changes drastically. "Utopia" is a combination of the syllable eu, which can be translated as "good," and the word topos, which means "place." At the same time, if one assumes that the ou prefix is used, which has a negating tone and translates into "no," then suddenly More's "good place" becomes "no place." The difference is only a matter of one syllable. More's hints at absurdity extend beyond simply the use of a pun in the title and name of the island, and are found even in his description of the island of Utopia. According to the

Saturday, August 24, 2019

Care of Creation Essay Example | Topics and Well Written Essays - 1000 words

Care of Creation - Essay Example This applies to my experience of that hour spent in God's creation and my renewed appreciation for the Sabbath, an old practice that has gained a new modern dimension. This experience has given me a renewed vision of God, of the purpose of my life and of those around me, especially the people who depend on me for pastoral leadership, and of what I need to do to be more effective as a follower and pastor, as one who leads others to God's kingdom and who is led by the hand by the Divine Master. My first lesson is that of having gained a deeper understanding of the meaning of the Sabbath, a weekly reminder that God blessed and made holy this day of rest (Bass 78). Being the perfect teacher he is, God laid down in scripture that on the Sabbath day he rested, not so much to tell us that he got tired from all that creating, because God by definition does not get tired. Rather, God rested to show us how important it is to spend time, slow down, experience moments of tranquility and peace, and reflect on the important things in life. God in his divine wisdom foresaw how exciting the world would become, and how man and woman would be so caught up with the beauties and riches of this world that we would forget why we are alive and how we ever got to be here at all; in a word, to "remember who we are and what is important" (Bass 88). Just looking around me and reflecting on what I witnessed that Sabbath hour, I remembered those words about Jesus looking at the people around him, who went around like sheep without a shepherd, living lives devoid of meaning and purpose. We work so hard to earn money so we can enjoy life, but with the economic difficulties happening all around us, many realize that making both ends meet is not easy. Likewise, we experience that the world, limited and full of evil, offers only limited and imperfect happiness. The second series of lessons proceeds from this renewed understanding of the Sabbath.
Experiencing the Sabbath with a deeper, more personal and spiritual meaning made me realize that I too have been engrossed with the worldly dimension of life and even of prayer and worship. I have focused on the means and not the ends, on the form, not the substance, and on the body of faith, not on its soul. Like the religious leaders of whom Jesus warned the people to do what they preach but not as they do, I realized that pastors who should lead their flocks to God are focusing too much on "preaching and teaching" instead of "leading and living by doing". Sure, one needs to preach and teach, but the best way to learn is by doing, watching, and learning from the example of Jesus himself, who taught by word and example. Like most pastors, I need to hone and develop that dual vision of which Scharen and Volf (97) wrote: of seeing material things and their relation to God, as source, creator, joy, good, gift, talent, the one to which everything leads, and of seeing the invisible God in the visible things of the material universe. A more effective pastoral leadership must be characterized by a spiritual life grounded on God through prayer and the imitation of Jesus' life of sacrifice and love for all souls, even those

Research on Ford Manufacturing Paper Example | Topics and Well Written Essays - 2000 words - 1

On Ford Manufacturing - Research Paper Example Effective employee management is the most critical factor for an organization's success. This paper evaluates some findings at the multinational automobile giant, Ford Motor Company, in terms of employee performance and team structures. In the process, an attempt is made to understand the relation, if any, that exists between the two. A short study will be conducted based on the literature on Ford Motor Co., and a case will be built to identify the potential obstacles to better performance and employee motivation. Further, this paper tries to establish that conflicts among team members and between teams can affect performance at the individual, team, and organizational levels. These conflicts can also affect employee motivation and performance, which can be restored with effective leadership and appropriate organizational practices. 1.1 The Ford Motor Company: Ford Motor Company is an automobile producer, founded by Henry Ford in 1903, with a multinational presence. The automotive operations at Ford involve designing, developing, manufacturing, selling, and servicing cars, trucks, and other automotive parts. Ford Motor Co. revolutionized its operations and business by making 'quality' its main focus and adopting the Total Quality Management approach to improve quality as well as reduce costs and wastage. The company has a presence all over the world, with its main manufacturing units based in many parts of the US and in parts of Europe and the Asia Pacific (Ford Motor Co., n.d.). Ford Motor Co. is known for the concept of Fordism, which Siegel (1988) explains as the systematic control of all stages of accumulation, from producing raw materials through marketing (cited in Jessop & Sum, 2006, p. 59). This management concept yielded huge success for Ford during the early 20th century. With changing markets, capitalistic structures, and the advent of other mass production techniques, Fordism was transformed from an

Friday, August 23, 2019

Comparing the movie with the book(The Elegance of the Hedgehog) Essay

Comparing the movie with the book(The Elegance of the Hedgehog) - Essay Example The wide appreciation of the movie affirms that the novel has been well adapted to the screen and that the director used brilliant skills and techniques to present the cinematic version of the novel. The movie revolves around the tale of an intelligent girl with a distinct type of emotions and attitude. She uses her father's camera to record the private moments of her family members in order to show the reasons behind their different attitudes. She plans to end her life on her sixteenth birthday using her mother's sleeping pills. The girl's parents are so busy with their own lives and engagements that they rarely have quality time to spend with their children. The lack of parental attention instigates different feelings in the girl, and rather than living a normal life like other girls her age, she becomes a spy in her own home, recording different moments from the lives of the people there. Her life takes a turn when she begins to look into the life of her building's concierge, who is experiencing a pleasant change in her life after the entry of a new person. The main idea and theme of the movie have been taken from an international best-selling book. The book was praised by literary experts for its intelligent handling of an intricate subject, and the same kind of reception was given to the movie because the director used his full potential and cinematic proficiency to produce work at the same level as was presented in book form. It is widely believed that the movie does justice to the book because the main ideas of the book are not ruined in the movie; in fact, they are well presented and well explained with the support of proficient acting and direction. It is not always easy to make a movie from a book or novel because when people have already read the story

Thursday, August 22, 2019

Three Men in a Boat Essay Example for Free

Three Men in a Boat Essay British humour is often hard to grasp for Indians, with its references and rambling. But Jerome K. Jerome hits the nail on the head with 'Three Men in a Boat' and brings out British absurdities and traditions in a comic way. This book, though meant to be a travelogue of sorts, inexplicably ended up as one of the most loved funny stories of all time. And even after 100+ years since publication, the jokes remain fresh and witty! It outlines the story of three men, Jerome and his two friends Harris and George, and their dog, a fox terrier called Montmorency, on their holiday on the River Thames in England, as the three men suffer from some weird ailments. So they set themselves off on a trip to cure themselves: a boating trip. Apparently, that was quite common in those times, which is around 1889, when the book was published. The travelogue portion is quite apparent, as along the trip Jerome mentions all the villages and landmarks they pass, such as Hampton Court Palace, Hampton Church, Monkey Island, Magna Carta Island, Marlow etc., and a few titbits about the culture in those places. But this, though intended to be the main part of the book, was overshadowed by the comic instances that cropped up, and the book is now simply a humorous tale. Jerome often digresses, starts off random anecdotes and ponders upon life. From friends to friends-of-friends to historic characters, Jerome's detours truly know no bounds. He gives us brilliant character sketches not only of Harris and George, but of Montmorency also. From how his Aunt Podger used to take a week-long refuge at her mother's place when Uncle Podger donned the role of a handyman trying to fix "little" things in the house, to how the making of Irish stew from all the leftovers compelled Montmorency to add his bit by bringing a dead water-rat, each episode will leave you giggling in delight.
And the best part is that it describes the common frustrations of life that you and I go through. The downside of the book is that the return journey back up the Thames is wrapped up in just one chapter, which leaves you slightly hurt and bewildered at the sudden ending. But the wry, sarcastic humour and wit, with the misadventures, exaggerations, madcap twists and sheer silliness of this P. G. Wodehouse-y book, leave you pleasantly happy at the end. Of course, one can't help but read this book in a proper British accent, be it in one's head or out loud. And that would be perfectly fine.

Wednesday, August 21, 2019

Sources Of Stress Among Teachers Psychology Essay

Sources Of Stress Among Teachers Psychology Essay Stress is a psycho-physiological process which arises out of the interaction of the individual with the environment (Muhammad et al., 2010; Luthans, 1998) and which ends up in disturbances and manifestations depending on individual characteristics, such as health status, or psychological processes, such as attitude (Muhammad et al., 2010). As reported by Ingrid (1997), adapted from Hans Selye (1982), few people would be able to give a definition of stress, or even attempt a clear-cut one, due to the huge number of causes which can contribute to stress. Stress can be defined as a dynamic condition in which a person is confronted with an event, constraint or demand related to what he desires and for which the outcome is perceived to be both uncertain and important (De Cenzo, 1998), whereas Hans Selye simply said that stress is the nonspecific response of the body to any demand made upon it. It is to be noted that stress can be classified into eustress and distress, whereby eustress is a positive psychological response to a stressor and distress is the negative response (Geraldine, 2011). An example of eustress is an opportunity or a challenge which will lead to a promotion, and of distress, any tension, worry or frustration. This study will focus mainly on distress. Stress is experienced by nearly all people in their lives; however, occupational stress is more present in the daily life of the worker. As stated by Smith (2000), occupational stress is widespread and can be a major cause of ill health. 2.2 OCCUPATIONAL STRESS Jobs have always been a little stressful; however, over the last few years the workplace has become increasingly stressful. As per the report of the National Institute for Occupational Safety and Health, 75% of workers believe that today's workers experience more work stress than the prior generation.
In 1992, the United Nations in its report even qualified job stress as the 20th-century disease (Bob, Rollin, and Bruce, 1997). The World Health Organization (WHO) has characterised job-related stress in developing countries as a modern hazard in a traditional working environment, and it is not to be forgotten that Mauritius is itself a developing country. Due to important metamorphoses in company organization, changes in the nature of work, and technological advances, stress affects employees at all hierarchical levels. Work-related stress can be said to be experienced when the demands of the work environment exceed the employee's ability to cope with (or control) them. It is not a disease, but it can lead to problems of ill health in areas like mental and physical health. Occupational stress is a sign of a problem within the organization, not of individual weakness (European Agency, 2000a, 2002a). It is to be noted that in this study the terms work-related stress and occupational stress have been used interchangeably; however, they can be distinguished from each other (Health and Safety Executive, 2006): work-related stress includes cases where work may have aggravated the stress experienced, hence work may be a contributory factor but not necessarily the sole cause, whereas the term occupational stress refers to cases where work is the sole cause of the stress experienced and of the associated symptoms of ill health. The Health and Safety Executive defined work-related stress as the adverse reaction people have to increasing pressures or other types of demand placed on them at work. Such reactions arise when people have to cope with a great number of demands, trying to carry out their duties and responsibilities while concerned about their job, and have difficulty in doing so. On an individual level, it is often the fact of not being able to cope with the workload and environment, with an associated negative manifestation.
In comparison with other professions, teaching is considered a high-stress occupation (Lambros, 2006; Mc and Von, 2005). 2.3 MODELS OF OCCUPATIONAL STRESS A stress model helps us to better understand the various stages of stress, and therefore can help people to cope with it better. The main models of stress are discussed below: 2.3.1 HANS SELYE MODEL OF STRESS Selye (1977) developed the general adaptation syndrome model. The general adaptation syndrome can be defined as the sum of all the nonspecific, systemic reactions of the body which arise from continuous and prolonged exposure to stress. When a person shows symptoms of not being in good health, it may well be because he is stressed. Selye identified that stress accumulates in the body, and if the relevant stressor is not removed, the body will go through predictable phases, as shown in Figure 1. Hans Selye spoke of three phases: ALARM STAGE This is the stage in which the body recognizes that there is a danger and prepares to deal (or not to deal) with the threat, also known as the stressor; this situation is often called the fight-or-flight response. The stress hormone adrenaline is secreted and gives the person the energy to fight or flee the stressor. RESISTANCE STAGE In this stage the body tries to return to a state of psychological tranquillity by resisting the alarm, because the threat still exists. The body remains activated, though to a lesser extent than in the alarm stage, but sufficiently to cause an increase in the metabolic rate; consequently it may enter the third and final stage. EXHAUSTION STAGE Exhaustion takes place when one or more target organs show signs of dysfunction; this happens when the body can no longer meet the demands placed upon it and starts to dysfunction. This is the state in which a disease is diagnosed, which shows that the disease is the consequence of the related stressor.
This can result in serious consequences, even the death of the organ or of the organism as a whole. Diagram 1: Graphical evolution of the three stages of general adaptation syndrome. (Source: Carlson et al. (eds) (2007). Psychology: The Science of Behaviour (6th ed.). Boston, USA: Allyn and Bacon-Pearson.) The above theory has been a basis for later research work; however, there are some issues with it. It was carried out on rats, and extending it to humans may not be wise, considering that the emotional content of the stress perceived may alter the way a human reacts to stress. Also, as the name of the theory states, Selye claimed it to be general, irrespective of the nature of the stress; however, different types of stressors may act differently, and hence different responses may be observed. 2.3.2 THE PERSON-ENVIRONMENT FIT MODEL The person-environment fit model is one of the best known among the models; it lays emphasis on the interaction between the characteristics of the individual and those of the situation. One important aspect of this interaction is the extent to which the individual fits his working environment: according to the person-environment fit model, the extent to which the person will face occupational stress depends on how well he can fit the workplace and the demands of the work. Other factors can be motivation, ability and productivity. If the person-environment fit turns out to be wrong, it can cause serious problems in any work environment. Occupational stress and lack of productivity are natural consequences of a failure of a person to fit into their workplace environment.
This model can be highly related to occupational stress, as it is important for the person to feel his importance in the workplace and to fit the working environment so that the problem of occupational stress does not arise later on. This was confirmed by Edwards and Shipp (2007), whose study confirmed that a misfit between the person and the environment can lead to stress. Apart from the inability to meet the demands, the person-environment fit can also fail due to differing motives of the individual and the work respectively, which will likely lead to occupational stress. Person-environment fit theory argues that stress effects may arise when threatening job demands lead to disequilibrium in the interaction between an individual and the work environment. Although influential, the model is considered to have some drawbacks. There remains confusion over the notion of fit and its measurement (Edwards and Cooper, 1990), as well as difficulty in conceptualising the notion of fit (Julian and John, 2010). 2.3.3 THE KARASEK DEMAND-CONTROL MODEL The demand-control model was developed by Karasek in 1979. It focuses on job demands and control in a workplace situation in which the employee is faced with a heavy workload/demand and, at the same time, is given little control over the work. Hence any job which places a very high demand on the employee while giving very little or simply no control will end up with the employee facing stress, as confirmed by Doi (2005); this is also known as the strain hypothesis. Demands can take the form of various work pressures, such as a close deadline for submitting a particular piece of work, too many responsibilities to handle, conflicting roles at work, or not having the right to share views in decision-making, among others, while control can be seen in the employee's decision-making latitude and the way he wants to carry out his job. This model divides employees into four categories.
The first category, those who have high control over high demands, are known as active employees; the second category, those who have high control but low demands, as the low-strain category; the third category, those with low demands and low control, as the passive category; and finally those with low control and high demands, as the high-strain category (D. Overgaard et al., 2004). The most common criticism is that the demand-control model is too simple. Peter, Arnold B., Ad de (2001) and Johnson (1989) have argued that job control is not the only resource available for coping with job demands, and proposed that social support from colleagues or superiors may also function as a mediator of the relationship between job demands and stress reactions, which agrees with the study done by Cristina et al. (2012), who said that this model does not apply equally to all individuals, and that various factors, such as personality, can act as a moderator or a buffer. 2.4 TEACHERS' STRESS In the field of education, it is not easy to define teachers' stress; however, Kyriacou (2001) defined it as the experience by a teacher of unpleasant, negative emotions, such as anger, anxiety, tension, frustration or depression, resulting from some aspect of their work as a teacher. Since the late 1970s, much research work has been done to analyse teacher stress. The wealth of research published over recent years has shown that there is a big problem concerning teachers and that they do face stress. Compared to other professions, teaching is considered a high-stress occupation (McShane and Von Glinow, 2005), which results in higher stress levels among teachers, consequently leading to poor performance, staff health problems and higher job dissatisfaction (Williams and Gersh, 2004). Occupational stress has had a negative effect on the job performance of teachers in secondary schools, which in turn may result in poor teaching of the students (Mohammad Aklaq et al., 2010).
A survey carried out by Karl Peltzer et al. (2008) showed that teachers do have a high level of stress, and that job stress was associated with most stress-related illnesses such as hypertension, mental distress, and tobacco and alcohol abuse, which agrees with the study of Wang Pei and Zhang Guoli (2008), who deduced from their study that the negative effects of occupational stress on teachers' health are significant. Teachers' stress can have various consequences; therefore it is important to manage it, and one way of managing it is by knowing the sources which stress teachers most and dealing with them directly. 2.5 MANIFESTATIONS OF STRESS AMONG TEACHERS Signs and symptoms of stress among teachers can be seen as manifestations when they face a threshold of stress that they cannot control or deal with. These can differ individually, as some people may be more prone to certain symptoms than others. Signs and symptoms of stress, or those manifestations, need to be identified as soon as possible so that they do not end up in serious consequences or even fatalities. These can be categorized into three main areas: first of all, the mental symptoms, for example depression and anxiety; secondly, the physical symptoms, such as indigestion, palpitations and stomach cramps; and finally the behavioural symptoms, which may take the form of alcohol abuse, use of prescription drugs, etc. Despite being exposed to the same level of stress, secondary school teachers may react differently and have different symptoms in response to a stressor: some teachers may exhibit depression whereas others may have backache (Leung et al., 2009). Mental or emotional symptoms tend to be the most present and among the first reactions to stress from teachers.
A study carried out in the George region in South Africa showed that teachers' stress manifests itself mainly on a mental or emotional level, and also, to a much lesser extent, on a physical level (M.A.J. Olivier and D.J.L. Venter, 2003). In the study carried out by Kyriacou and Sutcliffe (2011), the most frequent symptoms of stress reported were tiredness and feeling frustrated. 2.6 SOURCES OF STRESS AMONG TEACHERS The sources or causes of stress are also known as stressors. Stressors vary in severity and duration; some situations may be stressful for everyone, but in other situations individuals may react differently. What is a stressor for one individual may not be a stressor for another. For some teachers, not being able to cope with classroom indiscipline may be a source of stress, whereas for others this may be easily tackled and not a source of stress; hence, despite every one of them being teachers, sources of stress may vary from person to person, from organization to organization, and from situation to situation. Factors such as the mental state of the individual, degree of stress coping, teaching experience and other demographic variables may play a vital role in deciding whether a specific problem is a cause of stress or not. The research conducted by Brown and Ralph (1992) identified the following most prevailing work-related factors leading to stress among teachers, which give us a better idea of and knowledge about teachers' stress: Students - an absence of discipline or disruptive students, students' motivation and their respective attitudes towards the class and their teachers, the size of the class and the teachers' ability to cope, and preparing the students for examination purposes.
Stressors such as disruptive students may endanger the mental health of teachers, as confirmed by Nurrul Izzah Abdul Samad et al. (2010) and the National Union of Teachers in the UK (2009). Changes - a lack of information and of the necessary resources to smooth changes or make them easier, as well as more changes than demanded by the teachers, which end up causing stress among them. This agrees with the study of Kyriacou (2001), whereby change itself is implicated in teachers' stress and could be a problem thereof. The management of the school - there is poor cooperation in the decision-making process, and no training is given to meet the job's new demands, which keep increasing day by day; an example can be new and latest technological demands, which may themselves be a potential source of stress for teachers (Olivier and Venter, 2003). Interpersonal relationships - poor social interactions among colleagues and lack of team spirit, which may result in interpersonal conflicts. Review studies have identified interpersonal conflicts as a major source of teachers' stress (Lambros Lazuras, 2006). Parents and the community - parents' pressure on teachers to make their children achieve good results and accommodate unrealistic expectations. A study done by G.M. Steyn and G.D. Kamper (2006) showed that parental pressure does cause educators to experience stress and that the community can pose a serious hindrance to learning. Another modern problem nowadays is a particular type of role ambiguity, as teachers often have to nurture, counsel, or act as mother and father to certain students due to family shift-work factors or simply divorce (Nhundu, T.J., 1999), which was confirmed by Joachim Stoeber and Dirk Rennert (2008). In short, the most common sources of stress for teachers mentioned above are poorly motivated pupils, ill discipline, organizational culture, poor working conditions and poor collegiality.
Jarvis (2002), in his critical review of more recent findings on teacher stress, focuses on three broad causative factors: (1) factors intrinsic to teaching (e.g., working conditions, work underload/overload, repetition and boredom); (2) cognitive factors which affect the teachers (e.g., time pressures, role conflict and role ambiguity); and (3) systemic factors operating at the institutional and managerial level (e.g., the appraisal system, non-managerial support, the reward system). Holmes (2005) stated that several scholars researching teachers' stress and stress management have categorized the situations that cause stress as follows: (1) stress resulting from anticipation: fear of what's to come, worry and anxiety; (2) stress as a response to a current situation: it's happening now, and you have to react; (3) stress from the past: it happened a while ago, but is still lingering in your mind and seemingly impossible to let go; (4) chronic stress: an ongoing situation, or a reaction to a specific event, the impact of which lingers. Career development can also be a major source of stress (G.M. Steyn and G.D. Kamper, 2006), which agrees with the study done by Ahlam B. El Shikieri and Hassan A. Musa (2012). These stressors can be classified into three main categories, namely job security, performance appraisal and professional training, whereby the threat of losing one's job is a potential source of stress (M.A.J. Olivier and Venter, 2003). Any possibility of demotion may also lead to stress (Rout and Rout, 2002). The performance appraisal system can also be a major source of stress for the individual, especially if the outcome may influence promotion or salary (Rout and Rout, 2002). Required training programmes are needed to meet the new demands and challenges of the education sector. 2.7 FIMIAN TEACHER STRESS INVENTORY In the present study, we have followed Fimian's Teacher Stress Inventory with minor adaptations to suit the local context in Mauritius.
This model explains teacher stress as a ten-factor theory, whereby five factors consist of sources of occupational stress and the other five of manifestations of stress. According to Fimian, when those stressors are present, teachers do experience stress, and it becomes evident in terms of psychological, behavioural and other types of symptoms. It should thus be possible to identify one array of events that act as sources of stress and another array that acts as manifestations of stress. Teachers' stress is related more to environmental events, and to the perception of these events, than to personal or professional variables such as teacher age, gender, education level and number of years of teaching. The factors described in the inventory are time management, work-related stressors, professional distress, discipline and motivation, professional investment, emotional manifestations, fatigue manifestations, cardiovascular manifestations, gastronomical manifestations and behavioural manifestations. A study carried out by M.A.J. Olivier (2003) using the Fimian Teacher Stress Inventory concluded that teachers' stress manifests itself mainly on an emotional level as well as on a physical level, and that the most significant source of stress among them was professional investment; however, the results cannot be generalized because of the limited scope of the investigation. In 2009, a study carried out by Victoria Sanderlin Hand in New Orleans among novice secondary school teachers, using the Fimian Teacher Stress Inventory, found that time management, along with discipline and motivation, were the two highest sources of stress for novice teachers; moreover, fatigue manifestations and emotional manifestations were the most apparent manifestations of stress. Novice secondary teachers may not have the experience to tackle stress, and the sources or manifestations might have been different had they not been novices.
Another study, carried out by Rubina Hanif, Sadaf Tariq and Masood Nadeem (2011) using the same inventory, concluded that teachers show the highest level of stress on work-related stressors, and that fatigue manifestations were the most common; however, this study was carried out in Islamabad, the capital of Pakistan, and the teachers of Islamabad may not be facing the same problems as teachers in small towns or cities, hence the sample is not well represented. 2.8 SUMMARY OF LITERATURE REVIEW

Tuesday, August 20, 2019

Difference Between Memory And Data Storage Computer Science Essay

Difference Between Memory And Data Storage Computer Science Essay The terms memory and data storage are often confused. Both memory and data storage are means by which a computer keeps data used to perform tasks, and both are measured in bytes. The term memory refers to the amount of Random Access Memory (RAM), but also includes Read-Only Memory (ROM), installed in the computer, whereas the term storage refers to the capacity of the computer's hard disk. Figure 12: Difference between Memory and Data Storage For example, in Figure 12, the file cabinet represents the computer's hard disk, which provides storage for all the files and information we need in our office. When we come in to work, we take the files we need out of storage and put them on our desk for easy access while we work on them. The desk is like memory in the computer: it holds the information and data we need to have handy while we're working. The main difference between memory and data storage is their function. Storage is used to hold all the computer's information. Data stored on the hard disk is permanent; it is not lost when the computer is turned off. When we delete a file, only the access to that file is removed, not the information itself. To permanently delete a file, the hard disk must be formatted or overwritten; even then, it is possible that an expert can still recover the information. Memory and data storage can also work together. When the computer does not have enough RAM to support its processes, it converts a portion of the hard disk into virtual memory. Virtual memory acts the same way RAM does; however, since it is part of the hard disk, using virtual memory slows the computer down. 2.2 Internal Memory and External Memory Computer internal memory is used to store data that is used by the system at startup and to run various types of programs such as the operating system.
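The desk-and-cabinet analogy above can be sketched in code. The following Python snippet is an illustrative toy, not a model of real hardware: the dict stands in for RAM (it vanishes when the process exits), while the file stands in for the hard disk (it would survive a restart). The file name and contents are invented for the example.

```python
import os
import tempfile

# "Memory" (the desk): a dict lives in RAM and is gone when the process exits.
desk = {"report.txt": "quarterly figures"}

# "Storage" (the file cabinet): a file on disk persists after power-off.
cabinet_path = os.path.join(tempfile.gettempdir(), "report.txt")
with open(cabinet_path, "w") as f:
    f.write(desk["report.txt"])

# Reading the file back is like fetching it from the cabinet onto the desk.
with open(cabinet_path) as f:
    retrieved = f.read()

print(retrieved)  # quarterly figures
os.remove(cabinet_path)  # tidy up the temporary file
```

Note that `os.remove` only unlinks the directory entry, which mirrors the point above: deleting a file removes access to it, not necessarily the bytes on the platter.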
Typically, internal memory is contained on small microchips that are attached or connected to the computer's motherboard. Computer memory can range from a couple of megabytes to several gigabytes. SRAM, DRAM, and ROM are examples of internal memory. External computer memory is technically any storage device that we can connect to our computer to record data. Flash drives, HDDs with a USB cable, and SD cards are examples of external memory. 2.3 Hierarchy of Storage Generally, the lower a storage technology is in the hierarchy, the lesser its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary, tertiary and off-line storage is also guided by cost per bit. Historically, memory has been called core, main memory, real storage or internal memory, while storage devices have been referred to as secondary storage, external memory or peripheral storage. 2.3.1 Primary Storage In computing, the term primary storage or primary memory is used for the physical systems which function at high speed, as distinguished from secondary storage. Primary storage, often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Main memory is directly or indirectly connected to the central processing unit via a memory bus, which is actually two buses: an address bus and a data bus. The CPU first sends a number through the address bus, called the memory address, that indicates the desired location of the data; it then reads or writes the data itself using the data bus. Additionally, a memory management unit (MMU) is a small device between the CPU and RAM that recalculates the actual memory address, for example to provide an abstraction of virtual memory, among other tasks.
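The address-bus/data-bus exchange described above can be mimicked with a toy model in Python, where a list stands in for RAM and an integer index stands in for the memory address. All names and sizes here are illustrative, not drawn from any real machine.

```python
# Toy model of primary storage: an address selects a cell (address bus),
# and a value moves into or out of that cell (data bus).
RAM_SIZE = 16
ram = [0] * RAM_SIZE  # each list slot models one addressable word

def write_word(address, value):
    # The address picks the cell; the value is what travels on the data bus.
    ram[address] = value

def read_word(address):
    return ram[address]

write_word(0x0A, 42)
print(read_word(0x0A))  # 42: the same address retrieves the stored word
```

An MMU could be modelled here as a function applied to `address` before indexing `ram`, translating a virtual address to a physical one.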
2.3.2 Secondary Storage Secondary storage, or secondary memory, consists of physical devices for program and data storage which are slow to access but offer higher memory capacity. It differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfers the desired data via an intermediate area in primary storage. Secondary storage does not lose the data when the device is powered down, because it is non-volatile. In modern computers, hard disk drives are usually used as secondary storage; they are typically about a million times slower than memory. Some other examples of secondary storage technologies are USB flash drives, floppy disks, magnetic tape, paper tape, punched cards, standalone RAM disks, and Iomega Zip drives. 2.3.3 Tertiary Storage Tertiary storage, or tertiary memory, provides a third level of storage. Typically it involves a robotic mechanism which mounts and dismounts removable mass-storage media into a storage device according to the system's demands; these data are often copied to secondary storage before use. It is much slower than secondary storage and is primarily useful for extraordinarily large data stores accessed without human operators. When a computer needs to read information from tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information, and then instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm returns the medium to its place in the library. 2.3.4 Off-line Storage Off-line storage is computer data storage on a medium or a device that is not under the control of a processing unit. The medium is recorded, usually on a secondary or tertiary storage device, and then physically removed or disconnected.
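The catalog-then-mount sequence for tertiary storage can be sketched as follows. This is a deliberately simplified toy: the dataset names and tape labels are invented, and the "robotic arm" is just a list recording which media have been placed in the drive.

```python
# Toy model of a tertiary-storage tape library.
catalog = {"climate-2018": "tape-041", "genome-07": "tape-112"}
mounted = []  # history of media the robot has placed in the drive

def fetch(dataset):
    medium = catalog[dataset]   # step 1: consult the catalog database
    mounted.append(medium)      # step 2: the robotic arm mounts that tape
    return f"reading {dataset} from {medium}"  # step 3: the drive reads it

print(fetch("genome-07"))  # reading genome-07 from tape-112
```

The essential point the model captures is the indirection: the computer never addresses the data directly, it first resolves a name to a physical medium via the catalog.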
It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction. Off-line storage is used to transfer information. Additionally, in case a disaster like a fire destroys the original data, a medium in a remote location will probably be unaffected, enabling disaster recovery. In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are most popular, and, to a much lesser extent, removable hard disk drives; in enterprise use, magnetic tape is predominant. Older examples are floppy disks, Zip disks, and punched cards. 2.4 Characteristics of Storage 2.4.1 Volatility Volatile memory is computer memory that requires power to maintain the stored information, whereas non-volatile memory can retain the stored information even when not powered. That is why the contents of RAM are erased when the power of the computer is turned off, but ROM holds its data indefinitely. The fastest memory technologies of today are volatile ones. Non-volatile memory is suitable for the long-term storage of information. 2.4.2 Mutability Read/write storage, or mutable storage, allows information to be overwritten at any time. A computer without some amount of read/write storage for primary storage purposes would be useless for many tasks. Modern computers typically use read/write storage for secondary storage as well. Read-only storage retains the information stored at the time of manufacture, or allows the information to be written only once at some point after manufacture; this is called immutable storage. Immutable storage is used for tertiary and off-line storage; examples include CD-ROM and CD-R. Slow-write, fast-read storage is read/write storage which allows information to be overwritten multiple times, but with the write operation being much slower than the read operation.
Examples include CD-RW and flash memory. 2.4.3 Accessibility In random access, any location in storage can be accessed at any moment in approximately the same amount of time. This characteristic is well suited to primary and secondary storage; most semiconductor memories and disk drives provide random access. In sequential access, pieces of information are accessed in serial order, one after the other; therefore the time to access a particular piece of information depends upon which piece of information was last accessed. This characteristic is typical of off-line storage. 2.4.4 Addressability In modern computers, location-addressable storage is usually limited to primary storage: each individually accessible unit of information in storage is selected by its numerical memory address. Secondary, tertiary and off-line storage use file-addressable systems: information is divided into files of variable length, and a particular file is selected with human-readable directory and file names. Content-addressable storage can be implemented using software or hardware, hardware being the faster but more expensive option; each individually accessible unit of information is selected on the basis of the contents stored there. 2.4.5 Capacity The total amount of stored information that a storage device or medium can hold is its raw capacity. Memory storage density is a measure of the quantity of information bits that can be stored on a given length of track, area of surface, or in a given volume of a computer storage medium. Generally, higher density is more desirable, for it allows greater volumes of data to be stored in the same physical space; density therefore has a direct relationship to the storage capacity of a given medium. 2.4.6 Performance Latency is a measure of the time delay experienced in a system, the precise definition of which depends on the system and the time being measured; here, it is the time it takes to access a particular location in storage.
The relevant unit of measurement is typically the nanosecond for primary storage, the millisecond for secondary storage, and the second for tertiary storage. It may make sense to separate read latency and write latency, and in the case of sequential access storage, minimum, maximum and average latency. In communication networks, such as Ethernet or packet radio, throughput or network throughput is the average rate of successful message delivery over a communication channel. In computer data storage, throughput is usually expressed in terms of megabytes per second, though bit rate may also be used. As with latency, read rate and write rate may need to be differentiated. Also, accessing media sequentially, as opposed to randomly, typically yields maximum throughput. 2.5 Fundamental Storage Technologies The most commonly used data storage technologies are semiconductor, magnetic, and optical, while paper still sees some limited usage. Some other fundamental storage technologies have also been used in the past or are proposed for development. 2.5.1 Semiconductor Semiconductor memory is an electronic data storage device, often used as computer memory, implemented on a semiconductor-based integrated circuit. A semiconductor memory chip may contain millions of tiny transistors or capacitors. It is made in many different types and technologies. Semiconductor memory has the property of random access, which means that it takes the same amount of time to access any memory location. Semiconductor memory also has much faster access times than other types of data storage. A byte of data can be written to or read from semiconductor memory within a few nanoseconds, while the access time for rotating storage such as hard disks is in the range of milliseconds. For these reasons it is used for main computer memory or primary storage, to hold data the computer is currently working on, among other uses.
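To make the latency and throughput figures above concrete, here is a small Python sketch of the access-time arithmetic; the device numbers are illustrative assumptions, not measurements. It shows why streaming one large block sequentially beats many small random reads: each random read pays the latency again.

```python
def transfer_time(size_mb, latency_s, throughput_mb_s):
    """Total time for one read: access latency plus streaming time."""
    return latency_s + size_mb / throughput_mb_s

# Assumed figures for a hard disk: 5 ms latency, 150 MB/s throughput.
one_big_read = transfer_time(100, 0.005, 150)
# The same 100 MB as 1000 random 0.1 MB reads pays the latency 1000 times.
many_small_reads = 1000 * transfer_time(0.1, 0.005, 150)

print(f"sequential: {one_big_read:.2f} s")    # ~0.67 s
print(f"random:     {many_small_reads:.2f} s")  # ~5.67 s
```

The same formula applies to any tier of the hierarchy; only the latency and throughput constants change by orders of magnitude.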
2.5.2 Magnetic Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store information. Magnetic storage is non-volatile. The information is accessed using one or more read/write heads, which may contain one or more recording transducers. A read/write head covers only part of the surface, so the head or medium or both must be moved relative to one another in order to access data. 2.5.3 Optical Optical storage is a term from engineering referring to the storage of data on an optically readable medium. Data is recorded by making marks in a pattern that can be read back with the aid of light, usually a beam of laser light precisely focused on a spinning disc. An older example, one that does not require the use of computers, is microform. 2.5.4 Paper Paper data storage refers to the use of paper as a data storage device. This includes writing, illustrating, and the use of data that can be interpreted by a machine or is the result of the functioning of a machine. Paper data storage, typically in the form of paper tape or punched cards, has long been used to store information for automatic processing, particularly before general-purpose computers existed. Information was recorded by punching holes into the paper or cardboard medium and was read mechanically to determine whether a particular location on the medium was solid or contained a hole.

Apr 14

Computer Organization and Architecture

Computer Memory Chapter 1: Introduction to Computer Memory Figure 1: An Imaginary Computer Memory To know the details of computer memory, we should first ask what memory is, and then what computer memory is. As we know, memory is the power to remember things. In psychology, memory is the process by which information is encoded, stored, and retrieved.
But in computing, memory refers to the physical devices used to store programs or data on a temporary or permanent basis for use in a computer or other digital electronic device. Computer data storage is often called storage or memory. It is a core function and fundamental component of computers. A computer's memory can be seen as a list of cells into which numbers can be placed or read. Each cell has a numbered address and can store a single number. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits. A bit is the basic unit of information in computing and telecommunications. A bit can have only two values, either 1 or 0. Eight bits form a byte. Each byte is able to represent 256 different numbers, either from 0 to 255 or from −128 to +127. To store larger numbers, several consecutive bytes (typically two, four or eight) may be used. When negative numbers are required, they are usually stored in two's complement notation, which is a mathematical operation on binary numbers. A computer can store any kind of information in memory if it can be represented numerically. The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. A computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. Since the CPU does not differentiate between different types of data, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. Modern computers have billions or even trillions of bytes of memory. In computer architecture, the CPU contains a special set of memory cells called registers. A processor register can be read and written to much faster than the main memory area. Registers are used for the most frequently needed data items, to avoid having to access main memory every time data is needed.
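The byte ranges just described (0 to 255 unsigned, −128 to +127 in two's complement) can be checked with a short Python sketch; the helper name `twos_complement` is our own, not a standard library function:

```python
def twos_complement(value, bits=8):
    """Return the two's-complement bit pattern of a signed integer."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    if not lo <= value <= hi:
        raise ValueError(f"{value} does not fit in {bits} signed bits")
    # Masking with 2**bits - 1 wraps negatives into the unsigned range.
    return format(value & ((1 << bits) - 1), f"0{bits}b")

print(twos_complement(5))     # 00000101
print(twos_complement(-1))    # 11111111
print(twos_complement(-128))  # 10000000
```

Note how −1 becomes all ones: two's complement lets the hardware add signed and unsigned numbers with the same adder circuit.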
1.1 First Computer Memory The first computers were often very large, usually as big as a desk or even a room, and had minimal processing capability. These early computers were built to work on specific problems or solve certain types of equations and not much more. The first computers used punch cards for input and had very limited memory for processing. The average memory in the first computers was between three and forty-five kilobits. Memory was used to store data in the processing of linear equations, and the results were then printed in binary. When compared to the computers available today, the first computers do not seem very technologically advanced, but at the time they were created these first computers were engineering masterpieces. Many of the concepts created when building early computers are still in use in modern computing technology. The Atanasoff-Berry Computer (ABC) is credited with being the very first computer. The ABC had a whopping 3000 bits of memory, which allowed it to process sixty items at one time. Other early computers had more memory. Some had as much as forty-five kilobits, which allowed them to process several more words of data in a shorter amount of time, increasing the overall processing speed of the system. Without the first computers and their ability to make simple calculations, the technology we use today might not be as advanced as it has become. 1.2 History of Computer Memory In the 1940s, memory technology mostly permitted a capacity of a few bytes. The first electronic programmable digital computer, the ENIAC (Electronic Numerical Integrator and Computer), used thousands of octal-base radio vacuum tubes. In electronics, a vacuum tube is a device controlling electric current through a vacuum in a sealed container. The ENIAC could perform simple calculations involving 20 numbers of ten decimal digits, which were held in the vacuum tube accumulators. The next significant advance in computer memory was delay-line memory.
It was a form of computer memory used on some of the earliest digital computers. Like many modern forms of electronic computer memory, delay-line memory was a refreshable memory, but as opposed to modern random-access memory, delay-line memory was sequential-access. Delay lines could store bits of information within a quartz crystal and transfer them through sound waves propagating through mercury. Delay-line memory would be limited to a capacity of up to a few hundred thousand bits to remain efficient. In 1946 two alternatives to the delay line appeared, the Williams tube and the Selectron tube, both using electron beams in glass tubes for storage. The Williams tube would prove more capacious than the Selectron tube, because the Selectron was limited to 256 bits while the Williams tube could store thousands. It was also less expensive than the Selectron tube. In search of non-volatile memory, Jay Forrester, Jan A. Rajchman and An Wang developed magnetic-core memory in the late 1940s. It allowed recall of memory after power loss. Magnetic-core memory would become the dominant form of memory until the development of transistor-based memory in the late 1960s. 1.2.1 Historical Development of Computer Memory Computer memory is much more than DRAM or Flash. It has come a long way from the origins of today's omnipresent memory technologies. If we go more than 160 years back in time and revisit the milestones of computer memory technology, we find products we may have never heard of. 1.2.1.1 Punch Cards Figure 2: Punch Cards A punched card is a piece of stiff paper that contains digital information represented by the presence or absence of holes in predefined positions. Early digital computers used punched cards as the primary medium for input of both computer programs and data. In 1837 Charles Babbage first proposed the Analytical Engine, which was the first computer to use punch cards as memory and as a way to program the computer.
Punch cards, also known as Hollerith cards and IBM cards, are paper cards containing several punched holes, originally punched by hand and later by machines, that represent data. Hollerith's punch cards used in the 1890 census had round holes, 12 rows and 24 columns. The IBM punched card, designed in 1928, had rectangular holes, 80 columns with 12 punch locations each, and one character to each column. So 80 x 12 = 960, which gives us a maximum capacity of 960 bits, or 120 bytes of data. The use of punch cards predates computers. They were used as early as 1725 in the textile industry for controlling mechanized textile looms. From the 1900s into the 1950s, punched cards were the primary medium for data entry, data storage, and processing in institutional computing. As of 2012, some voting machines still utilize punched cards to input data. During the 1960s, the punched card was gradually replaced by magnetic tape. 1.2.1.2 Drum Memory Figure 3: Drum Memory Drum memory is an obsolete magnetic data storage device. A drum is a large metal cylinder that is coated on the outside surface with a ferromagnetic recording material. It could be considered the precursor to the hard disk platter, but in the form of a drum rather than a flat disk. In most cases a row of fixed read-write heads runs along the long axis of the drum, one for each track. The drums of the Atanasoff-Berry Computer stored information using regenerative capacitor memory. Regenerative capacitor memory is a type of computer memory that uses the electrical property of capacitance to store the bits of data. A difference between most drums and a modern hard disk drive is that on a drum there was a track per head, so the heads do not have to move to the track to access data. Head-per-track disks were used mostly for paging.
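The punch-card capacity arithmetic above (80 columns times 12 punch positions) is simple enough to verify in a couple of lines of Python:

```python
# IBM 80-column card: 80 columns x 12 punch positions, one bit each.
columns, punch_rows = 80, 12
capacity_bits = columns * punch_rows
capacity_bytes = capacity_bits // 8
print(capacity_bits, "bits =", capacity_bytes, "bytes")  # 960 bits = 120 bytes
```

In character terms the card held far less than 120 bytes of arbitrary data, since each 12-position column encoded just one character from a restricted set.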
Particularly while drums were used as main working memory, programmers often took to positioning code onto the drum in such a way as to reduce the amount of time needed to find the next instruction. In 1932 the Austrian IT engineer Gustav Tauschek invented the first widely used computer memory, called drum memory. In 1942 John Atanasoff successfully tested the Atanasoff-Berry Computer (ABC), which was the first computer to use regenerative capacitor drum memory. Drum memory was widely used in the 1950s and 60s as the main working memory of computers. Tauschek's original drum memory had a capacity of about 500,000 bits, or 62.5 kilobytes. One of the early mass-produced computers, the IBM 650, had about 8.5 kilobytes of drum memory, which in a later model was doubled to about 17 kilobytes. Some drum memories were also used as secondary storage. Drums were later replaced as the main working memory by memory such as core memory. 1.2.1.3 Williams Tube Figure 4: Williams Tube Freddie Williams applied for a patent on his cathode-ray tube (CRT) storing device in 1946. The device later became known as the Williams tube, or Williams-Kilburn tube. It was used as a computer memory to electronically store binary data. It was the first random-access digital storage device. The Williams tube depends on an effect called secondary emission. When a dot is drawn on a cathode ray tube, the area of the dot becomes slightly positively charged and the area immediately around it becomes slightly negatively charged, creating a charge well. The charge well remains on the surface of the tube for a fraction of a second, allowing the device to act as a computer memory. The charge well's lifetime depends on the electrical resistance of the inside of the tube. By drawing a second dot immediately next to the first one, the dot can be erased. Information is read from the tube by means of a metal pickup plate that covers the face of the tube.
Each time a dot is created or erased, the change in electrical charge induces a voltage pulse in the pickup plate. There is no practical restriction on the order of positions, hence the "random-access" nature of the lookup. Reading a memory location creates a new charge well, destroying the original contents of that location, and so any read has to be followed by a write to reinstate the original data. Since the charge gradually leaked away, it was necessary to scan the tube periodically and rewrite every dot. Some Williams tubes were made from radar-type cathode ray tubes with a phosphor coating that made the data visible. Each Williams tube could store about 512-1024 bits of data. 1.2.1.4 Selectron Tube Figure 5: Selectron Tube Between 1946 and 1953, Jan Rajchman worked on developing the Selectron tube. The original 4096-bit Selectron was a large, 5 inch by 3 inch vacuum tube with a cathode running up the middle, surrounded by two separate sets of wires forming a cylindrical grid, a dielectric material outside of the grid, and finally a cylinder of metal conductor outside the dielectric, called the signal plate. The smaller capacity 256-bit system was constructed similarly, but built in a planar fashion rather than cylindrical, resulting in an even larger vacuum tube. The device used an indirectly heated cathode running up the middle, surrounded by two separate sets of wires, and its storage capacity was reduced from 4096 bits in the original design to 256 bits in the proposed production device. The Williams tube was an example of a general class of cathode ray tube (CRT) devices known as storage tubes. The primary function of a conventional CRT is to display an image by lighting phosphor using a beam of electrons fired at it from an electron gun at the back of the tube. Like the Williams-Kilburn tube, the Selectron was also a random-access storage device.
Because of the popularity of magnetic-core memory at the time, the Selectron tube was never put into mass production. 1.2.1.5 Magnetic-core Memory Figure 6: Magnetic-core Memory The second major milestone in modern computer memory technology was magnetic-core memory, which was widely adopted. Core memory, or magnetic-core memory, became a widespread form of random-access memory, relying on an array of magnetized rings; it was invented in 1947 and developed up until the mid-1970s. It is non-volatile and will not lose its contents when the power is removed. The term core comes from conventional transformers, whose windings surround a magnetic core. The basic principle of core memory was using a core, a ring of ferrite, that could be magnetized in one of two directions. As a result, the memory was able to store digital information, either a 1 or a 0. In core memory the wires pass once through any given core; they are single-turn devices. The core can take two states, encoding one bit, which can be read when selected by a sense wire. When the core is read, it is reset to a zero, which is known as destructive readout. Circuits in the computer memory system then restore the information in an immediate re-write cycle. Magnetic-core memory was initially very expensive to fabricate, but prices dropped as the market developed. It was the standard form of memory system until displaced by solid-state memory in integrated circuits, starting in the early 1970s. 1.2.1.6 Random-access Memory Figure 7: Random-access Memory Random-access memory (RAM) is a form of computer data storage. A random-access device allows stored data to be accessed directly in any random order. Today, random-access memory takes the form of integrated circuits, which are sets of electronic circuits on one small plate or chip of semiconductor material, normally silicon.
One distinguishing characteristic of RAM is that it is possible both to read data from the memory and to write new data into the memory easily and rapidly. Both reading and writing are accomplished through the use of electrical signals. The other distinguishing characteristic of RAM is that it is volatile. A RAM must be provided with a constant power supply; if the power is interrupted, the data are lost. Thus, RAM can be used only as temporary storage. The three main forms of modern RAM are static RAM (SRAM), dynamic RAM (DRAM) and phase-change memory (PRAM). In SRAM, a bit of data is stored using the state of a flip-flop. This form of RAM is more expensive to produce but is generally faster and requires less power than DRAM. In modern computers, it is often used as cache memory for the CPU. DRAM stores a bit of data using a transistor and capacitor pair, which together comprise a memory cell. The capacitor holds a high or low (0 or 1) charge and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. Phase-change memory, also known as PRAM, is a type of non-volatile random-access memory. PRAM can offer much higher performance in applications where writing quickly is important, both because the memory element can be switched more quickly and also because single bits may be changed to either 1 or 0 without needing to first erase an entire block of cells. PRAM's high performance, thousands of times faster than conventional hard drives, makes it particularly interesting in non-volatile memory roles that are currently performance-limited by memory access timing. ECC memory, which can be either SRAM or DRAM, includes special circuitry to detect or correct random faults or memory errors in the stored data, using parity bits or an error correction code.
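A minimal Python sketch of the even-parity scheme that such parity bits implement, assuming 8-bit data words (the helper names are our own, for illustration):

```python
def parity_bit(word):
    """Even parity: the extra bit makes the total count of 1s even."""
    return bin(word).count("1") % 2

def is_valid(word, stored_parity):
    """True if the word plus its parity bit still has an even count of 1s."""
    return (bin(word).count("1") + stored_parity) % 2 == 0

word = 0b11010010                # four 1-bits, so the parity bit is 0
p = parity_bit(word)
assert is_valid(word, p)         # a clean word passes the check
flipped = word ^ 0b00001000      # a single bit flip...
assert not is_valid(flipped, p)  # ...is detected by the parity check
```

A single parity bit detects any odd number of flipped bits but cannot locate or correct them, and a double flip passes unnoticed; that is why ECC memory adds the stronger error-correcting codes mentioned above.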
A parity bit, or check bit, is a bit added to the end of a string of binary code that indicates whether the number of bits in the string with the value one is even or odd. Parity bits are used as the simplest form of error-detecting code. In information theory and coding theory, with applications in computer science and telecommunication, error detection and correction, or error control, are techniques that enable reliable delivery of digital data over unreliable communication channels. Many computer systems have a memory hierarchy consisting of CPU registers, on-die SRAM caches, external caches, DRAM, paging systems and virtual memory or swap space on a hard drive. This entire pool of memory may be referred to as RAM by many developers. 1.2.1.7 Read-only Memory Figure 8: Read-only Memory Read-only memory (ROM) is a class of storage medium used in computers. Data stored in ROM cannot be modified, or can be modified only slowly or with difficulty. It is really only suitable for storing data which is not expected to need modification for the life of the device. When only a small number of ROMs with particular memory content is needed, a less expensive alternative is the programmable ROM (PROM). Like the ROM, the PROM is non-volatile and may be written into only once. For the PROM, the writing process is performed electrically and may be performed by a supplier or customer at a time later than the original chip fabrication. Special equipment is required for the writing or programming process. Another variation on read-only memory is read-mostly memory, which is useful for applications in which read operations are far more frequent than write operations but for which non-volatile storage is required. There are three common forms of read-mostly memory: EPROM, EEPROM and flash memory. The erasable programmable read-only memory (EPROM) is read and written electrically, as with PROM.
However, before a write operation, all the storage cells must be erased to the same initial state by exposure of the packaged chip to ultraviolet radiation. Erasure is performed by shining an intense ultraviolet light through a window that is designed into the memory chip. This erasure process can be performed repeatedly. It has the advantage of multiple-update capability. A more attractive form of read-mostly memory is electrically erasable programmable read-only memory (EEPROM). This is a read-mostly memory that can be written into at any time without erasing prior contents; only the byte or bytes addressed are updated. The write operation takes considerably longer than the read operation, on the order of several hundred microseconds per byte. The EEPROM combines the advantage of non-volatility with the flexibility of being updatable in place, using ordinary bus control, address, and data lines. It is suppor

Monday, August 19, 2019

Lowering the Drinking Age to 18 :: Argumentative Persuasive Essays

There has been an ongoing controversy in the United States on whether the drinking age should be lowered to eighteen, like most of the world, or if it should stay at twenty-one. Underage drinking has been a major controversial issue for years, yet why is it not under control? Teenagers are continuing to buy alcohol with fake identification cards, drink, get into bars, and drink illegally. As a teen I have proof that these things are going on not only in college but in high school as well. There are a lot of factors that come together as to why the drinking age should be lowered to eighteen; the most obvious reason is that too many people are drinking before they are twenty-one. Liquor stores, bars, and clubs all want to make money, and if they can get away with selling to underage teens then they will. A study done by the Academic Search Premier agrees that, "By now it is obvious that the law has not succeeded in preventing the under-21 group from drinking" (Michael Smith 1). There would be a lot of benefits to having the drinking age changed to eighteen. The amount of binge drinking would lessen, and the urge to drink would also decrease. According to Smith, "Reports of binge drinking come from all types of campuses across the country. In 1992, researchers reported that more college students were drinking to get drunk than their counterparts a decade earlier, and one recent study reported an increase, just since 1994, in the number of students who drink deliberately to get drunk" (Smith 1). I interviewed my friend Shelly Mitchell, who recently turned twenty-one, and asked her how she felt about finally being legal to drink. She said, "It is not as exciting to drink anymore. I mean, I still like to go out with my friends to bars, but the fun is all over; in high school and college it was so exciting trying to get alcohol by using a fake ID." All of these factors could be changed by lowering the drinking age to eighteen.
In a study done by the Harvard School of Public Health, binge drinking is defined as five drinks in a row for boys and four drinks for girls. When they did a survey, they found that 44% of the students attending Harvard binge drink (Jeffrey Kluger 1).

Sunday, August 18, 2019

Confucianism :: Papers China Religion

Confucianism Confucianism is the major system of thought in China. It was developed from the teachings of Confucius and his disciples, and concerned with the principles of good conduct, practical wisdom, and proper social relationships. Confucianism has influenced the Chinese attitude toward life, set the patterns of living and standards of social value, and provided the background for Chinese political theories and institutions. It has spread from China to Korea, Japan, and Vietnam and has aroused interest among Western scholars. Although Confucianism became the official ideology of the Chinese state, it has never existed as an established religion with a church and priesthood. Chinese scholars honored Confucius as a great teacher and philosopher but did not worship him as a personal god. Nor did Confucius himself ever claim divinity. Unlike Christian churches, the temples built to Confucius were not places in which organized community groups gathered to worship, but public edifices designed for annual ceremonies, especially on the philosopher's birthday. Several attempts to deify Confucius and to convert Confucianism into an organized religion failed because of the essentially profane nature of the philosophy. The principles of Confucianism are contained in the nine ancient Chinese works handed down by Confucius and his followers, who lived in an age of great philosophic activity. These writings can be divided into two groups: the Five Classics and the Four Books. The Wu Ching (Five Classics), which originated before the time of Confucius, consist of the I Ching (Book of Changes), Shu Ching (Book of History), Shih Ching (Book of Poetry), Li Chi (Book of Rites), and Ch'un Ch'iu (Spring and Autumn Annals). The I Ching is a manual of divination probably compiled before the 11th century BC; its supplementary philosophical portion, contained in a series of appendixes, may have been written later by Confucius and his disciples.
The Shu Ching is a collection of ancient historical documents, and the Shih Ching is an anthology of ancient poems. The Li Chi deals with the principles of conduct, including those for public and private ceremonies; it was destroyed in the 3rd century BC, but presumably much of its material was preserved in a later compilation, the Record of Rites. The Ch'un Ch'iu, the only work reputedly compiled by Confucius himself, is a chronicle of major historical events in feudal China from the 8th century BC to Confucius's death early in the 5th century BC. The Shih Shu (Four Books), compilations of the sayings of Confucius

Othello :: essays research papers

Without Trust, Love Cannot Prevail Trust can be defined as assured reliance on the character, ability, strength, or truth of someone or something (Webster's 1246). In life many people are faced with the decision of who or who not to trust, especially in the realm of love. Trust is one of the main factors needed to determine a healthy, loving relationship. In William Shakespeare's Othello, the main character Othello is unable to trust his new bride Desdemona. In act one, scene three, Desdemona professes her love for Othello in front of her father, as well as the court. Regardless of Desdemona's professed love for Othello, Othello fails to see her honest love and becomes blind to her devotion to her husband, when he should have had trust. Since Othello failed to have confidence in himself when it pertained to love, he failed to have confidence in his wife and marriage, which led Othello to not trust his new bride. In society today, lack of trust in one's partner is able to ruin a good relationship just as Othello's did with his new marriage. Trust is a key factor in a healthy relationship; without it love is unable to prevail. In act one, scene three, Othello calls in his new bride to allow her to speak on behalf of their relationship. Without hesitation Desdemona speaks on behalf of herself and her love for her husband. "That I (did) love the Moor to live with him My downright violence and storm of fortunes May trumpet to the world. My heart's subdued Even to the very quality of my lord. I saw Othello's visage in his mind, And to his honors and his valiant parts Did I my soul and fortunes consecrate.
So that, dear lords, if I be left behind, A moth of peace, and he go to the war, The rites for why I love him are bereft me And I a heavy interim shall support By his dear absence. Let me go with him." (Shakespeare 45, lines 283-293). Desdemona professed her love for Othello in front of her father and the court. By doing so she reinforced the fact that no magic or mystical powers were used, and she explained how she went with Othello freely; by doing so she redefines her own status as Othello's wife (Calderwood 29). By confessing her love, Desdemona is presenting to Othello her faithfulness to him, because she is allowing all of them (those in the court) to know about her love, which should allow Othello to see that her love for him is real and that he is able to trust her and love her.

Saturday, August 17, 2019

Methodology on Renewable Energy in Powys Wales

This chapter identifies the justifications for, and validity of, the methods and procedures used during the research. As briefly outlined in the introduction, this research project will outline the existing and potential use of renewable energy through the planning system in Powys, Mid Wales. Currently there are a number of major projects, predominantly wind farms, already present in Powys; however there is scope for more, as well as other renewable sources such as Anaerobic Digestion and Micro Hydro Power. The following sections will … (make sure you signpost) What is Research? According to the Encarta World English Dictionary (date? Page?), research can be defined as "Organized study: methodical investigation into a subject in order to discover facts, to establish or revise a theory, or to develop a plan of action based on the facts discovered" you need to include page numbers for direct quotations In layman's terms, this basically refers to the way in which one undertakes a process of obtaining information that is required for a particular reason. There are various research methods, applied to varying degrees. This could range from an individual "researching" a holiday all the way to a PhD student compiling information for their final thesis. This research project is for a Masters degree, therefore considerable research is required ref to support? Clough and Nutbrown (2002), who published "A Student's Guide to Methodology", suggest that research at Masters level is not so much to prove things but more to investigate questions and explore issues. Clough and Nutbrown both define research by the moral act within which the researcher holds responsibility for ensuring that consequent change is for the better.
It is the intention of the author to undertake this process in order to understand the renewable aspect of the planning system and to try to ascertain the shortfalls, if any, and how to address them. Need to use a variety of refs throughout. Try and avoid the one-ref-per-paragraph style. Research ethics "Research that involves human subjects or participants raises unique and complex ethical, legal, social and political issues" (Nancy Walton, date? Page?). Research ethics concentrates on the analysis of ethical issues that are raised when people are involved as participants in research. Consequently, there are three main objectives in research ethics; the first and broadest is the protection of human participants. The second objective is to ensure that any research is conducted in a way that serves the interests of the individuals or groups as a whole. Ref? The third objective is to examine specific research activities and projects for their ethical soundness, such as the analysis of the management of risk, protection of confidentiality and the process of informed consent. Ref? Ethical protection for research participants can be described in a number of ways, which will be explained below. These descriptions have been compiled by the "research establishment" ref? Need to pull the above together into one paragraph. Try and avoid using mini paragraphs – it can make the discussion disjointed. Voluntary Participation Voluntary participation generally refers to when the public are not obliged to take part in research. For example, this could be when a researcher relies upon a large number of people with whom no prior consultation has taken place, such as students at a university or the crowd at a sporting event. This would, in the opinion of the researcher, gain more information than any other type.
Informed Consent

Closely related to voluntary participation is the requirement of informed consent, which means that researchers must notify any potential participants in order to gain their approval for the research, including details such as what the aims are and the relevant procedures and risks involved.

Risk of Harm

In parallel with the participation process, the researcher has a duty of care towards participants to ensure they are not subjected to any risk of harm. This can be physical and/or psychological, and in order to prevent either, two standards can be applied to maintain the privacy of the participant. Accordingly, the majority of research guarantees participants confidentiality: the participant's identity will not be made available to anyone who is not involved with the research. The other main way to maintain the privacy of the individual is to make the participant anonymous, such that not even the researcher is aware of the identity of the subject. Later, in the Findings chapter, it will be apparent that in order to ascertain public perception of the case study (the wind farm application), a variety of voluntary participants will be asked a few basic questions after the planning committee meeting for the determination of the application for the aforementioned wind farm. The identity of the participants will also remain anonymous; however, the participants themselves will be varied, including the general public, committee members and local authority officers. According to Social Research Methods (website, date unknown
), researchers are obliged to deal with the ethical issue of participants' "right to service and good research practice", which frequently involves a control group named a no-treatment control group. This group comprises individuals who neither receive the treatment nor take part in the programme being researched. However, if the programme produces good results, the individuals assigned to the no-treatment control group may feel they have not been allowed equal access to the services. Where there is clear evidence that these ethical standards and principles exist, the researcher may sometimes undertake investigations that constrain the rights of the participant; however, in this project there is no reason for this to happen. Obviously, it is extremely difficult to anticipate every ethical circumstance; nevertheless, there need to be some generic procedures, as outlined earlier in this chapter, to ensure researchers consider all relevant ethical issues when presenting research proposals.

Establishing the Research Method

In order to establish the research methods, it must be noted that the aims and objectives are to be realistic and achievable. Upon consideration of both, the aims and objectives are achievable and therefore the appropriate research techniques must be applied. As with any research, it is not possible to cover every aspect of the subject, and therefore the research questions raised in this thesis in turn influenced the choice of methodology (Clough and Nutbrown, 2002).
Clough and Nutbrown (2002) have devised three categories against which writers can assess their research: defining limits; clarification of the research study; and identification of empirical issues. In order to define one's limits, the writer must be aware of their limitations and ensure that the chosen subject is of a suitable scope on which to successfully complete a project. Once the writer has identified the limits, clarity of the research study must be established. This can be achieved by gathering a wide range of information and literature in order to assess a variety of arguments and statistics. Finally, the empirical issues must likewise be located through a variety of literature, covering both historical and current data. According to Dirks (2005), "much research is reported informally and seldom qualified by its sources". Dirks (2005) goes on to state that over time, and with refinement of what has been named the scientific method, these methods of research have been analysed to ensure that the results are recognised throughout as valid. Furthermore, Dirks (2005) claims that these methods can be related to the traditional structures of deductive and inductive logic. It is easy to observe such a process in certain investigations, such as a scientific experiment, in which one could attempt to deduce a truth in a deductive manner. Further observation of this can be seen in social science research, where a survey may inevitably lead to a factual finding of sorts.
The framework for this project is in accordance with the observations of Cook and Campbell (1979, cited in Rudestam and Newton, 2001), who identified that the research method of choice for social science studies is of a "quasi-experimental" design. This design retains some of the validity of the controlled experiment but maintains the argument and logic of experimental research. In Table 1 below, Chappell (2004, citing Robson, 2002) identifies the main components of a research design:

Table 1: Main components of a research design
Aim – What is the study trying to achieve, and what specific aspect or process within the concept is being investigated?
Theory – What conceptual framework is the concept embedded in?
Research Questions – What information needs to be acquired to satisfy the purpose of the study?
Methods – What specific methods will be used to collect data?
Sampling Strategy – From whom will the data be sought?
Source: Robson (2002, cited in Chappell, 2004, p. 55).

The primary components contained within this research framework have led to the production of credible and robust research.

Different Types of Research

In order to achieve a credible research method, one must choose from a vast array of research types to gather information. This study focuses chiefly on planning processes and guidance and is therefore primarily classified as qualitative rather than quantitative, which would have concentrated on measurable issues. The difficulty of applying qualitative research methods, which are considered more flexible than quantitative methods, lies in using them to make robust analyses of contemporary issues, which can be complex and unconstrained (Robson, 2002). Table 2 identifies the data collection methods which were employed throughout the research process.
Source Type – Method – Information Location
Primary – Interviews – Work/Telephone
Primary – Questionnaires – Work
Secondary – Books – Libraries/Shops
Secondary – Journals – Libraries
Secondary – Governmental Guidance – Libraries/Internet/Work
Secondary – Other – Internet/Work
Table 2: Methods of Data Collection

Within this project, it is the writer's intention to apply the above research methods in order to obtain the best and most accurate information for the later chapters to analyse.

Case Study Research

According to Cassell and Symon (2004), "a case study consists of a detailed investigation, with data collected over a period of time, of phenomena, within its specific context." The writer's aim is to provide an analysis of the context and processes which elucidate the theoretical themes being studied in this thesis. The chosen subject in this instance, wind farms, is not isolated from its context, i.e. studied in a laboratory, but is of importance precisely because of the aim to understand how processes are understood within, and influenced by, that context. A case study is particularly appropriate for research questions which require a detailed understanding of social or organisational processes because of the rich data collected in context. In light of the above explanation, it is considered appropriate in this thesis to incorporate a case study, as it illustrates how the writer can draw on real-life examples without obtaining extensive primary data, which would prove difficult due to the limited timescale.
There are various views on the importance of case studies (Yin and Campbell, 1994), and according to Clough and Nutbrown (2002), "They are often seen as prime examples of qualitative research which can adopt an interpretive approach to data, studies things within their context and considers the subjective meanings that people bring to their situation." According to Yin (2003), case studies, in this instance referred to as "explanatory" case studies, are the preferred research strategy. This is because they are able to cope with operational links which need to be traced over a period of time rather than merely establishing frequencies of occurrence. Yin (2003) also argues that once the final research questions and hypothesis have been established, the final study may not automatically be a case study but may assume an alternative form of research, thus emphasising the flexible nature of case study research. Case study research is therefore flexible in that it is capable of adapting to, and probing into, areas of planned but also emergent theory (Cassell and Symon). Due to a number of obvious constraints such as time and location, the time allocated for fieldwork and desk studies was somewhat restricted. The area chosen is northern Powys (Montgomeryshire), as it was the natural choice for the writer, being an employee of the council and based in the Montgomeryshire office in Welshpool. The case study chosen is a current, undetermined planning application for a wind farm in the locality. This specific application was chosen because of the substantial landscape and other environmental constraints that have been considered since its submission in 2007.
This application was chosen with the guidance of a work colleague who has been, and still is, dealing with the application. The process that has taken place over the 2.5-year period will be interesting to assess and evaluate in an environmental context and will be fully explained in the Findings and Discussion chapters.

Primary Research Data

Interviews

One of the objectives within this project is to attempt to establish academic work where little had been done previously using secondary data. However, in order to achieve this, it was considered appropriate and beneficial to undertake at least one interview. This decision was taken as this form of primary research appears to have a number of benefits in comparison with other methods such as questionnaires, including the following, taken from the sociology.org.uk website: "Positive rapport between interviewer and interviewee. Very simple, efficient and practical way of getting data about things that can't be easily observed (feelings and emotions, for example). High validity. People are able to talk about something in detail and depth. The meanings behind an action may be revealed as the interviewee is able to speak for themselves with little direction from the interviewer. Complex questions and issues can be discussed and clarified. The interviewer can probe areas suggested by the respondent's answers, picking up information that had either not occurred to the interviewer or of which the interviewer had no prior knowledge. Pre-judgement: the problem of the researcher predetermining what will or will not be discussed in the interview is resolved. With few 'pre-set questions' involved, the interviewer is not 'pre-judging' what is and is not important information. Easy to record the interview (video/audio tapes)." Interviews also offer the freedom to explore general views or opinions in more detail than a questionnaire.
The interviewer can reword a question should the subject be unable to understand it. There is more to be gained from information given face-to-face than in writing, and upon receipt of an answer, an opportunity to elaborate upon it is readily available. The points above are taken from www.sociology.org.uk. Blumer (1969) suggests that the researcher should "seek participants in the sphere of life who are acute observers and who are well informed". In light of this, the decision of whom to interview was extremely easy to make, as there is only one specialist officer for the entire county who dealt with the applications used as the case studies. This certainly alleviated the pressure of making arrangements, although said officer is located in a different office about 40 miles away in Llandrindod Wells. For the research to be as objective and robust as possible, it was essential to ensure a fair representation. It is therefore also proposed to undertake a number of small-scale structured interviews with the general public, council members and, hopefully, planning consultants. These will be undertaken following the planning committee meeting for the determination of the case study application. Due to the limited time available, resources were restricted, and it is therefore important for the writer to maximise the output from the research methods by incorporating interviews instead of questionnaires, as the information gathered is normally more detailed and current. As already outlined, a semi-structured interview will be undertaken with the specialist planning officer.
This method is often used to collect qualitative data by arranging an interview which allows the interviewee time to respond with their views. The interview has already been assigned a date, and the subject has therefore been notified well in advance should they need to research any details relevant to the interview. The Sociology Central website (www.sociology.org.uk) discusses the semi-structured interview and states that its objective is to "understand the respondent's point of view rather than make generalisations". This format uses open-ended questions, some prepared by the interviewer and others arising naturally during the interview. As listed above, it can be beneficial to gain a rapport with the subject, which may enable more information to be discussed, as the atmosphere of the interview should be somewhat relaxed. The questions will be prepared; however, although the subject is aware of the topic, they will not have sight of the questions in advance. Due to the volume of data to be discussed, it was decided that a Dictaphone be used to ensure all information relayed is recorded as well as written down, and the subject was aware of and agreeable to this decision. If the subject declined a recorded interview, handwritten notes would be the only method available to retain the information. After an analysis of the recordings, together with the notes, the key issues of the findings will be fed into the discussion for analysis. Details of the interview, including the transcript, will be included in the later discussion chapter of the thesis.

Questionnaires

Within this thesis, there was no intention to include a questionnaire, as the choice to apply the interview technique to the chosen subject is the preferred option.
If nevertheless, a big figure of individuals were being used to garner informations, the determination to take a questionnaire would be considered more favorable. Support determination with mentions Harmonizing to Sociological Central ( SC day of the month? ) , a questionnaire comprises of a list of written inquiries that can be completed by one or two ways. Postal Questionnaire – the respondents could finish the questionnaire without the research worker being present. This type refers to any questionnaire that a respondent completes without necessitating any aid. An interview – this has been discussed antecedently nevertheless ; in short, it allows the respondent an chance to reply the questionnaire verbally in the presence of the research worker. The SC recognises, as does the author, that both types are similar whereby the same inquiries can presented in either a postal questionnaire or during an interview format. Basically, there are two chief types of inquiries: Closed-ended ( or â€Å" closed inquiry † ) – this type of inquiry allows the research to bring forth a battalion of responses the respondent can take e.g. Yes/No. ( Quantitative Data ) Open-ended ( or â€Å" unfastened inquiry † ) – this type inquiry does non let the research worker to include any specific reply therefore allows the respondent to utilize their ain words. ( Qualitative Data ) There are strengths and failing to both types and below some are listed: Postal method can let the research worker to entree a big figure of possible respondents Potentially abashing subjects can be easy addressed via the postal system which may besides include namelessness. Questionnaires are by and large easy to compose and reply, particularly should â€Å" closed-ended † inquiries are used. Questionnaires are easy to standardize which allows the research worker to derive huge sums of information on the same subjects. 
Conversely, the weaknesses are as follows. The simplicity of the questionnaire format can make it difficult for researchers to examine complex issues and opinions, even where open-ended questions are used. The researcher will never know who actually completed a postal questionnaire, even though it was sent to the intended recipient. The researcher's interpretation of the questions may not match that of the respondent; this can often be addressed via the use of pilot studies prior to the survey. Response rates to postal questionnaires are generally quite low (<20%), so the researcher must send them out to large numbers of participants to ensure sufficient responses. As mentioned above, it is not the writer's intention to carry out any questionnaires; however, should the chosen subject be unable to attend the interview, a "back-up" questionnaire using a large number of the interview questions will be devised.

Secondary Research Data

According to Patzer (1995), marketing research routinely utilises many types of data and information. The most widespread type is data and other pieces of information that were originally collected for another purpose. When such data is used in a particular way for market research, this is considered a second use and the data becomes classed as secondary data. There are two main types of secondary data according to Patzer (1995): internal and external. Within each of these classifications there are several considerations, which range from creating internal secondary data to accessing external data.
During this research, a number of books, journals, government guidance documents, the internet and university guidance were the main sources of data for this project. The aim was to use the most up-to-date information and also to ensure that the sources are credible, as they are generally written for academic purposes. The use of the internet in research is considered quite extensive due to the ever-growing and constantly updated nature of the world wide web. Haddon (1997), who has undertaken considerable research in this field, makes suggestions about the type of research required to understand people's expectations when adopting the internet (Haddon and Hartman, 1997). These include what the phenomenon symbolises to people, such as how they first encountered the internet; what support they have and any problems they face; how it is perceived and possibly used in relation to other technologies and media; and where it can possibly fit into the time structures of households and individuals (Haddon and Hartman, 1997). It is acknowledged that Haddon and Hartman suggested these points some 13 years ago when, it could be argued, the internet was still in its relative infancy. The internet these days is considerably more user-friendly than in 1997, and access to it is available in virtually every household.

Out-of-Date Research

Some of the literature cited in this project could be considered rather dated; however, it would not be considered "outdated", as the information remains in use and recognised to the current date.
Care has been taken to obtain a range of material, although some of it is, by its nature, older. In addition, due to the sheer extent of information available on the internet, it is important to ensure that the information used is reliable and accurate. This was done by using government websites or reputable websites accredited by recognised organisations. The following chapter moves on to present the findings of the research.