Featured Post

Management Decision Model and Financial Management

Question: Justify the role of financial management in the case of different projects and different situations. Answer: ...

Tuesday, August 25, 2020

Management Decision Model and Financial Management

Question: Justify the role of financial management in the case of different projects and different situations.

Answer:

Introduction

Financial management is the backbone of every organisation and of individuals. Several financial-management techniques help an organisation measure and monitor its financial structure effectively. This assignment explains the main capital budgeting tools along with their advantages and disadvantages. It also evaluates the viability of a cash management system and measures the income arising from the financial decision under review.

Part 1

The total receipts, or collections, of the organisation Norma's Cat Food are given as $5 million. It is also given that the organisation's new arrangement will shorten the total collection period by 2.5 days and lengthen the total disbursement period by 0.5 days. The difference in cash between collections and disbursements comes to $2 million, and the total number of days gained is three. The amount of freed-up funds therefore comes to $6 million * 360/3 = $720 million per year (the year is taken as 360 days in the given case). Income is taken as 8 per cent of the freed-up funds, which equals $57.6 million. If the new system worth $80,000 is installed, the expected return is $57.6 million - $80,000 = $5.68 million, mainly because the income on the freed-up amount is expected to exceed the cost of the new machine.

Part 2

There are several capital budgeting techniques that can measure the feasibility of a project effectively, namely IRR, the payback period and net present value (NPV). These tools can produce different values and conclusions for the same project, and a conflicting situation may arise because of the size of the project. Before accepting a project as feasible, the financial manager or financial analyst should therefore calculate all of the capital budgeting measures. To analyse and execute the right decision, it is important to know the merits and demerits of each technique.

Advantages of NPV

The greatest advantage of net present value is that it helps to interpret the future value of a project. It is based on the time-value-of-money concept and helps establish the suitability and feasibility of the project. It is a practical method and can be applied in a simple way. Another merit is that the future cash flows of the organisation can be interpreted with its help, and the risk of the project can also be assessed (Brigham & Ehrhardt, 2014).

Disadvantages of NPV

The biggest disadvantage of net present value is that it completely ignores the size of the project. In the case of mutually exclusive projects, it can therefore lead to the wrong conclusion compared with other capital budgeting methods such as IRR and the payback period. In addition, the method assumes the amount and rate of the cost of capital.
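For readers who want to see the mechanics behind these measures, the two calculations can be sketched in a few lines of code. The sketch below is illustrative only; it is written in Object Pascal simply because that is the language used elsewhere on this blog, and the function names and cash-flow conventions are assumptions rather than part of the case study.

uses Math;

// Present value of a series of year-end cash flows at a given discount rate.
// Subtract the initial outlay from the result to obtain the project NPV.
function PresentValue(Rate: Double; const CashFlows: array of Double): Double;
var
  t: Integer;
begin
  Result := 0;
  for t := 0 to High(CashFlows) do
    Result := Result + CashFlows[t] / Power(1 + Rate, t + 1);
end;

// Number of years needed for the cumulative cash flows to repay the initial
// outlay (returns -1 if it is never recovered). Unlike NPV, this measure
// ignores the time value of money.
function PaybackPeriod(InitialOutlay: Double; const CashFlows: array of Double): Double;
var
  t: Integer;
  Cumulative: Double;
begin
  Result := -1;
  Cumulative := 0;
  for t := 0 to High(CashFlows) do
  begin
    Cumulative := Cumulative + CashFlows[t];
    if Cumulative >= InitialOutlay then
    begin
      Result := (t + 1) - (Cumulative - InitialOutlay) / CashFlows[t];
      Exit;
    end;
  end;
end;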
IRR: Merits and Demerits

The merit of this capital budgeting tool is that it compares the value of the investment with the project's rate of return. It takes into account all cash flows, the risks and the total time value of money of the project, and it also considers the project's cost of capital. These are its greatest merits. As for demerits, if the discounting rate applied to the cash flows changes, the organisation may not be able to evaluate the feasibility of the project through IRR, and a project may also have more than one rate of return.

Payback period (Pros and Cons)

The advantage of the payback period is that it is easy to calculate and it helps to measure the liquidity of a project. The risks and changes of the project can also be gauged with it. However, the method completely ignores the value of the project and the degree of its increase or decrease, and it negates the time value of money, considering only the period within which the total capital outlay will be recovered. These are its biggest drawbacks (Brigham & Houston, 2004).

From the above analysis it can be concluded that, since the result given by NPV opposes and conflicts with the analysis from IRR and the payback period, the projects can be treated as mutually exclusive. Net present value does not consider the size of a project; in the given case the large corporation's project is huge, and this is ignored by NPV. The financial analyst should therefore accept the project on the basis of the IRR and payback period measures.

Conclusion

From the above analysis, the role of financial management in the case of different projects and different situations has been justified, and all the techniques of capital budgeting and financial management have been interpreted effectively.

References

Brigham, E., & Ehrhardt, M. (2014). Financial management. Mason, Ohio: South-Western.

Brigham, E., & Houston, J. (2004). Fundamentals of financial management. Mason, Ohio: Thomson/South-Western.

Saturday, August 22, 2020

Natural Rights Theory Essays

Natural Rights Theory

Natural rights theory is the belief that a person comes into this world with certain basic rights that cannot be denied by any government, political power or even a constitution. Natural rights theory maintains that since humans come into this world naturally, they have fundamental rights that no one can take away from them. The theory proposes that humans, as natural creatures, should live and organise themselves in society using basic rules laid down by nature. As natural beings, humans have rights that no one can violate, the most basic right being freedom. Further, the theory proposes that these rights are derived from the nature of man as a rational being, the rights being necessary for his survival. In general, the theory states that man has rights granted or guaranteed upon his creation regardless of any government control, and that these rights exist across all nations, or are universal. They are regarded as standing above any law made by a government (Donald, n.d.).

Natural rights are the freedoms that are established by a universal understanding that imposes conduct on all nations. Natural rights are quite distinct from legal rights, which are the freedoms established in particular states and which apply only to those states. They are rights that every single individual has; they are not restricted by any artificial legal set-up, and they apply not only to people but to other species. For instance, sea creatures live underwater by natural right and not because of any legal legislation dictating the same. Thus, natural rights are those rights that cannot be changed by any artificial legal legislation. Some examples of natural rights are the rights to life, ownership of property and freedom or liberty. Natural rights protect people from certain practices, such as torture and slavery, allowing them to defend their lives, liberty and property (Donald, n.d.).

Natural rights theory is closely related to natural law theory. In the Enlightenment age, natural rights theory served to challenge the privileges of rulers and leaders, which created a justification for establishing positive law, the social contract and a government, which together amounted to legal rights. Thus, legal rights are derived from natural rights, which also serve to challenge legal rights when they go too far. Legal rights were established with the aim of protecting people's freedom, their property, and their right to live their lives as each individual capable of reasoning wished to live. Natural rights theory states that all men are equal and have the freedom to make their own choices. Some of the theorists supporting the theory have stated that this is defined by the moral rights each person has, which flow naturally from their free will and thoughts, enabling them to make their own choices (Donald, n.d.). The theory further suggests that humans are capable of acting rationally and have the right to make their own choices.
This is to say that anyone, as long as they are an adult capable of making choices, has the right to do what pleases them without anyone restricting them. It also means that everyone has a claim to that same right, and no one may deny him or her the freedom to exercise it. Furthermore, the theory proposes that people should act naturally, since this is where people derive their morality. It suggests that morality is enshrined in the natural character of man, who acts as the nature in him dictates, the same way animals act according to the nature driving them.

Problems of Natural Rights Theory

The natural rights theory, like any other, has several problems, and many of its opponents criticise it and do not regard it as correct. One of the problems with natural rights theory is the existence of different interpretations of nature, which differ across many regions and among different people. Consequently, the claim of natural rights theory that natural rights are universal would not hold, since people will interpret nature differently. Likewise, defining what is morally right would be difficult, making the natural rights theory quite problematic to understand. More so, people are highly diverse, each with their own natural character. For instance, some people are naturally aggressive and bold while others are naturally timid. Since human nature allows for both, it would then be morally right for an aggressive man to attack the timid one, since he would be following his own natural character and reasoning (Sullivan & Pecorino, 2002). Moreover, determining morality is hard because of such differences, where every individual has their own reasoning about what is morally right or wrong. Natural rights theory suggests that acting in accordance with nature is morally right, while behaviour not dictated by nature is morally wrong. For instance, when a man assaults a woman, there is nothing unnatural about it in purely natural terms. Thus, this would not be regarded as unnatural behaviour, qualifying it as morally right. This is because natural rights theory suggests that it is morally right for living beings to act in accordance with nature. Under such a notion, men considered aggressive would have a right to follow their natural desires and continue assaulting women, since it is natural for a man to desire a woman. This raises the question of whether men should act in accordance with their natural instincts or whether they should resist some parts of their innate nature. Considering the above problems, the critics of natural rights theory argue that even children are not innocent: acting from their innate natural character, some will be aggressive towards others while others will misbehave. Children therefore go to school in order to learn to tame some of their natural behaviours, implying that what is natural does not always define morality, since such behaviour would not be good.
Likewise, critics of the natural rights theory point out that, according to natural right, people who commit certain acts, for example homosexuality, assault or killing, would not be acting unnaturally; their behaviour would therefore have to be regarded as morally right according to the natural rights theory (Sullivan & Pecorino, 2002). Another problem is that the innate nature of people, which is concerned with establishing laws, is not the same as that of animals, which creates a further difficulty for the theory. Natural law means following innate natural traits, where animals act as their innate nature dictates. For instance, it is natural for a lion to kill a gazelle for food, as it is for other animals, or for a cat to chase rats and mice. Man, on the other hand, does not follow his innate nature exactly. For instance, it is known that man is selfish and always wants the maximum benefit from anything, without considering others. This is not considered morally right according to moral teachings, which means that moral teachings do not instruct us to follow the natural traits in us as dictated by nature, the way animals do. Another problem of the natural rights theory is that the majority of the proposed rights cannot be proved, since the theory assumes that natural rights come from God upon the creation of man. There is no way to prove that natural rights are given by God. Moreover, different people have different religions, meaning that the proposed rights cannot be universal as the theory suggests. This creates yet another problem for the theory, drawing a great deal of criticism from its opponents (Sullivan & Pecorino, 2002).

Bentham's Rejection of Natural Rights

Bentham is one of the major opponents of the natural rights theory and rejects it completely, dismissing it as nonsense, with the rights proposed not qualifying as rights at all. Bentham rejects the natural rights theory entirely and takes the utilitarian moral view, which favours the action with the best consequences for everyone. He suggests that human nature, much as in science, can choose the actions with the best value and benefit for the people involved, the main motives being pleasure and pain. He suggests that nature places man under two masters, pain and pleasure, where pleasure is what people desire most. Consequently, it is up to people to determine what should be done in order to realise the best outcome (Robnights, 2012). He attacks natural rights and proposes that rights are created only by the law. He further suggested that laws are simply a command of the two sovereigns, pleasure and pain. A government must be present in order for laws and rights to exist within a community or state. Rights, in his view, exist only in relation to the duties that are determined by the law. The notion of rights based on natural rights, or rights pre-existing an established government, is regarded as wrong and rejected (iep.utm.edu, 2008). He attacks natural law on the basis of his understanding of legal rights and of the nature of law. According to iep.utm.edu (2008), "the term natural right is a perversion of language. It is ambiguous, sentimental and figurative and had anarchical consequences."
Bentham proposed that natural right becomes vague because it suggests general rights without specifying any object, so that anybody could claim what has already been claimed by another. Exercising such a universally acknowledged right would therefore mean wiping out the right itself, since what becomes a right to every man is in fact no right to any man at all. More so, he proposed that under such a state of broadly and vaguely implied rights, there could be no legal system. His other attack on the natural rights theory is that natural rights are figurative; he proposes th

Sunday, August 9, 2020

Mood Lability and Borderline Personality Disorder

Mood Lability and Borderline Personality Disorder

By Kristalyn Salters-Pedneault, PhD. Kristalyn Salters-Pedneault, PhD, is a clinical psychologist and associate professor of psychology at Eastern Connecticut State University. Medically reviewed by Steven Gans, MD, on August 05, 2016. Steven Gans, MD is board-certified in psychiatry and is an active supervisor, teacher, and mentor at Massachusetts General Hospital. Updated on September 26, 2019.

People with borderline personality disorder (BPD) are often described as having mood lability. Mood lability is an emotional response that is irregular or out of proportion to the situation at hand. It is associated with severe mood swings, intense reactions, and dramatic changes in opinions and feelings.

Understanding the Scope of Mood Lability

Mood lability is often evidenced by destructive or harmful behaviors. Those actions can include angry tantrums or screaming, destroying objects, aggression or violence towards others, and self-harm. The responses can occur seemingly out of nowhere, triggered in seconds. Mood lability is present in people with various mental illnesses, including bipolar disorder, post-traumatic stress disorder, and BPD. Because of how disruptive mood lability can be, it can inhibit daily life and functioning. This may include harming interpersonal relationships and careers.

Symptoms of Mood Lability With BPD

Many of the symptoms of BPD involve difficulties in managing or regulating emotions. This highlights the role of mood lability in BPD. According to the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (the reference manual mental health professionals use), BPD's criteria outline several difficulties with emotions, including:

Emotional lability: Those with BPD experience unstable emotions and frequent mood swings. Emotions are easily triggered and are typically inappropriate or disproportionate to the circumstances.
Anxiety: People with BPD may display symptoms of intense anxiety, even seeing things in a distorted light. For instance, they may think they're going to fail or mess up even when they are well prepared. This can lead to serious paranoia and stress.
Insecurity: Often, those with BPD are insecure in interpersonal relationships. They regularly assume that other people will leave them or reject them, leading to constant worrying and the need for reassurance.
Impulsiveness: In response to some sort of stimulus, people with BPD often respond erratically and may engage in dangerous behaviors. This can even include self-harm when confronted with times of stress.

Many people with BPD cycle between emotions rapidly. In the morning, they may be happy, full of energy, and optimistic. As the day progresses, they can become despondent, depressed, and express feelings of hopelessness. Particularly in the case of mood lability in people with BPD, the effects of the outbursts can last much longer than in other people. That's because those with BPD tend to have heightened emotional states to begin with. This longer-lasting effect can make managing mood lability all the more difficult.

Treatment

The heavy mood swings associated with mood lability and BPD can be extremely disruptive. They may keep people from being able to manage their daily routines. Routine functions can become more difficult, requiring intervention. However, BPD and mood lability can be treated. If you have BPD, it's important to look for a therapist or healthcare professional who specializes in emotional regulation and personality disorders. Engaging in psychotherapy will help you learn how to manage your emotions and urges in a healthy way. From learning new coping skills to better understanding your emotional triggers, you will be prepared to handle all aspects of your mood lability.

Saturday, May 23, 2020

Essay on IT Leadership - 4557 Words

U10a1 Project – Final Paper. Likwa Moyo, Capella University. TS5335 - Project Leadership and Management. Instructor: Dr. Miriam Masullo. Date: March 15, 2013.

Abstract

Zambia, like many third world countries, is facing a scarcity of Information and Communication Technology (ICT) infrastructure and of leadership in this technology. While advances in ICT can support self-directed interactive learning, organizations that recognize the macro challenges of responding to the ever-changing demands of technology are turning to leadership concepts that adapt to society's socioeconomic demands. This project examines the application of leadership to the eLearning model within the … In particular, the project is interested in children's authoring experience with digital media and is developed to facilitate the application of a pedagogical approach which develops cognitive and interpersonal capabilities, rather than one which concentrates merely on instruction aimed at enabling learners to acquire knowledge and understand algorithms.

1.2 Background

In the 1950s, South Korea and Zambia were roughly at the same economic level. Sixty years later, South Korea is one of the world's most economically advanced nations, while Zambia is still struggling economically. The difference is due almost entirely to a half century of investments in education, development experts say (International Herald Tribune, 2003). Like many developing countries, high rates of unemployment in Zambia, especially after completion of middle and high school, point to poor external efficiency (Achola, 1990). Other factors contributing to problems with the successful implementation of educational policies have been a poor economy, an inadequate supply of teachers from kindergarten level, problems with curriculum relevance, and an entrenched debate about what the most effective leadership style in the educational arena is. "There is a demonstrable connection between education and development," said Steve Packer of the "Education for All Global Monitoring Report," which tracks governments' progress for the UN education …

Tuesday, May 12, 2020

The Monroe Doctrine Post Monroe Essay - 1310 Words

The Monroe Doctrine: Post-Monroe Application after its Declaration by Monroe

President John Tyler

After Monroe's proclamation, in the following decades American lawmakers did not resort to the doctrine critical of the Great Powers of Europe, in spite of those powers' sporadic military 'involvements' in Latin America. The primary concern for Monroe had been making certain that European commercialism would not be re-enforced on a territory of growing economic and ideological significance to the US. However, when President Tyler made use of the doctrine in justifying the annexation of Texas in 1842, a newspaper in Venezuela criticized the move; the newspaper cautioned other republics in Latin America against the US by stating: "Beware, brother, the wolf approaches the lambs." During Tyler's presidency, the US for the first time tried to really employ the Monroe Doctrine, when the British and the French had intervened in Texas after it had triumphed in 1836 in gaining independence from Mexico in the wake of an insurgence of US colonists there. Britain and France were involved in a power game in the territory, trying to counteract its occupation by the US. President Tyler thus resorted to reiterating the tenets of the Doctrine and admonishing the European powers to disengage themselves from matters concerning the Western Hemisphere. Tyler asserted, during his 1842 address, that, "Carefully abstaining from all interference in questions exclusively referring themselves to the …
Since then the leaders of the United States from 1789 George Washington to 1823 James Monroe, which included the Federalists and the Jeffersonian parties. The United States was designed in their policies to ultimately benefit the United States and promote themselves in international affairs, to be considered as a new power in the world. The united states world policies were designedRead More The Monroe Doctrine: Origin and Early American Foreign Policy2078 Words   |  9 PagesThe essay discusses the creation of the Monroe Doctrine and how its birth shaped U.S. foreign policy from the 1820’s through the American Civil War. It also looks at the key players and their various motives in creating this important document. The Monroe Doctrine History during the early 1800’s found the American people very interested in the situation involving the Latin American countries found in central, South America, and Mexico. Though these Latin countries tried to establish theirRead MoreImperialism As An Economic Death Sentence Since Columbus Landed1572 Words   |  7 Pagesgoal was the justification that people from lands far away should be ruled over and it is the empire’s duty to rule over uncivilized peoples, as they believed them to be. This fact in regards to Latin America can be seen in the evolution of the Monroe Doctrine, which I want to get into further along this paper. After providing the reasoning behind imperialism in Latin America, the first question to answer is, why do countries choose to exploit a foreign land and pillage all its resources. Why mustRead MoreTheodore Roosevelt s Influence On Latin America1290 Words   |  6 PagesHis policies in Latin America were beneficial for the rapidly growing United States, but left an impact on Latin America that was both positive and negative. President Roosevelt took past policies of the United States, such as the unenforced Monroe Doctrine of 1823, and amended them to have the backing of the Navy in the early 1900s. One particular amendment was called the Platt Amendment of 1901, and opened the door to United States expansionism under President Roosevelt. It was under this amendmentRead More James Monroe Essay examples2365 Words   |  10 Pages James Monroe was born in the quiet town of Westmoreland County, Virginia on April 28, 1758. His father, Spencer Monroe, was married to Elizabeth Jones i n 1752. Spencer Monroe was a circuit judge and a farmer for the town (Kane 40). Monroe was the oldest of five. There were four other children; Andrew, Joseph, and Elizabeth. His third brother had died in his early childhood. He attended grammar school at a small academy for boys. This school had a reputation for serving the best of men, like GeorgeRead MoreEssay on US foreign Policy 1865-1914, expansionist or isolationist881 Words   |  4 Pageshad now arrived in full force. Throughout 1865 to 1914, the United States foreign policy was primarily expansionism. Through the strong foundation of Isolationism and the policies enacted such as The Monroe Doctrine, the United States played a largely neutral role in foreign affairs. The Monroe Doctrine passed in 1823 was to forbid European nations from colonizing any territory in the Western Hemisphere. Although the United States didn’t have the authority or firepower to back this up, it resultedRead MoreUs Foreign Policy 1865-1914, Expansionist or Isolationist Essay894 Words   |  4 Pagesnow arrived in full force. Throughout 1865 to 1914, the United States foreign policy was primarily expansionism. 
Through the strong foundation of Isolationism and the policies enacted such as The Monroe Doctrine, the United States played a largely neutral role in foreign affairs. The Monroe Doctrine passed in 1823 was to forbid European nations from colonizing any territory in the Western Hemisphere. Although the United States didnt have the authority or firepower to back this up, it resulted

Wednesday, May 6, 2020

The Teacher That Taught Me the Most Free Essays

What Teacher Got Me Hooked on Learning?

I've gone through a bunch of teachers that have helped me learn. There is only one, though, and that woman has gone through so much stress in her life. Her name is Mrs. Jackson. She was my Eighth Grade Advanced Language Arts teacher. She really made me want to learn. Mrs. Jackson tried hard to get kids to learn, but she would not make any kid do a thing. She taught me how to be more independent in my work and that I have to take on things like my class work myself instead of being handed things already done.

Mrs. Jackson made me take my learning into my own hands. She set things up so that you had to do them on your own, and she showed me the way of doing independent work, like giving us an essay project, telling us to do it, then just giving us a prompt and letting our creative minds roam free. Her criticism left room for our ideas to remain, while still having them portrayed in a professional way. I remember that if there was a fragment in my essay she would give it back and say, "There's a fragment; fix it." I loved that she would tell me where it was but didn't tell me how to fix it, giving me room to learn and grow from my mistake. If you asked her whether we had to do something, she would say no, but if you wanted to pass her class she suggested you do it. Her humor was there, but she was still stern.

I think she is didactic; she gave off that vibe of respect already earned, respect she didn't have to ask for because it was simply there. It was a given in her class what she wanted, and that made me respect her more; her leadership made me respect her more. She would only respect you if you respected her; that was just how it was. I always had the utmost respect for her, and that is still there even though I don't have her class anymore.

To be totally honest, I'm using some of her techniques to write this essay at the moment. She taught things that stuck with me. She always reminded us not to repeat so many words in our essays, because sometimes it is really noticeable. She really helped with my whole view of English, my reading and everything in that area. Everything was so much easier after her class. The books we read in her class were marvelous. I loved her class and she really got me into learning.

Saturday, May 2, 2020

Disaster Recovery Plan for Organization - Free Samples to Students

Question: Discuss the Disaster Recovery Plan for an organization.

Answer:

Introduction

A Disaster Recovery Plan can be described as a documented process, or set of procedures, to recover and protect a business organisation's data and IT infrastructure in the event of a disaster or disruptive event. A disaster can arise from many situations: power failures, IT system crashes, supply chain problems, or a natural calamity. Organisations sometimes ignore or avoid planning for disruptive events, but these cannot be ignored in the long term. To limit the impact of a disaster, careful planning and strategies are needed, because the main things at stake are the organisation's data and its downtime (Phillips, 2015). It is necessary to maintain a business continuity plan, which generally runs into several issues regarding cost constraints; these arise where the business falls short of the targets and requirements tied to its main objectives. An organisation needs to set objectives and strategies for avoiding disaster. The main considerations are the initial cost, the cost of transferring data and the value of the data itself. The organisation needs to identify the type of risk that has occurred, evaluate the cost of failures and determine how profitability can be protected when a disaster occurs. It is essential to restore services and recover valuable data within the given time, and this plays a vital role in the actions taken to limit the damage. A well-designed plan should be created so that significant future events or problems do not take the business by surprise. Such a plan is also known as Plan B, as it is the alternative course of action taken when the expected targets fail to materialise. Making the contingency plan a main priority has become a necessity.

Answer 1

Threats that could impact any of the important data are:
- Misuse of staff details
- The organisation's database
- Information regarding customer records and billing information
- Progress reports or dashboards of individual users
- Unwanted files acquired through internet access
- The Department of Finance, which is affected because of its billing information and financial reports

Answer 2

The applications and files that will be backed up:

Database recovery - Also termed application-aware backup, this is necessary and well suited to backing up applications and databases. It usually includes features such as independent retention schedules and automated maintenance.

Tape drives - These are the most commonly used backup devices. They use magnetic tape cartridges to store data; they are fairly slow and not highly reliable, but their selling price is low (Cook, 2015).

System imaging - This captures everything placed on a server's disk: files, programs, the operating system and so on. The best way to use system images is to capture them before shutdown and restore the whole data set on a daily basis.

Desktop security recovery - Business security protection offers simple and affordable desktop security which aims to protect against malicious activities and dangerous threats. It shields desktops and servers from virus attacks and quickly recovers desktops in the event of failures (Haddow, Bullock, and Coppola, 2017).
Dashboard backup - When you are logged out from the web application, you can log back in and recover the files present on the dashboard, such as documents or reports. This allows you to recover all your work and to analyse and edit a file again.

Storage location backup - This keeps a copy of disinfected files, or of files that have not been harmed by any threat or malicious activity. Its main feature is that all the possible data can be restored and the objects processed at any time.

Answer 3

Which, if any, of the data listed as important may be subject to legislative or regulatory constraints?

All organisations have data collection and release activities that are governed by legal rules, regulations and legislative authorisations. Some of these constraints are:

Administrative barriers - These are mainly bound up with limited data sharing. Security concerns and regulations require the use of specific software for circulating information, so that these requirements can secure data and computer systems.

Data incompatibilities - Various resources are needed for manipulating, coding and transmitting data within the organisation. This can be limited by the lack of user-friendly data tools or of detailed documentation.

Data sharing guidance - There is a lack of standard language around sharing data. Data policies and procedures within organisations carry legal weight, but the data examined by the policy community is inconsistent. The main role of such guidance is to provide data to partners for public purposes while balancing privacy concerns; however, some partners are not familiar with it, so the experts and resources that have the knowledge to create sharing agreements are not easily shared (Snedaker, 2013).

Backup schedule 1 (location: USA, comment: Backup Type, archive: daily)
Monday - HPC7976A - Full backup
Tuesday - HPC7975A - Incremental
Wednesday - 6935G - Incremental
Thursday - 6920G - Incremental
Friday - 6935Z - Incremental
Saturday - HPC7976A - Incremental
Sunday - HPC7975A - Incremental

Backup schedule 2 (location: UK, comment: Storage Type, archive: weekly)
Monday - 6935G - Full backup
Tuesday - 6920G - Incremental
Wednesday - HPC7975A - Full backup
Thursday - 6935Z - Incremental
Friday - HPC7975A - Full backup
Saturday - 6935Z - Incremental
Sunday - HPC7976A - Full backup

Backup schedule 3 (location: USA, comment: Backup Type, archive: daily)
Monday - 6935Z - Incremental
Tuesday - HPC7975A - Full backup
Wednesday - 6920G - Full backup
Thursday - 6935G - Incremental
Friday - HPC7976A - Full backup
Saturday - 6935Z - Full backup
Sunday - 6935G - Incremental

Backup schedule 4 (location: UK, comment: Storage Type, archive: monthly)
Monday - HPC7976A - Full backup
Tuesday - 6935G - Full backup
Wednesday - 6935Z - Full backup
Thursday - 6920G - Full backup
Friday - HPC7976A - Full backup
Saturday - 6920G - Full backup
Sunday - HPC7975A - Incremental

(Source: By Author)
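As a small illustration of the first schedule above (a full backup at the start of the week and incremental backups on the remaining days), the choice of backup type can be expressed as a short routine. This is only a sketch, written in Object Pascal because that is the language used elsewhere on this blog; the day numbering and function name are assumptions rather than part of the plan.

type
  TBackupType = (btFull, btIncremental);

// Day: 1 = Monday .. 7 = Sunday. Mirrors backup schedule 1 above:
// a full backup on Monday, incremental backups on every other day.
function BackupTypeForDay(Day: Integer): TBackupType;
begin
  if Day = 1 then
    Result := btFull
  else
    Result := btIncremental;
end;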
Saving time and spending time - Solid backup plans require patience and an investment of time and money, but their cost is small compared with the cost of tasks where it is not even certain whether a backup exists, and organisations regularly face power and internet failures.

Data loss is very expensive - Losing organisational data is a heavy blow, especially to a small business, because the resources and technology required to recover the data are very expensive, so it is essential to protect the data against future incidents.

Nature is unpredictable - Because nature is unpredictable, it is necessary to plan for business continuity; if any kind of natural calamity occurred, it would have a bad impact on business profit, cause productivity loss, and so on. Regular backups are vital insurance against data loss.

Network backup or server only - It is necessary to back up the entire network, and there must be storage devices attached to certain servers where the business can keep data, or where an individual user can copy important files. If more clients are attached to a single server, the cost of backup and recovery is lower.

Server upgrades are a snap - To restore a full system every couple of years, the system state backup can simply be used to restore to a new operating system. It is easy to access and restore the business-critical data in one project (Young, 2016).

Reasonableness - A company need not manage every possible risk, because not every risk is reasonable to manage. This is the test applied to determine whether a risk can be controlled, and it is derived from the reasonable person standard in law.

Balancing risk and cost - The cost charged to manage or control a risk must be balanced against the value of the impact. The cost is balanced against actual monetary values where they are available; another approach is to balance the costs using relative values such as low, medium and high (Strong & Shane, 2011).

Role-based perceptions of risk - Not all personnel will understand the threat to a company if a risk is not managed. One of the most challenging tasks in effective risk management is achieving a proper balance between security and usability.

Balanced security and usability - A major challenge of effective risk management is achieving a proper balance of system usability and security.

Risk identification techniques - These are used to learn about and identify the risks that occur. Losses generally occur when a threat exploits a vulnerability. There are three steps: identify threats, identify vulnerabilities, and calculate the likelihood of a threat.

References

Cook, J., 2015. A six-stage business continuity and disaster recovery planning cycle. SAM Advanced Management Journal, 80(3), p.23.

Haddow, G., Bullock, J. and Coppola, D.P., 2017. Introduction to emergency management. Butterworth-Heinemann.

Johnson, L.A. and Mamula-Seadon, L., 2014. Transforming governance: how national policies and organizations for managing disaster recovery evolved following the 4 September 2010 and 22 February 2011 Canterbury earthquakes. Earthquake Spectra, 30(1), pp.577-605.

Phillips, B.D., 2015. Disaster recovery. CRC Press.

Sahebjamnia, N., Torabi, S.A. and Mansouri, S.A., 2015. Integrated business continuity and disaster recovery planning: Towards organizational resilience. European Journal of Operational Research, 242(1), pp.261-273.

Snedaker, S., 2013. Business continuity and disaster recovery planning for IT professionals. Newnes.

Strong, C.K. and Shane, S.J., 2011. Risk mitigation strategies for operations and maintenance activities. Available at: https://www.intrans.iastate.edu/reports/tr-627_risk_mit_w_cvr2.pdf [Accessed 30 August 2017].

Young, S., 2016. Disaster Recovery and Business Continuity of SCADA. Handbook of SCADA/Control Systems Security, p.115.

Sunday, March 22, 2020

John Locke Theory Of Property Essays - Libertarian Theory, Property

John Locke's Theory of Property

Perhaps one of, if not the, most historically influential political thinkers of the western world was John Locke. John Locke, the man who initiated what is now known as British Empiricism, is also considered highly influential in establishing grounds, theoretically at least, for the constitution of the United States of America. The basis for understanding Locke is that he sees all people as having natural, God-given rights. As God's creations, this denotes a certain equality, at least in an abstract sense. This religious backdrop acts as the foundation for all of Locke's theories, including his theories of individuality, private property, and the state. The reader will be shown how and why people have a natural right to property and the impact this has on the sovereign, as well as the extent of this impact.

Locke was a micro-based ideologist. He believed that humans were autonomous individuals who, although they lived in a social setting, could not be described as herd or social animals. Locke believed "person" to stand for "... a thinking, intelligent being, that has reason and reflection, and can consider itself as itself, the same thinking thing in different times and places, which it only does by that consciousness which is inseparable from thinking." This ability to reflect, think, and reason intelligibly is one of the many gifts from God, and it is the gift which separates us from the realm of the beast. The ability to reason and reflect, although universal, acts as an explanation for individuality. All reason and reflection is based on personal experience and reference. Personal experience must be completely individual, as no one can experience anything quite the same as another. This leads to why Locke theorized that all humans (speaking patriarchally, with respect to the time, all men) have a natural right to property.

Every man is a creation of God's, and as such is endowed with certain individual abilities and characteristics as gifts from God. Not being able to know God's exact wishes for man, Locke believed that all men have an obligation to develop and cherish these gifts. In essence, each man was in charge of his own body and what was done with his body. Of course, for Locke, each man would do the reasonable thing and develop his natural skills and potentials to the best of his abilities, in the service of God. The belief in God-given abilities and the obligations that follow is not totally deterministic. Man, endowed with reason, could choose not to develop these abilities. Having the ability to choose the development of his potential, each man is responsible for that potential and consequently is responsible for his own body. The development, or lack thereof, is a consequence of individual motivation and is manifested through labor. In keeping with the theory that one's body is one's own, a man's property can be explained in terms of the quantifying force of his labors. Physical labor, or the exercise of his mind, to produce fruits from his labor, makes those fruits his own property. Locke believed that one did not need the consent of a sovereign, as far as property was concerned, because it is the melding of labor and nature that makes anything owned.
Yolton articulates this when he states, "(b)y mixing my work, my energy with some object (nature), I particularise that object; its commonness becomes particular." Locke believed that as long as there was plenty for others, consent was pointless, irrelevant, and would merely be an overzealous exercise of power. Pointless, because as long as there was more for others in the common store, one was not infringing on another's natural rights. Irrelevant, because property production, or the use of labor, was completely individualistic, and one should not be able to control another's labor, as that would be an infringement on their natural rights. There are, however, limits as far as property and labor are concerned. One limit is that of non-destruction. God did not create anything for man to destroy. The amount produced by any man should be kept in check by his level of destruction. For example, there is a big difference between the cutting of

Friday, March 6, 2020

Multithreaded Delphi Database Queries With dbGo (ADO)

Multithreaded Delphi Database Queries With dbGo (ADO)

By design, a Delphi application runs in one thread. To speed up some parts of the application, you might decide to add several simultaneous paths of execution to your Delphi application.

Multithreading in Database Applications

In most scenarios, database applications you create with Delphi are single threaded: a query you run against the database needs to finish (processing of the query results) before you can fetch another set of data. To speed up data processing, for example fetching data from the database to create reports, you can add an additional thread to fetch and operate on the result (recordset).

Continue reading to learn about the 3 traps in multithreaded ADO database queries:

Solve: "CoInitialize was not called."
Solve: "Canvas does not allow drawing."
Main TADOConnection cannot be used!

Customer Order Scenario

In the well-known scenario where a customer places orders containing items, you might need to display all the orders for a particular customer along with the total number of items per order. In a normal single-threaded application you would need to run the query to fetch the data, then iterate over the recordset to display the data. If you want to run this operation for more than one customer, you need to run the procedure sequentially for each of the selected customers. In a multithreaded scenario you can run the database query for every selected customer in a separate thread, and thus have the code execute several times faster.

Multithreading in dbGo (ADO)

Let's say you want to display orders for 3 selected customers in a Delphi list box control.

type
  TCalcThread = class(TThread)
  private
    procedure RefreshCount;
  protected
    procedure Execute; override;
  public
    ConnStr: widestring;
    SQLString: widestring;
    ListBox: TListBox;
    Priority: TThreadPriority;
    TicksLabel: TLabel;
    Ticks: Cardinal;
  end;

This is the interface part of a custom thread class we are going to use to fetch and operate on all the orders for a selected customer. Every order gets displayed as an item in a list box control (ListBox field). The ConnStr field holds the ADO connection string. The TicksLabel holds a reference to a TLabel control that will be used to display thread execution times in a synchronized procedure. The RunThread procedure creates and runs an instance of the TCalcThread thread class.
function TADOThreadedForm.RunThread(SQLString: WideString; LB: TListBox; Priority: TThreadPriority; lbl: TLabel): TCalcThread;
var
  CalcThread: TCalcThread;
begin
  CalcThread := TCalcThread.Create(true);
  CalcThread.FreeOnTerminate := true;
  CalcThread.ConnStr := ADOConnection1.ConnectionString;
  CalcThread.SQLString := SQLString;
  CalcThread.ListBox := LB;
  CalcThread.Priority := Priority;
  CalcThread.TicksLabel := lbl;
  CalcThread.OnTerminate := ThreadTerminated;
  CalcThread.Resume;
  Result := CalcThread;
end;

When the 3 customers are selected from the drop-down boxes, we create 3 instances of TCalcThread:

var
  s, sg: WideString;
  c1, c2, c3: integer;
  ct1, ct2, ct3: TCalcThread;
begin
  s := 'SELECT O.SaleDate, MAX(I.ItemNo) AS ItemCount ' +
       'FROM Customer C, Orders O, Items I ' +
       'WHERE C.CustNo = O.CustNo AND I.OrderNo = O.OrderNo ';
  sg := ' GROUP BY O.SaleDate ';

  c1 := Integer(ComboBox1.Items.Objects[ComboBox1.ItemIndex]);
  c2 := Integer(ComboBox2.Items.Objects[ComboBox2.ItemIndex]);
  c3 := Integer(ComboBox3.Items.Objects[ComboBox3.ItemIndex]);

  Caption := '';

  ct1 := RunThread(Format('%s AND C.CustNo = %d %s', [s, c1, sg]), lbCustomer1, tpTimeCritical, lblCustomer1);
  ct2 := RunThread(Format('%s AND C.CustNo = %d %s', [s, c2, sg]), lbCustomer2, tpNormal, lblCustomer2);
  ct3 := RunThread(Format('%s AND C.CustNo = %d %s', [s, c3, sg]), lbCustomer3, tpLowest, lblCustomer3);
end;

Traps and Tricks With Multithreaded ADO Queries
The main code goes in the thread's Execute method:

procedure TCalcThread.Execute;
var
  Qry: TADOQuery;
begin
  inherited;
  CoInitialize(nil); // "CoInitialize was not called" -- requires ActiveX in the uses clause

  Qry := TADOQuery.Create(nil);
  try
    // MUST USE ITS OWN CONNECTION, not the main form's one:
    // Qry.Connection := Form1.ADOConnection1;
    Qry.ConnectionString := ConnStr;
    Qry.CursorLocation := clUseServer;
    Qry.LockType := ltReadOnly;
    Qry.CursorType := ctOpenForwardOnly;
    Qry.SQL.Text := SQLString;
    Qry.Open;

    while NOT Qry.Eof and NOT Terminated do
    begin
      ListBox.Items.Insert(0, Format('%s - %d', [Qry.Fields[0].AsString, Qry.Fields[1].AsInteger]));

      // "Canvas does not allow drawing" if not called through Synchronize
      Synchronize(RefreshCount);

      Qry.Next;
    end;
  finally
    Qry.Free;
  end;
  CoUninitialize();
end;

There are 3 traps you need to know how to solve when creating multithreaded Delphi ADO database applications: CoInitialize and CoUninitialize must be called manually before using any of the dbGo objects. Failing to call CoInitialize will result in the "CoInitialize was not called" exception. The CoInitialize method initializes the COM library on the current thread. ADO is COM. You *cannot* use the TADOConnection object from the main thread (application). Every thread needs to create its own database connection. You must use the Synchronize procedure to talk to the main thread and access any controls on the main form.
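The article wires up OnTerminate := ThreadTerminated and calls Synchronize(RefreshCount), but neither routine is listed. A minimal sketch of what they might look like, assuming the thread fills in Ticks itself (for example with GetTickCount around Qry.Open) and that SysUtils is in the uses clause; both bodies are illustrative, not from the original article:

procedure TCalcThread.RefreshCount;
begin
  // Called only through Synchronize, so this runs in the main VCL thread
  // and may safely touch the label owned by the form.
  TicksLabel.Caption := Format('%d row(s), %d ms', [ListBox.Items.Count, Ticks]);
end;

procedure TADOThreadedForm.ThreadTerminated(Sender: TObject);
begin
  // OnTerminate handlers always run in the main thread, so the form
  // can update its own caption here without extra synchronization.
  Caption := Caption + ' thread done.';
end;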

Tuesday, February 18, 2020

Information System in Tesco Case Study Example

Information System in Tesco - Case Study Example In addition, it operates 39 stores in China, 111 in Japan, 13 in Malaysia, 62 in South Korea, 6 in Taiwan, 219 in Thailand, 35 in the Czech Republic, 87 in Hungary, 105 in Poland, 91 in the Republic of Ireland, 37 in Slovakia and 8 in Turkey (Tesco Annual Review, 2006). It therefore operates round the clock, which demonstrates its commitment to creating customer value, which in turn generates superior value for its shareholders. Tesco attributes its success to an aggressive global strategy of geographical diversification, product differentiation, cost focus and its use of umbrella branding. In its attempt to renew the brand and keep it in sync with changing customer tastes, Tesco follows various strategies including international diversification, providing value to customers, product diversification, innovation, acquisition and umbrella branding. In the context of today's global competition, businesses and firms no longer compete as individual companies but try to cooperate with other businesses in their activities (Wu & Chien 2007). These researchers further argue that this strategy is now quite common in many businesses, including retail chain stores. In retail chain stores, the conventional vertically integrated, company-based business model is gradually being replaced by collaborative relationships between many fragmented, but complementary and specialized, value stars and constellations (Wu & Chien 2007). In the present state of events, information and information systems are increasingly being used by many corporations as a competitive and strategic weapon. They help organizations such as Tesco achieve their goals, as they redefine their activities in terms of cost and time. Information systems help Tesco to adapt and be at the forefront of new product development and change. By incorporating information and information systems, the management of Tesco has improved its processes, increasing current operational efficiency and turning it into a competitive advantage. Through Tesco's information and information systems, suppliers are interconnected through an electronic data interchange to support strategic, administrative and operational planning. 2.0 Tesco Environmental Analysis LAYOUT OF THE PESTEL FRAMEWORK According to Johnson et al. (2005), the PESTEL framework can only be used to look at the future

Monday, February 3, 2020

Building a Second Runway at Gatwick Airport Assignment

Building a Second Runway at Gatwick Airport - Assignment Example Considering an increase in average aircraft size, the Airport Management predicts that approximately 40 million passengers will be handled by the year 2015, 86% of whom would start or end their journeys at Gatwick. Such an action will be associated with a number of benefits and costs. The benefits include the creation of jobs. According to the Gatwick Airport Management, workers at the airport will benefit from an extra income of 7 million a week. In addition, it is anticipated that 27,300 jobs will be created at Gatwick in 2015. The noise from aircraft, as well as the emission of nitrogen oxide, is a cause for concern. In addition, the operation of the airport itself has a range of environmental impacts over and above those of the aircraft that use it. A second runway at Gatwick is therefore characterised by a number of costs and benefits. According to Friends of the Earth (2003), the government's intention to build more airports, terminals and runways is likely to have a disastrous effect on the world's climate. One of the main causes of climate change is carbon dioxide pollution, and aviation is one of the fastest growing sources (Friends of the Earth, 2003). ... 1.1.2 Objectives and Hypothesis of Work Considering that a second runway has both opportunities and threats as well as costs and benefits, the objectives and hypothesis of this paper are defined within the following. The main objective of this paper will be to evaluate and analyse the problems faced by Gatwick Airport in operating on a single runway. In addition, the paper will analyse and find out whether a second runway at Gatwick is worth the trouble. The paper finally looks into the consequences should the government fail to construct a second runway. 2.0 Summary of Literature According to Friends of the Earth (2003), the government's intention to build more airports, terminals and runways is likely to have a disastrous effect on the world's climate. One of the main causes of climate change is carbon dioxide pollution, and aviation is one of the fastest growing sources (Friends of the Earth, 2003). According to another report by Friends of the Earth titled "Aviation and Global Climate Change", air travel emissions are already responsible for 3.5 per cent of man-made climate change, and if the industry continues to grow at its current rate it will go up to 15 per cent by 2050 (http://www.foe.co.uk/resource/reports/aviation_climate_change.pdf). In June 2003 the Royal Commission on Environmental Pollution (RCEP) wrote to the Government and noted that "even with the most conservative figures for growth in air travel, by 2020 aviation will be contributing 10 per cent of the UK's carbon dioxide emissions" (Friends of the Earth, 2003). In other research conducted by Friends of the Earth (FOE), it was postulated that research, based on the Government's own data, shows how the present

Sunday, January 26, 2020

Factors Affecting Web Applications Maintenance

Factors Affecting Web Applications Maintenance

Chapter 1
1.1 Introduction
Software engineering [PRE01] is the process associated with industrial quality software development, the methods used to analyze, design and test computer Software, the management techniques associated with the control and monitoring of Software projects, and the tools used to support the process, methods and techniques. In the Software Development Life Cycle, the focus is on activities like feasibility study, requirement analysis, design, coding, testing and maintenance. The feasibility study involves issues like the technical, economical and behavioral feasibility of the project. Requirement analysis [DAV93] emphasizes identifying the needs of the system and producing the Software Requirements Specification (SRS) document [JAL04] that describes all data, functional and behavioral requirements, constraints and validation requirements for the Software. Software Design is to plan a solution of the problem specified by the SRS document, a step in moving from the problem domain to the solution domain. The output of this phase is the design document. Coding is to translate the design of the system into code in a programming language. Testing is the process to detect defects and minimize the risk associated with the residual defects. The activities carried out after the delivery of the software comprise the maintenance phase.

1.2 Evolution of the Software Testing Discipline
The effective functioning of modern systems depends on our ability to produce software in a cost-effective way. The term software engineering was first used at a 1968 NATO workshop in West Germany. It focused on the growing software crisis. Thus we see that the software crisis concerning quality, reliability, high costs etc. started way back when most of today's software testers were not even born. The attitude towards Software Testing [BEI90] underwent a major positive change in recent years. In the 1950s, when machine languages were used, testing was nothing but debugging. When compilers were developed in the 1960s, testing started to be considered a separate activity from debugging. In the 1970s, when software engineering concepts were introduced, software testing began to evolve as a technical discipline. Over the last two decades there has been an increased focus on better, faster and cost-effective software. There has also been a growing interest in software safety, protection and security, and hence an increased acceptance of testing as a technical discipline and also a career choice. Now to answer "What is Testing?" we can go by the famous definition of Myers [MYE79], which says, "Testing is the process of executing a program with the intent of finding errors." According to Humphrey, software testing is defined as "the execution of a program to find its faults." Testing is the process to prove that the software works correctly [PRA06]. Software testing is a crucial aspect of the software life cycle. In some form or the other it is present at each phase of (any) software development or maintenance model. The importance of software testing and its impact on software cannot be overstated. Software testing is a fundamental component of software quality assurance and represents a review of specification, design and coding. The greater visibility of software systems and the cost associated with software failure are motivating factors for planning thorough testing. It is not uncommon for a software organization to spend 40-50% of its effort on testing.
During testing, the software engineer produces a series of test cases that are used to rip apart the software they have produced. Testing is the one step in the software process that can be seen by the developer as destructive instead of constructive. Software engineers are typically constructive people, and testing requires them to overcome preconceived notions of correctness and deal with conflicts when errors are identified. A successful test is one that finds a defect. This sounds simple enough, but there is much to consider when we want to do software testing. Besides finding faults, we may also be interested in testing performance, safety, fault-tolerance or security. Testing often becomes a question of economics. For projects of a large size, more testing will usually reveal more bugs. The question then becomes when to stop testing, and what is an acceptable level of bugs. This is the question of good enough software. Testing is the process of verifying that a product meets all requirements. A test is never complete. When testing software the goal should never be a product completely free from defects, because that is impossible. According to Peter Nielsen, "The average is 16 faults per 1000 lines of code when the programmer has tested his code and it is believed to be correct." When looking at a larger project, there are millions of lines of code, which makes it impossible to find all present faults. Far too often products are released on the market with poor quality. Errors are often uncovered by users, and at that stage the cost of removing errors is high.

1.3 Objectives of Testing
Glen Myers [MYE79] states a number of rules that can serve well as testing objectives: Testing is a process of executing a program with the intent of finding an error. A good test is one that has a high probability of finding an as yet undiscovered error. A successful test is one that uncovers an as yet undiscovered error. The objective is to design tests that systematically uncover different classes of errors and do so with a minimum amount of time and effort. Secondary benefits include demonstrating that Software functions appear to be working according to specification and that performance requirements appear to have been met. Data collected during testing provides a good indication of Software reliability and some indication of Software quality. Testing cannot show the absence of defects; it can only show that Software defects are present.

1.4 Software Testing and Its Relation with the Software Life Cycle
Software testing should be thought of as an integral part of the Software process and an activity that must be carried out throughout the life cycle. Each phase in the Software lifecycle has a clearly different end product, such as the Software Requirements Specification (SRS) documentation, the program unit design and the program unit code. Each end product can be checked for conformance with a previous phase and against the original requirements. Thus, errors can be detected at each phase of development. Validation and Verification should occur throughout the Software lifecycle. Verification is the process of evaluating each phase's end product to ensure consistency with the end product of the previous phase. Validation is the process of testing Software, or a specification, to ensure that it matches user requirements. Software testing is that part of validation and verification associated with evaluating and analysing program code. It is one of the two most expensive stages within the Software lifecycle, the other being maintenance.
Software testing of a product begins after the development of the program units and continues until the product is obsolete. Testing and fixing can be done at any stage in the life cycle. However, the cost of finding and fixing errors increases dramatically as development progresses. Changing a Requirements document during the first review is inexpensive. It costs more when requirements change after the code has been written: the code must be rewritten. Bug fixes are much cheaper when programmers find their own errors. Fixing an error before releasing a program is much cheaper than sending new disks, or even a technician, to each customer's site to fix it later. This is illustrated in Figure 1.1. The types of testing required during several phases of the Software lifecycle are described below:
Requirements Requirements must be reviewed with the client; rapid prototyping can refine requirements and accommodate changing requirements.
Specification The specifications document must be checked for feasibility, traceability, completeness, and absence of contradictions and ambiguities. Specification reviews (walkthroughs or inspections) are especially effective.
Design Design reviews are similar to specification reviews, but more technical. The design must be checked for logic faults, interface faults, lack of exception handling, and non-conformance to specifications.
Implementation Code modules are informally tested by the programmer while they are being implemented (desk checking). Thereafter, formal testing of modules is done methodically by a testing team. This formal testing can include non-execution-based methods (code inspections and walkthroughs) and execution-based methods (black-box testing, white-box testing).
Integration Integration testing is performed to ensure that the modules combine together correctly to achieve a product that meets its specifications. Particular care must be given to the interfaces between modules. The appropriate order of combination must be determined as top-down, bottom-up, or a combination thereof.
Product Testing The functionality of the product as a whole is checked against its specifications. Test cases are derived directly from the specifications document. The product is also tested for robustness (error-handling capabilities and stress tests). All source code and documentation are checked for completeness and consistency.
Acceptance Testing The Software is delivered to the client, who tests the Software on the actual hardware, using actual data instead of test data. A product cannot be considered to satisfy its specifications until it has passed an acceptance test. Commercial off-the-shelf (or shrink-wrapped) Software usually undergoes alpha and beta testing as a form of acceptance test.
Maintenance Modified versions of the original product must be tested to ensure that changes have been correctly implemented. Also, the product must be tested against previous test cases to ensure that no inadvertent changes have been introduced. This latter consideration is termed regression testing (a small sketch of this idea follows at the end of this section).
Software Process Management The Software process management plan must undergo scrutiny. It is especially important that cost and duration estimates be checked thoroughly. If left unchecked, errors can propagate through the development lifecycle and amplify in number and cost. The cost of detecting and fixing an error is well documented and is known to become more costly as the system develops. An error found during the operation phase is the most costly to fix.
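As a minimal, hypothetical sketch of the regression idea from the Maintenance paragraph above: the ShippingCost routine and its rates are invented for the example; the point is only that the same checks are kept and re-run unchanged after every modification, so an inadvertent change is caught immediately.

program RegressionDemo;
{$APPTYPE CONSOLE}

// Hypothetical routine whose behaviour must not change between releases.
function ShippingCost(WeightKg: Integer): Integer;
begin
  if WeightKg <= 0 then
    Result := 0
  else if WeightKg <= 10 then
    Result := 5                      // flat rate up to 10 kg
  else
    Result := 5 + (WeightKg - 10);   // 1 per extra kg
end;

procedure Check(const Name: string; Expected, Actual: Integer);
begin
  if Expected = Actual then
    Writeln('pass ', Name)
  else
    Writeln('FAIL ', Name, ' (expected ', Expected, ', got ', Actual, ')');
end;

begin
  // The previous test cases are kept and re-run after every change.
  Check('non-positive weight', 0, ShippingCost(0));
  Check('flat-rate band', 5, ShippingCost(10));
  Check('per-kilogram band', 7, ShippingCost(12));
end.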
1.5 Principles of Software Testing
Software testing is an extremely creative and intellectually challenging task. The following are some important principles [DAV95] that should be kept in mind while carrying out Software testing [PRE01] [SUM02]:
Testing should be based on user requirements: This is in order to uncover any defects that might cause the program or system to fail to meet the client's requirements.
Testing time and resources are limited: Avoid redundant tests.
It is impossible to test everything: Exhaustive tests of all possible scenarios are impossible, because of the many different variables affecting the system and the number of paths a program flow might take.
Use effective resources to test: This means using the most suitable tools, procedures and individuals to conduct the tests. Only those tools should be used by the test team that they are confident and familiar with. Testing procedures should be clearly defined. Testing personnel may be a technical group of people independent of the developers.
Test planning should be done early: This is because test planning can begin independently of coding, as soon as the client requirements are set.
Test for invalid and unexpected input conditions as well as valid conditions: The program should generate correct messages when an invalid test is encountered and should generate correct results when the test is valid.
The probability of the existence of more errors in a module or group of modules is directly proportional to the number of errors already found.
Testing should begin at the module level: The focus of testing should be concentrated on the smallest programming units first and then expand to other parts of the system.
Testing must be done by an independent party: Testing should not be performed by the person or team that developed the Software, since they tend to defend the correctness of the program.
Assign the best personnel to the task: Because testing requires high creativity and responsibility, only the best personnel must be assigned to design and implement test cases and to analyze test data and test results.
Testing should not be planned under the implicit assumption that no errors will be found.
Testing is the process of executing Software with the intention of finding errors.
Keep the Software static during test: The program must not be modified during the implementation of the set of designed test cases.
Document test cases and test results.
Provide expected test results if possible: A necessary part of test documentation is the specification of expected results, even though this may sometimes be impractical.

1.6 Software Testability and Its Characteristics
Testability is the ease with which Software (or a program) can be tested [PRE01] [SUM02]. The following are some key characteristics of testability:
The better it works, the more efficient the testing process.
What you see is what you test (WYSIWYT).
The better it is controlled, the more we can automate or optimize the testing process.
By controlling the scope of testing we can isolate problems and perform smarter retesting.
The less there is to test, the more quickly we can test it.
The fewer the changes, the fewer the disruptions to testing.
The more information we have, the smarter we will test.

1.7 Stages in the Software Testing Process
Except for small programs, systems should not be tested as a single unit. Large systems are built out of sub-systems, which are built out of modules that are composed of procedures and functions.
The testing process should therefore proceed in stages, where testing is carried out incrementally in conjunction with system implementation. The most widely used testing process consists of the five stages illustrated in Table 1.1. Errors in program components, say, may come to light at a later stage of the testing process. The process is therefore an iterative one, with information being fed back from later stages to earlier parts of the process. The iterative testing process is illustrated in Figure 1.2 and described below:
Unit Testing: Unit testing is code-oriented testing. Individual components are tested to ensure that they operate correctly. Each component is tested independently, without other system components.
Module Testing: A module is a collection of dependent components such as an object class, an abstract data type or some looser collection of procedures and functions. A module encapsulates related components, so it can be tested without other system modules.
Sub-system (Integration) Testing: This phase involves testing collections of modules which have been integrated into sub-systems. It is design-oriented testing and is also known as integration testing. Sub-systems may be independently designed and implemented. The most common problems which arise in large Software systems are sub-system interface mismatches. The sub-system test process should therefore concentrate on the detection of interface errors by rigorously exercising these interfaces.
System Testing: The sub-systems are integrated to make up the entire system. The testing process is concerned with finding errors that result from unanticipated interactions between sub-systems and system components. It is also concerned with validating that the system meets its functional and non-functional requirements.
Acceptance Testing: This is the final stage in the testing process before the system is accepted for operational use. The system is tested with data supplied by the system client rather than simulated test data. Acceptance testing may reveal errors and omissions in the system's requirements definition (user-oriented), because real data exercises the system in different ways from the test data. Acceptance testing may also reveal requirement problems where the system facilities do not really meet the user's needs (functional) or the system performance (non-functional) is unacceptable.

1.8 The V-model of Testing
To test an entire software system, tests on different levels are performed. The V model [FEW99], shown in Figure 1.3, illustrates the hierarchy of tests usually performed in software development projects. The left part of the V represents the documentation of an application: the Requirement specification, the Functional specification, the System design and the Unit design. Code is written to fulfill the requirements in these specifications, as illustrated at the bottom of the V. The right part of the V represents the test activities that are performed during development to ensure that the application corresponds to its requirements. Unit tests are used to test that all functions and methods in a module are working as intended. When the modules have been tested, they are combined, and integration tests are used to test that they work together as a group. The unit and integration tests complement the system test. System testing is done on a complete system to validate that it corresponds to the system specification. A system test includes checking whether all functional and all non-functional requirements have been met.
Unit, integration and system tests are developer focused, while acceptance tests are customer focused. Acceptance testing checks that the system contains the functionality requested by the customer in the Requirement specification. Customers are usually responsible for the acceptance tests, since they are the only persons qualified to make the judgment of approval. The purpose of the acceptance tests is that after they are performed, the customer knows which parts of the Requirement specification the system satisfies.

1.9 The Testing Techniques
To perform these types of testing, there are three widely used testing techniques. The testing types described above are performed based on the following testing techniques:

Black-Box testing technique
Black box testing (Figure 1.4) is concerned only with testing the specification. It cannot guarantee that the complete specification has been implemented. Thus black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled. It is used for testing based solely on analysis of requirements (specification, user documentation). In black box testing, test cases are designed using only the functional specification of the software, i.e. without any knowledge of the internal structure of the software. For this reason, black-box testing is also known as functional testing. Black box tests are performed to assess how well a program meets its requirements, looking for missing or incorrect functionality. Functional testing typically exercises code with valid or nearly valid input for which the expected output is known. This includes concepts such as boundary values. Performance tests evaluate response time, memory usage, throughput, device utilization, and execution time. Stress tests push the system to or beyond its specified limits to evaluate its robustness and error handling capabilities. Reliability tests monitor system response to representative user input, counting failures over time to measure or certify reliability. Black box testing refers to analyzing a running program by probing it with various inputs. This kind of testing requires only a running program and does not make use of source code of any kind. In the security paradigm, malicious input can be supplied to the program in an effort to cause it to break. If the program breaks during a particular test, then a security problem may have been discovered. Black box testing is possible even without access to binary code. That is, a program can be tested remotely over a network. All that is required is a program running somewhere that is accepting input. If the tester can supply input that the program consumes (and can observe the effect of the test), then black box testing is possible. This is one reason that real attackers often resort to black box techniques. Black box testing is not an alternative to white box techniques. It is a complementary approach that is likely to uncover a different type of errors than the white box approaches. Black box testing tries to find errors in the following categories: incorrect or missing functions; interface errors; errors in data structures or external database access; performance errors; and initialization and termination errors. By applying black box approaches we produce a set of test cases that fulfill two requirements: test cases that reduce the number of additional test cases needed to achieve reasonable testing, and test cases that tell us something about the presence or absence of classes of errors.
The methodologies used for black box testing are discussed below:
1.9.1.1 Equivalence Partitioning
Equivalence partitioning is a black box testing approach that splits the input domain of a program into classes of data from which test cases can be produced. An ideal test case single-handedly uncovers a class of errors that might otherwise require many test cases to be executed before the error is detected. Equivalence partitioning tries to outline a test case that identifies classes of errors. Test case design for equivalence partitioning is founded on an evaluation of equivalence classes for an input condition [BEI95]. An equivalence class depicts a set of valid or invalid states for the input condition. Equivalence classes can be defined based on the following [PRE01]: If an input condition specifies a range, one valid and two invalid equivalence classes are defined. If an input condition needs a specific value, one valid and two invalid equivalence classes are defined. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined. If an input condition is Boolean, one valid and one invalid class are defined.
1.9.1.2 Boundary Value Analysis
A great many errors happen at the boundaries of the input domain, and for this reason boundary value analysis was developed. Boundary value analysis is a test case design approach that complements equivalence partitioning. BVA also produces test cases from the output domain [MYE79]. Guidelines for BVA are close to those for equivalence partitioning [PRE01]: If an input condition specifies a range bounded by values a and b, test cases should be produced with values a and b, and just above and just below a and b, respectively. If an input condition specifies various values, test cases should be produced to exercise the minimum and maximum numbers. Apply the guidelines above to output conditions. If internal program data structures have prescribed boundaries, produce test cases to exercise those data structures at their boundaries. (A small worked sketch of both black-box techniques follows the white-box overview below.)
White-Box testing technique
White box testing (Figure 1.5) is testing against the implementation, as it is based on analysis of internal logic (design, code, etc.), and will discover faults of commission, indicating that part of the implementation is faulty. Designing white-box test cases requires thorough knowledge of the internal structure of the software, and therefore white-box testing is also called structural testing. White box testing is performed to reveal problems with the internal structure of a program. A common goal of white-box testing is to ensure a test case exercises every path through a program. A fundamental strength that all white box testing strategies share is that the entire software implementation is taken into account during testing, which facilitates error detection even when the software specification is vague or incomplete. The effectiveness or thoroughness of white-box testing is commonly expressed in terms of test or code coverage metrics, which measure the fraction of code exercised by test cases. White box testing involves analyzing and understanding source code. Sometimes only binary code is available, but if you decompile a binary to get source code and then study the code, this can be considered a kind of white box testing as well. White box testing is typically very effective in finding programming errors and implementation errors in software. In some cases this activity amounts to pattern matching and can even be automated with a static analyzer.
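Before the individual white-box methods are described, a small sketch to make the two black-box techniques above concrete. The IsValidScore routine and its 0..100 range are invented for the illustration; the chosen inputs are simply the ones the equivalence-partitioning and boundary-value guidelines would suggest.

program BlackBoxDemo;
{$APPTYPE CONSOLE}

// Hypothetical routine under test: the valid input range is 0..100.
function IsValidScore(Score: Integer): Boolean;
begin
  Result := (Score >= 0) and (Score <= 100);
end;

procedure Expect(const Name: string; Expected, Actual: Boolean);
begin
  if Expected = Actual then
    Writeln('pass ', Name)
  else
    Writeln('FAIL ', Name);
end;

begin
  // Equivalence partitioning: one valid class (0..100) and two
  // invalid classes (below 0, above 100), one representative each.
  Expect('valid class', True, IsValidScore(50));
  Expect('invalid class below range', False, IsValidScore(-20));
  Expect('invalid class above range', False, IsValidScore(250));

  // Boundary value analysis: the values a and b themselves and the
  // values just above and just below them (a = 0, b = 100).
  Expect('lower boundary', True, IsValidScore(0));
  Expect('just below lower boundary', False, IsValidScore(-1));
  Expect('just above lower boundary', True, IsValidScore(1));
  Expect('upper boundary', True, IsValidScore(100));
  Expect('just below upper boundary', True, IsValidScore(99));
  Expect('just above upper boundary', False, IsValidScore(101));
end.

Nine inputs cover one representative per equivalence class plus the values on and around both boundaries, which is where, as noted above, a great many errors happen.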
White box testing is a test case design approach that employs the control architecture of the procedural design to produce test cases. Using white box testing approaches, the software engineer can produce test cases that: guarantee that all independent paths in a module have been exercised at least once; exercise all logical decisions; execute all loops at their boundaries and within their operational bounds; and exercise internal data structures to maintain their validity. There are several methodologies used for white box testing. We discuss some important ones below.
1.9.2.1 Statement Coverage
The statement coverage methodology aims to design test cases so as to force the execution of every statement in a program at least once. The principal idea governing the statement coverage methodology is that unless a statement is executed, we have no way of determining whether an error exists in that statement. In other words, the statement coverage criterion [RAP85] is based on the observation that an error existing in one part of a program cannot be discovered if the part of the program containing the error and generating the failure is not executed. However, executing a statement once, and that too for just one input value, and observing that it behaves properly for that input value is no guarantee that it will behave correctly for all inputs.
1.9.2.2 Branch Coverage
In branch coverage testing, test cases are designed such that the different branch conditions are given true and false values in turn. It is obvious that branch testing guarantees statement coverage and is thus a stronger testing criterion than statement coverage testing [RAP85].
1.9.2.3 Path Coverage
The path coverage based testing strategy requires designing test cases such that all linearly independent paths in the program are executed at least once. A linearly independent path is defined in terms of the control flow graph (CFG) of the program.
1.9.2.4 Loop Testing
Loops are very important constructs in practically all algorithms. Loop testing is a white box testing technique. It focuses exclusively on the validity of loop constructs. Simple loops, concatenated loops, nested loops, and unstructured loops are the four different types of loops [BEI90], as shown in Figure 1.6.
Simple loop: The following set of tests should be applied to a simple loop, where n is the maximum number of allowable passes through the loop: skip the loop entirely; only one pass through the loop; two passes through the loop; m passes through the loop, where m is less than n; and n-1, n and n+1 passes through the loop.
Nested loop: Beizer's [BEI90] approach to nested loops: Start at the innermost loop. Set all other loops to minimum values. Conduct the simple loop test for the innermost loop while holding the outer loops at their minimum iteration parameter value. Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values and other nested loops at typical values. Continue until all loops have been tested.
Concatenated loops: These can be tested using the approach for simple loops if each loop is independent of the others. However, if the loop counter of loop 1 is used as the initial value for loop 2, then the approach for nested loops is to be used.
Unstructured loop: This class of loops should be redesigned to reflect the use of the structured programming constructs.
1.9.2.5 McCabe's Cyclomatic Complexity
McCabe's Cyclomatic Complexity [MCC76] of a program defines the number of independent paths in a program.
Given a control flow graph G of a program, McCabe's Cyclomatic Complexity V(G) can be computed as

V(G) = E - N + 2

where E is the number of edges in the control flow graph and N is the number of nodes in the control flow graph. The cyclomatic complexity value of a program defines the number of independent paths in the basis set of the program and provides a lower bound for the number of test cases that must be conducted to ensure that all statements have been executed at least once. Knowing the number of test cases required does not make it easy to derive the test cases; it only gives an indication of the minimum number of test cases required. The following is the sequence of steps that need to be undertaken for deriving the path coverage based test cases of a program: draw the CFG; calculate the Cyclomatic Complexity V(G); determine the basis set of linearly independent paths; prepare a test case that will force execution of each path in the basis set. (A small worked example is given below, after section 1.9.3.)
1.9.2.6 Data Flow based Testing
The data flow testing method chooses test paths of a program based on the locations of definitions and uses of variables in the program. Various data flow testing approaches have been examined [FRA88] [NTA88] [FRA93]. For data flow testing, each statement in the program is assigned a unique statement number, and it is assumed that each function does not alter its parameters or global variables. For a statement with S as its statement number:
DEF(S) = {X | statement S contains a definition of X}
USE(S) = {X | statement S contains a use of X}
If statement S is an if or loop statement, its DEF set is left empty and its USE set is founded on the condition of statement S. The definition of a variable X at statement S is live at statement S' if there exists a path from statement S to S' which does not contain any other definition of X. A definition-use chain (or DU chain) of variable X is of the type [X, S, S'], where S and S' are statement numbers, X is in DEF(S) and USE(S'), and the definition of X in statement S is live at statement S'. One basic data flow testing strategy is that each DU chain be covered at least once. Data flow testing strategies are helpful for choosing test paths of a program that includes nested if and loop statements.
1.9.3 Grey-Box testing technique
Grey box testing [BIN99] designs test cases using both responsibility-based (black box) and implementation-based (white box) approaches. To completely test a web application one needs to combine the two approaches, white-box and black-box testing. It is used for testing of Web based applications. The grey-box testing approach takes into account all components
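As a small worked example of the basis-path steps listed in section 1.9.2.5: the TriangleKind routine is invented for the illustration, and each compound condition is treated as a single predicate node when building the control flow graph.

program BasisPathDemo;
{$APPTYPE CONSOLE}

// Hypothetical routine: classify a triangle by its side lengths.
function TriangleKind(A, B, C: Integer): string;
begin
  if (A <= 0) or (B <= 0) or (C <= 0) then          // predicate 1
    Result := 'invalid'
  else if (A = B) and (B = C) then                   // predicate 2
    Result := 'equilateral'
  else if (A = B) or (B = C) or (A = C) then         // predicate 3
    Result := 'isosceles'
  else
    Result := 'scalene';
end;

begin
  // Steps 1-2: the CFG has N = 8 nodes (3 predicate nodes, 4 assignment
  // nodes, 1 exit node) and E = 10 edges, so
  //   V(G) = E - N + 2 = 10 - 8 + 2 = 4 linearly independent paths.
  // Steps 3-4: one test case per basis path.
  Writeln(TriangleKind(0, 1, 1));   // path 1: predicate 1 true  -> invalid
  Writeln(TriangleKind(2, 2, 2));   // path 2: predicate 2 true  -> equilateral
  Writeln(TriangleKind(2, 2, 3));   // path 3: predicate 3 true  -> isosceles
  Writeln(TriangleKind(3, 4, 5));   // path 4: all false         -> scalene
end.

With these four inputs every statement and every branch outcome of the routine is exercised at least once, which is exactly the lower bound V(G) promises.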
Software Design is to plan a solution of the problem specified by the SRS document, a step in moving from the problem domain to the solution domain. The output of this phase is the design document. Coding is to translate the design of the system into code in a programming language. Testing is the process to detect defects minimize the risk associated with the residual defects. The activities carried out after the delivery of the software comprises the maintenance phase. 1.2 Evolution of Software Testing Discipline The effective functioning of modern systems depends on our ability to produce software in a cost-effective way. The term software engineering was first used at a 1968 NATO workshop in West Germany. It focused on the growing software crisis. Thus we see that the software crisis on quality, reliability, high costs etc. started way back when most of todays software testers were not even born. The attitude towards Software Testing [BEI90] underwent a major positive change in the recent years. In the 1950s when Machine languages were used, testing was nothing but debugging. When in the 1960s, compilers were developed, testing started to be considered a separate activity from debugging. In the 1970s when the software engineering concepts were introduced, software testing began to evolve as a technical discipline. Over the last two decades there has been an increased focus on better, faster and cost-effective software. Also there has been a growing interest in software safety, protection and security and hence an increased acceptance of testing as a technical discipline and also a career choice. Now to answer, What is Testing? we can go by the famous definition of Myers [MYE79], which says, Testing is the process of executing a program with the intent of finding errors. According to Humphrey, software testing is defined as, the execution of a program to find its faults. Testing is the process to prove that the software works correctly [PRA06]. Software testing is a crucial aspect of the software life cycle. In some form or the other it is present at each phase of (any) software development or maintenance model. The importance of software testing and its impact on software cannot be underestimated. Software testing is a fundamental component of software quality assurance and represents a review of specification, design and coding. The greater visibility of software systems and the cost associated with software failure are motivating factors for planning, through testing. It is not uncommon for a software organization to spend 40-50% of its effort on testing. During testing, the software engineering produces a series of test cases that are used to rip apart the software they have produced. Testing is the one step in the software process that can be seen by the developer as destructive instead of constructive. Software engineers are typically constructive people and testing requires them to overcome preconceived concepts of correctness and deal with conflicts when errors are identified. A successful test is one that finds a defect. This sounds simple enough, but there is much to consider when we want to do software testing. Besides finding faults, we may also be interested in testing performance, safety, fault-tolerance or security. Testing often becomes a question of economics. For projects of a large size, more testing will usually reveal more bugs. The question then becomes when to stop testing, and what is an acceptable level of bugs. This is the question of good enough software. 
Testing is the process of verifying that a product meets all requirements. A test is never complete. When testing software the goal should never be a product completely free from defects, because its impossible. According to Peter Nielsen, The average is 16 faults per 1000 lines of code when the programmer has tested his code and it is believed to be correct. When looking at a larger project, there are millions of lines of code, which makes it impossible to find all present faults. Far too often products are released on the market with poor quality. Errors are often uncovered by users, and in that stage the cost of removing errors is large in amount. 1.3 Objectives of Testing Glen Myers [MYE79] states a number of rules that can serve well as testing objectives: Testing is a process of executing a program with the intent of finding an error. A good test is one that has a high probability of finding an as yet undiscovered error. A successful test is one that uncovers an as yet undiscovered error. The objective is to design tests that systematically uncover different classes of errors do so with a minimum amount of time effort. Secondary benefits include Demonstrate that Software functions appear to be working according to specification. That performance requirements appear to have been met. Data collected during testing provides a good indication of Software reliability some indication of Software quality. Testing cannot show the absence of defects, it can only show that Software defects are present. 1.4 Software Testing Its Relation with Software Life Cycle Software testing should be thought of as an integral part of the Software process an activity that must be carried out throughout the life cycle. Each phase in the Software lifecycle has a clearly different end product such as the Software requirements specification (SRS) documentation, program unit design program unit code. Each end product can be checked for conformance with a previous phase against the original requirements. Thus, errors can be detected at each phase of development. Validation Verification should occur throughout the Software lifecycle. Verification is the process of evaluating each phase end product to ensure consistency with the end product of the previous phase. Validation is the process of testing Software, or a specification, to ensure that it matches user requirements. Software testing is that part of validation verification associated with evaluating analysing program code. It is one of the two most expensive stages within the Software lifecycle, the other being maintenance. Software testing of a product begins after the development of the program units continues until the product is obsolete. Testing fixing can be done at any stage in the life cycle. However, the cost of finding fixing errors increases dramatically as development progresses. Changing a Requirements document during the first review is inexpensive. It costs more when requirements change after the code has been written: the code must be rewritten. Bug fixes are much cheaper when programmers find their own errors. Fixing an error before releasing a program is much cheaper than sending new disks, or even a technician to each customers site to fix it later. It is illustrated in Figure 1.1. The types of testing required during several phases of Software lifecycle are described below: Requirements Requirements must be reviewed with the client; rapid prototyping can refine requirements accommodate changing requirements. 
Specification The specifications document must be checked for feasibility, traceability, completeness, absence of contradictions ambiguities. Specification reviews (walkthroughs or inspections) are especially effective. Design Design reviews are similar to specification reviews, but more technical. The design must be checked for logic faults, interface faults, lack of exception handling, non-conformance to specifications. Implementation Code modules are informally tested by the programmer while they are being implemented (desk checking). Thereafter, formal testing of modules is done methodically by a testing team. This formal testing can include non-execution-based methods (code inspections walkthroughs) execution-based methods (black-box testing, white-box testing). Integration Integration testing is performed to ensure that the modules combine together correctly to achieve a product that meets its specifications. Particular care must be given to the interfaces between modules. The appropriate order of combination must be determined as top-down, bottom-up, or a combination thereof. Product Testing The functionality of the product as a whole is checked against its specifications. Test cases are derived directly from the specifications document. The product is also tested for robustness (error-handling capabilities stress tests). All source code documentation are checked for completeness consistency. Acceptance Testing The Software is delivered to the client, who tests the Software on the actual h/w, using actual data instead of test data. A product cannot be considered to satisfy its specifications until it has passed an acceptance test. Commercial off-the-shelf (or shrink-wrapped) Software usually undergoes alpha beta testing as a form of acceptance test. Maintenance Modified versions of the original product must be tested to ensure that changes have been correctly implemented. Also, the product must be tested against previous test cases to ensure that no inadvertent changes have been introduced. This latter consideration is termed regression testing. Software Process Management The Software process management plan must undergo scrutiny. It is especially important that cost duration estimates be checked thoroughly. If left unchecked, errors can propagate through the development lifecycle amplify in number cost. The cost of detecting fixing an error is well documented is known to be more costly as the system develops. An error found during the operation phase is the most costly to fix. 1.5 Principles of Software Testing Software testing is an extremely creative intellectually challenging task. The following are some important principles [DAV95] that should be kept in mind while carrying Software testing [PRE01] [SUM02]: Testing should be based on user requirements: This is in order to uncover any defects that might cause the program or system to fail to meet the clients requirements. Testing time resources are limited: Avoid redundant tests. It is impossible to test everything: Exhaustive tests of all possible scenarios are impossible, because of the many different variables affecting the system the number of paths a program flow might take. Use effective resources to test: This represents use of the most suitable tools, procedures individuals to conduct the tests. Only those tools should be used by the test team that they are confident familiar with. Testing procedures should be clearly defined. Testing personnel may be a technical group of people independent of the developers. 
Test planning should be done early: This is because test planning can begin independently of coding as soon as the client requirements are set. Test for invalid unexpected input conditions as well as valid conditions: The program should generate correct messages when an invalid test is encountered should generate correct results when the test is valid. The probability of the existence of more errors in a module or group of modules is directly proportional to the number of errors already found. Testing should begin at the module: The focus of testing should be concentrated on the smallest programming units first then expand to other parts of the system. Testing must be done by an independent party: Testing should not be performed by the person or team that developed the Software since they tend to defend the correctness of the program. Assign best personnel to the task: Because testing requires high creativity responsibility only the best personnel must be assigned to design, implement, analyze test cases, test data test results. Testing should not be planned under the implicit assumption that no errors will be found. Testing is the process of executing Software with the intention of finding errors. Keep Software static during test: The program must not be modified during the implementation of the set of designed test cases. Document test cases test results. Provide expected test results if possible: A necessary part of test documentation is the specification of expected results, even though it is impractical. 1.6 Software Testability Its Characteristics Testability is the ability of Software (or program) with which it can easily be tested [PRE01] [SUM02]. The following are some key characteristics of testability: The better it works, the more efficient is testing process. What you see is what you test (WYSIWYT). The better it is controlled, the more we can automate or optimize the testing process. By controlling the scope of testing we can isolate problems perform smarter retesting. The less there is to test, the more quickly we can test it. The fewer the changes, the fewer the disruptions to testing. The more information we have, the smarter we will test. 1.7 Stages in Software Testing Process Except for small programs, systems should not be tested as a single unit. Large systems are built out of sub-systems, which are built out of modules that are composed of procedures functions. The testing process should therefore proceed in stages where testing is carried out incrementally in conjunction with system implementation. The most widely used testing process consists of five stages that are illustrated in Table 1.1. Errors in program components, say may come to light at a later stage of the testing process. The process is therefore an iterative one with information being fed back from later stages to earlier parts of the process. The iterative testing process is illustrated in Figure 1.2 and described below: Unit Testing: Unit testing is code-oriented testing. Individual components are tested to ensure that they operate correctly. Each component is tested independently, without other system components. Module Testing: A module is a collection of dependent components such as an object class, an abstract data type or some looser collection of procedures functions. A module encapsulates related components so it can be tested without other system modules. Sub-system (Integration) Testing: This phase involves testing collections of modules, which have been integrated into sub-systems. 
It is a design-oriented testing is also known as integration testing. Sub-systems may be independently designed implemented. The most common problems, which arise in large Software systems, are sub-systems interface mismatches. The sub-system test process should therefore concentrate on the detection of interface errors by rigorously exercising these interfaces. System Testing: The sub-systems are integrated to make up the entire system. The testing process is concerned with finding errors that result from unanticipated interactions between sub-systems system components. It is also concerned with validating that the system meets its functional non-functional requirements. Acceptance Testing: This is the final stage in the testing process before the system is accepted for operational use. The system is tested with data supplied by the system client rather than simulated test data. Acceptance testing may reveal errors omissions in the systems requirements definition (user-oriented) because real data exercises the system in different ways from the test data. Acceptance testing may also reveal requirement problems where the system facilities do not really meet the users needs (functional) or the system performance (non-functional) is unacceptable. 1.8 The V-model of Testing To test an entire software system, tests on different levels are performed. The V model [FEW99], shown in figure 1.3, illustrates the hierarchy of tests usually performed in software development projects. The left part of the V represents the documentation of an application, which are the Requirement specification, the Functional specification, System design, the Unit design. Code is written to fulfill the requirements in these specifications, as illustrated in the bottom of the V. The right part of the V represents the test activities that are performed during development to ensure that an application corresponding to its requirements. Unit tests are used to test that all functions and methods in a module are working as intended. When the modules have been tested, they are combined and integration tests are used to test that they work together as a group. The unit- and integration test complement the system test. System testing is done on a complete system to validate that it corresponds to the system specification. A system test includes checking if all functional and all non-functional requirements have been met. Unit, integration and system tests are developer focused, while acceptance tests are customer focused. Acceptance testing checks that the system contains the functionality requested by the customer, in the Requirement specification. Customers are usually responsible for the acceptance tests since they are the only persons qualified to make the judgment of approval. The purpose of the acceptance tests is that after they are preformed, the customer knows which parts of the Requirement specification the system satisfies. 1.9 The Testing Techniques To perform these types of testing, there are three widely used testing techniques. The above said testing types are performed based on the following testing techniques: Black-Box testing technique Black box testing (Figure 1.4) is concerned only with testing the specification. It cannot guarantee that the complete specification has been implemented. Thus black box testing is testing against the specification and will discover faultsofomission, indicating that part of the specification has not been fulfilled. 
Black-box testing is used for testing based solely on an analysis of the requirements (specification, user documentation). In black-box testing, test cases are designed using only the functional specification of the software, i.e. without any knowledge of its internal structure; for this reason, black-box testing is also known as functional testing. Black-box tests are performed to assess how well a program meets its requirements, looking for missing or incorrect functionality. Functional testing typically exercises code with valid or nearly valid input for which the expected output is known; this includes concepts such as boundary values. Performance tests evaluate response time, memory usage, throughput, device utilisation and execution time. Stress tests push the system to or beyond its specified limits to evaluate its robustness and error handling capabilities. Reliability tests monitor the system's response to representative user input, counting failures over time to measure or certify reliability.
Black-box testing refers to analysing a running program by probing it with various inputs. This kind of testing requires only a running program and does not make use of source code analysis of any kind. In the security paradigm, malicious input can be supplied to the program in an effort to cause it to break; if the program breaks during a particular test, then a security problem may have been discovered. Black-box testing is possible even without access to binary code, that is, a program can be tested remotely over a network. All that is required is a program running somewhere that is accepting input; if the tester can supply input that the program consumes (and can observe the effect of the test), then black-box testing is possible. This is one reason that real attackers often resort to black-box techniques. Black-box testing is not an alternative to white-box techniques; it is a complementary approach that is likely to uncover a different class of errors than the white-box approaches. Black-box testing tries to find errors in the following categories:
Incorrect or missing functions
Interface errors
Errors in data structures or external database access
Performance errors
Initialization and termination errors.
By applying black-box approaches we produce a set of test cases that fulfil two requirements: test cases that reduce the number of additional test cases needed to achieve reasonable testing, and test cases that tell us something about the presence or absence of classes of errors. The methodologies used for black-box testing are discussed below.
1.9.1.1 Equivalence Partitioning
Equivalence partitioning is a black-box testing approach that splits the input domain of a program into classes of data from which test cases can be produced. An ideal test case single-handedly uncovers a class of errors that might otherwise require many test cases to be executed before the error is detected. Equivalence partitioning therefore tries to outline test cases that identify classes of errors. Test case design for equivalence partitioning is founded on an evaluation of equivalence classes for an input condition [BEI95]. An equivalence class depicts a set of valid or invalid states for the input condition. Equivalence classes can be defined based on the following rules [PRE01]:
If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
If an input condition is Boolean, one valid and one invalid class are defined.
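A minimal sketch of equivalence partitioning, assuming a hypothetical input condition that requires an integer age in the range 18 to 60; the range rule above gives one valid class and two invalid classes, and one representative value is chosen per class.

# Hypothetical input condition: "age" must be an integer in the range 18..60.
# The range yields one valid equivalence class and two invalid ones.
equivalence_classes = {
    "valid: 18 <= age <= 60": 35,   # representative value from the valid class
    "invalid: age < 18": 10,        # representative value below the range
    "invalid: age > 60": 75,        # representative value above the range
}

def age_is_acceptable(age):
    """Stand-in for the program behaviour under test."""
    return 18 <= age <= 60

# One test case per class is enough to cover the partition.
for description, representative in equivalence_classes.items():
    expected = description.startswith("valid")
    assert age_is_acceptable(representative) == expected, description
print("one representative per equivalence class exercised")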
1.9.1.2 Boundary Value Analysis
A great many errors occur at the boundaries of the input domain, and for this reason boundary value analysis (BVA) was developed. Boundary value analysis is a test case design approach that complements equivalence partitioning; BVA also produces test cases from the output domain [MYE79]. The guidelines for BVA are similar to those for equivalence partitioning [PRE01]:
If an input condition specifies a range bounded by values a and b, test cases should be produced with the values a and b and with values just above and just below a and b.
If an input condition specifies a number of values, test cases should be produced that exercise the minimum and maximum values.
Apply the guidelines above to output conditions as well.
If internal program data structures have prescribed boundaries, produce test cases that exercise each data structure at its boundary.
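Continuing the same hypothetical age field (a = 18, b = 60) from the previous sketch, boundary value analysis selects test values at the boundaries and just above and just below them:

# Hypothetical range boundaries a = 18 and b = 60 for the "age" input.
a, b = 18, 60

# BVA test values: the boundaries themselves plus the values just below and just above them.
boundary_values = [a - 1, a, a + 1, b - 1, b, b + 1]

def age_is_acceptable(age):
    """Stand-in for the program behaviour under test."""
    return a <= age <= b

for age in boundary_values:
    expected = a <= age <= b  # expected result derived from the specification
    assert age_is_acceptable(age) == expected, "boundary case failed for age=" + str(age)
print("boundary cases exercised:", boundary_values)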
1.9.2 White-Box Testing Technique
White-box testing (Figure 1.5) is testing against the implementation, since it is based on an analysis of internal logic (design, code, etc.), and it will discover faults of commission, indicating that part of the implementation is faulty. Designing white-box test cases requires thorough knowledge of the internal structure of the software, and therefore white-box testing is also called structural testing. White-box testing is performed to reveal problems with the internal structure of a program; a common goal is to ensure that test cases exercise every path through a program. A fundamental strength that all white-box testing strategies share is that the entire software implementation is taken into account during testing, which facilitates error detection even when the software specification is vague or incomplete. The effectiveness or thoroughness of white-box testing is commonly expressed in terms of test or code coverage metrics, which measure the fraction of code exercised by test cases.
White-box testing involves analysing and understanding source code. Sometimes only binary code is available, but if you decompile a binary to obtain source code and then study that code, this can be considered a kind of white-box testing as well. White-box testing is typically very effective in finding programming and implementation errors in software; in some cases this activity amounts to pattern matching and can even be automated with a static analyser. White-box testing is a test case design approach that employs the control structure of the procedural design to produce test cases. Using white-box testing approaches, the software engineer can produce test cases that:
Guarantee that all independent paths in a module have been exercised at least once
Exercise all logical decisions
Execute all loops at their boundaries and within their operational bounds
Exercise internal data structures to maintain their validity.
Several methodologies are used for white-box testing; some important ones are discussed below.
1.9.2.1 Statement Coverage
The statement coverage methodology aims to design test cases so as to force the execution of every statement in a program at least once. The principal idea governing this methodology is that unless a statement is executed, we have no way of determining whether an error exists in that statement. In other words, the statement coverage criterion [RAP85] is based on the observation that an error existing in one part of a program cannot be discovered if the part of the program containing the error and generating the failure is never executed. However, executing a statement once, and only for a single input value, and observing that it behaves properly for that input is no guarantee that it will behave correctly for all inputs.
1.9.2.2 Branch Coverage
In branch coverage testing, test cases are designed such that the different branch conditions are given true and false values in turn. Branch testing guarantees statement coverage and is therefore a stronger testing criterion than statement coverage [RAP85].
1.9.2.3 Path Coverage
The path coverage based testing strategy requires designing test cases such that all linearly independent paths in the program are executed at least once. A linearly independent path is defined in terms of the control flow graph (CFG) of the program.
1.9.2.4 Loop Testing
Loops are fundamental constructs in virtually all algorithms. Loop testing is a white-box testing technique that focuses exclusively on the validity of loop constructs. Simple loops, concatenated loops, nested loops and unstructured loops are the four types of loops [BEI90], as shown in Figure 1.6.
Simple loops: the following set of tests should be applied to a simple loop, where n is the maximum number of allowable passes through the loop: skip the loop entirely; only one pass through the loop; two passes through the loop; m passes through the loop, where m < n; and n-1, n and n+1 passes through the loop.
Nested loops: Beizer's [BEI90] approach to nested loops is to start at the innermost loop and set all other loops to their minimum values; conduct the simple-loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values; work outward, conducting tests for the next loop while keeping all other outer loops at minimum values and other nested loops at typical values; and continue until all loops have been tested.
Concatenated loops: these can be tested using the approach for simple loops if each loop is independent of the others. However, if the loop counter of loop 1 is used as the initial value for loop 2, the approach for nested loops is to be used.
Unstructured loops: this class of loops should be redesigned to use the structured programming constructs.
1.9.2.5 McCabe's Cyclomatic Complexity
McCabe's cyclomatic complexity [MCC76] of a program defines the number of independent paths in the program. Given a control flow graph G of a program, the cyclomatic complexity V(G) can be computed as
V(G) = E - N + 2
where E is the number of edges and N is the number of nodes in the control flow graph. The cyclomatic complexity value of a program defines the number of independent paths in the basis set of the program and provides a lower bound for the number of test cases that must be conducted to ensure that all statements are executed at least once. Knowing the number of test cases required does not make it easy to derive them; it only gives an indication of the minimum number needed. The following sequence of steps is used to derive the path coverage based test cases of a program: draw the CFG; calculate the cyclomatic complexity V(G); determine the basis set of linearly independent paths; and prepare a test case that forces execution of each path in the basis set.
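The sketch below applies these steps to a small hypothetical function with three simple (single-condition) decisions; for such decisions V(G) = E - N + 2 reduces to the number of decisions plus one, giving four basis paths and therefore at least four test cases.

# Hypothetical function under test: classify an integer score.
def classify(score):
    if score < 0:              # decision 1
        return "invalid"
    if score < 40:             # decision 2
        return "fail"
    if score < 75:             # decision 3
        return "pass"
    return "distinction"

# For a CFG built from simple (single-condition) decisions,
# V(G) = E - N + 2 equals the number of decisions plus one.
decisions = 3
v_g = decisions + 1  # 4 independent paths in the basis set

# One test case per basis path: each input forces a different path through classify().
basis_path_cases = {-5: "invalid", 20: "fail", 60: "pass", 90: "distinction"}
for score, expected in basis_path_cases.items():
    assert classify(score) == expected
print("cyclomatic complexity V(G) =", v_g, "- all basis paths exercised")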
1.9.2.6 Data Flow Based Testing
The data flow testing method chooses test paths of a program based on the locations of definitions and uses of variables in the program. Various data flow testing approaches have been examined [FRA88] [NTA88] [FRA93]. For data flow testing, each statement in the program is assigned a unique statement number, and it is assumed that each function does not alter its parameters or global variables. For a statement with S as its statement number:
DEF(S) = {X | statement S contains a definition of X}
USE(S) = {X | statement S contains a use of X}
If statement S is an if or loop statement, its DEF set is empty and its USE set is based on the condition of statement S. The definition of a variable X at statement S is live at statement S' if there exists a path from statement S to statement S' which does not contain any other definition of X. A definition-use chain (or DU chain) of variable X is of the form [X, S, S'], where S and S' are statement numbers, X is in DEF(S) and in USE(S'), and the definition of X in statement S is live at statement S'. One basic data flow testing strategy is that each DU chain be covered at least once. Data flow testing strategies are helpful for choosing test paths of a program containing nested if and loop statements.
1.9.3 Grey-Box Testing Technique
Grey-box testing [BIN99] designs test cases using both responsibility-based (black-box) and implementation-based (white-box) approaches. To completely test a web application, one needs to combine the two approaches, white-box and black-box testing; grey-box testing is therefore used for testing web-based applications. The grey-box testing approach takes into account all components ma