
Sunday, March 31, 2019

Can a Historian Look at the Past Objectively?

In the following lines, the statement "It is impossible for a historian not to view the past through a moral or ideological lens" is going to be discussed. In order to offer a deeper insight into the topic, it has been considered appropriate to reformulate the statement, turning it into a question and formulating it in a positive way. As well, though we will go back to it later, it has been considered appropriate to set aside the nuance "modern" from the original statement. Thus, this is the result: "Is it possible for a historian not to view the past through a moral and ideological lens?" These modifications, which as we may see do not distort the essence of the original proposal, will make it easier to think about the topic, as they facilitate the task of considering it from a historical and epistemologically-based standpoint, which enables us to have a broader picture of it and of its historical roots.
Anyway, in the conclusion, the original statement will be brought back again, and answered. The first step before going deep into this debate is to define briefly what we understand by a "moral" and an "ideological" lens. To question whether history is written through a moral lens, applied to the case of historical studies, can be understood as questioning whether studies in the field are morally biased or not; that is to say, whether beneath any text it is possible to find some clues about the moral stance of the author. To explain what is understood by "ideological lens", the definition given by professor Michael Hunt has been judged appropriate: "an interrelated set of convictions or assumptions that reduces the complexity of a particular slice of reality to easily comprehensible terms and suggests appropriate ways of dealing with that reality"1. Maybe this definition can seem too broad, but it has been chosen precisely because of that: it allows us to include in this category not only the structures of thought that are usually considered ideologies, such as Marxism or Liberalism, but also different intellectual trends or other theories of knowledge. In other words, ideology is understood as an ordered body of ideas that helps to conduct research and explain processes in the knowledge domain of the social sciences.
Hence, an approach through an ideological lens consists in the analysis and reconstruction of historical events through the referential points given by this structure. So, the discussion about whether it is possible for a historian to see or not to see the past through a moral or ideological lens is about his capacity for keeping his own position and concerns outside the picture of the past that he is offering through his writings. In the end, the issue under discussion here can be identified with the recurring debate in historiography about objectivity and subjectivity in historical research. Therefore, in this essay we will make a lot of references to it. Once the concepts have been focused, everything is ready to continue diving into our issue. As has been seen, the matter that occupies us can be identified with the historiographical discussion of whether objectivity is possible or not. In the following lines, we will bring up the main positions held among scholars around this question, and the shifts that those views have experienced along the last century. This will help us to take some steps towards a solution to it. Traditionally, related to the issue of objectivity and subjectivity, among the theoretical positions of the scholars in the field we could distinguish two currents. On one hand, those who argue that objectivity can be achieved, and that it is mandatory; on the other, those who think that it is not just an unrealistic aim, but an undesirable one. Of course, as always in the social sciences, this distinction must not be intended to be pure and rigid. In the first group, we could find the pioneer of the discipline, Leopold von Ranke, and his line has been followed by other historians such as Trevelyan or David Thomson2. Quoting Ranke, the main position of this group can be summarized in the idea that history is about "simply to show how it really was"3, to study it in its own terms.
They justify this main statement with the idea that there is a need to give primacy to the facts, that they should be the main point of departure of any historical research. Hence, history should be about establishing facts in the first instance, and then identifying connections, but with a total detachment from the object of study, without contaminating historical reality with personal prejudices4. Of course, we can find some variants among this group, as some objectivists will concede some space to speculation or personal interpretation. This is the case of Trevelyan indeed, or of a XIX century intellectual who stated that "facts are sacred, opinion is free"5. But they all share the main standpoint that primacy has to be given to the facts, and that interpretation and historical reconstruction must be perfectly distinguished. On the other hand, we could find a school of historians which can be englobed in a more subjectivist trend. In this group, we can find historians such as Benedetto Croce, in the early XX century, or Carr himself, in the sixties. One of the most enlightening summaries of this view is Croce's statement "all history is contemporary history"6: they understood that the task of the historian was to see the past through the eyes of the present, and to evaluate it from it7. Therefore, they argued that all his ideas, theories and assumptions, his ideological and moral background, were reflected upon the text. In this way, Carr would argue that, although facts are the backbone of historical studies, they are not its essence8.
It can be said that what he was trying to say is that facts are a necessary condition, but not a sufficient one. But this group distinction is not useful any more, since the outbreak in the late XX Century of a new school of thought that shook, and is still shaking, the foundations of historical theory: postmodernism. Despite all the differences of perspective that confronted both trends, they were discussing inside a shared paradigm: modernity. Maybe they didn't agree on the relationship of the historian with his work, on the idea of detachment, or on the primacy-of-the-facts doctrine, but all of them agreed on the idea that the achievable aim of the discipline was related to historical truth. It can be counterargued that they held different conceptions of the concept of historical truth, but they undoubtedly shared the standpoint that their works were referring to an external truth. The outbreak of postmodernism from the late sixties onward broke with this shared paradigm. From the fields of philology and philosophy, the idea that there is no linkage between reality and the works that try to explain it spread to the other branches of knowledge.
Postmodernists, such as the French philosopher Jacques Derrida, regarded that objectivity in historical studies must be understood as an unachievable myth, a mere product of what might be called "the referential illusion"9. Following the path charted by the early postmodernists in the sixties, some historians such as Theodore Zeldin10 accepted these bases, assuming a relativism through the acceptance of the premise that historical texts are not bound to any historical truth, so they are to be seen as merely subjective personal views11. These assumptions were elevated to the category of rights, understanding that every historical explanation should be regarded as a personal tale; therefore, as Zeldin states, everyone has the right to find his own perspective12. As we can see, if we want to preserve the binary distinction of two confronted groups in order to gain a closer picture of the current discussion, we have to reformulate it. Then, on one side we find the post-modernist view, which claims not only that any view in history is biased by moral and ideological concerns, but that everything is ideology and morals: those of the author, who stands behind the tale. On the other, those who believe that reference to historical truth is achievable. Inside this group, we may find some differences about the specific definition of truth or the role of ideology and so on, but this main point unifies them.
Nowadays, it is commonly accepted that some ideological and moral bias is unavoidable13, but among this group it is denied that this prevents reaching certain objective conclusions. So, if we want to stay within the frame of the current polemics in the field, the question about whether it is impossible not to view the past through a moral or ideological lens requires inquiring in which way the historian's pre-assumptions are reflected in his work, to which degree they distort the vision of the past, and whether this enables us to talk about an achievable objective historical truth or not. Until now, we have been focusing the question: first, by clarifying the concepts; later, by having a brief look at the state of the issue among scholars. The latter point leads us to the stance that it is widely accepted that morals and ideology are present in any historical work. There is no one easy answer to what the implications of this are, and we have thought that the best way of understanding it is by revising some of the main elements that take part in historical research. Through a brief study of how history is made, we will be able to understand how the moral and ideological assumptions of the author, his subjectivity, are present in his works. But before that, as they are very related to the question of "How?", it would be interesting to have a brief look at the question of "What is the historian looking for?" and the reasons why it is judged of interest. Of course, the questions of "What?" and "Why history?" would deserve a whole essay. But our aim is not to tackle the topic of the nature of history. Therefore, we will devote just a few lines to these matters.

4.1) What?

The question of what history is was first critically formulated by Ranke, who developed the idea that history's aim was to study it in its own terms, "how it really was"14. The idea was that the historian had to go to the archives and gather facts which would explain how the past was.
So, we can say, he understood that history was a reality that resided in the sources, and that it was within reach for the historian, who could carry out a reconstruction of it. This conception of history explains why some historians from the positivist school, in the late XIX Century, thought that they were near the moment where, once all the archives had been reviewed, definitive historical truth was going to be reached15. The problem is that this seems to be an out-of-focus vision. The past is not out there anymore; it is dead. This has been emphasized by some historians along the XX Century, such as Marwick, who remarks the idea that the past doesn't exist anymore, and that all we have from it are relics and traces through which the historian has to work in order to offer a more or less plausible synthesis of the past16. And this can be complemented with Carr's emphasis on the fact that historical research is made from the present, from a different context and perspective than its object of study17. Though he sometimes comes near to falling into a relativist view often criticised by other scholars, as Elton did18, he has helped to develop among the discipline a valuable concern about how our study of the past is conducted by interests and ways of doing moulded by the present time. So, this leads us to a new idea of history as a discipline: instead of the reconstruction of the past, it is a representation where the role of the historian should be taken into account. The past is dead, and it is not going to be brought to life again. All we have are traces, remains, ruins of it, and the task of the historian is to render explanatory models from them, trying to be faithful to the historical reality they refer to. In a metaphorical way, we can say that history is like the depiction of a landscape painted by a painter with his back to it, guided by the indications of a man in whom he relies.
He doesn't see the landscape, but he can create a more or less faithful image of it: depending on how skilful he is, on his capability of asking accurate questions of his friend, on his ability of deduction, on his experience and so on, he will create a better or worse representation of it. But the representation will never be an exact reproduction of the landscape. First, because that is not the intention: it is a 2-D representation of a 3-D reality. But also because a lot of data will be missed, even if his friend is a good guide, and the painter will have to deduce some of the connections made on the canvas, implying all his capacities of reasoning, deducing, comparing, thinking, always at the service of the, for him fragmented, reality that he is trying to portray. Following this metaphor, a postmodernist could argue that it is pointless to think that there could be a real bond between our blind painter's representation and the landscape. So, he shouldn't try: what he would have to do is to accept that his representation is totally disengaged from the landscape, so all he would be able to do is to create freely his own personal interpretation. But then he wouldn't be accomplishing the task he was initially asked: to reach a proper representation of the landscape. He would create a beautiful and colourful composition, but a meaningless one. Coming back from the metaphor, the historian who is unaware of the object of study, history, cannot be conceived as a writer of history, but of poetry or literature. Hence, post-modernism is not applicable to history, as both are incompatible: the historian who fully accepts those premises cannot be called a historian, as he is rejecting the main foundation of the discipline: to offer a proper representation of history. So, what we can conclude from all this is: a) The historian aspires, at most, to a representation of the past.
b) Hence, the historian, with his moral and ideological beliefs, is present in his work, as he interprets and establishes connections from the present. c) This doesn't mean that the outcome is a mere creation: his construction is supposed to be bound to reality, to the ideal of "how it was". If he rejects that, reducing it to a mere self-expression of personal moral and ideological points of view, he is doing anything but history.

4.2) Why?

This issue will be briefly sketched out, with the main aim of presenting the point of view held along the essay. Why history? Why is historical inquiry of interest? We have found an almost infinite range of points of view along the bibliography selected, from its justification through the explanation of the development of human values through history, to the argument that it is the only way of understanding our contemporary context.19 As we will see in the following lines, the "Why?" held by the historian determines the "How?" of the research carried out. However, there is an essential characteristic that lies under any of the different points of view: interest in history stems from the interest in understanding the human being in society. And from there, different ways of facing this issue enrich the whole. Hence: a) There is not one specific answer to the question "Why history?", but all the answers can be summarized in the study of the past of the human being in society. b) The different ways and perspectives through which it is analysed enrich the whole.

4.3) How?

Once the questions of the "What?" and "Why?" of history have been overviewed, we are reaching the central point of this essay: to see which role is played by the ideological and moral views of the historian in his work, through answering the question of "How is it done?".
Having a look at some of the essential aspects that intervene in the process of writing history will enable us to see how the historian's personal concerns are reflected in his work, and how this happens. First, a brief insight into the relationship between the historian and the facts and sources20. Carr defines it through a comparison with fishing: "Facts are like fish swimming about in a vast and sometimes inaccessible ocean; and what the historian catches will depend on what part of the ocean he chooses to fish in and what tackle he chooses to use - these two factors being determined by the kind of fish he wants to catch."21 What he is trying to explain is how the historian is not a mere passive processor of data, but an active agent from the very starting point of selecting the information on which he is going to base his research. But the question is: on what basis does he make the selection? On the basis of his own concerns? Or on the basis of the preferences of history? That is to say: are the facts he looks for determined by his own interests or by what history demands? As we have argued previously, history is about a representation of the past, where the past is the main character, the object of study. So, it seems that it would make sense to assert that the questions that the historian asks of the raw materials should be bound faithfully to the preferences of history. Of course, at a first stage, when he barely knows anything about the topic he is going to study, his research will be driven by questions raised in the present, related to his concerns. But this will change progressively as he makes progress. Through inquiring of the raw sources, to make them talk22, the historian comes up with more questions, but this time formulated not on the basis of the present but of the "foreign country"23 which is being re-visited.
And by keeping up this process, he manages to go deeper into the past, to understand better the people who lived there, the processes that affected their lives. So, in theory, the goal set by Ranke of getting to know the past in its own terms24 appears to be possible. But when we examine any work of history, even those considered to be the best ones, we discover that, indeed, this doesn't happen. Any history book or paper can be classified on an ideological or moral spectrum due to its conclusions. In order to understand properly why this happens, in the next lines we are going to proceed to an insight into what has been called the nature of the historian. Through this, we will go back to some of the issues which have just been covered. So, in the following lines we are going to deal with the issue of the nature of the historian, in what intends to be an invitation for the reader to think about who the historian is and how his moral and ideological point of view affects his historical production. We will focus on three aspects, which are those which have been seen to be the most problematic: context, ideology as framework, and categories as a vehicle for indirect judgement. As aforementioned, the historian is not a machine, but a human who has his own beliefs and experiences emotions, who is part of his society, so he shares the cultural background of his epoch and is affected by academic theories or trends. As Jordanova argues: "all historians have ideas already in their minds when they study primary materials - models of human behaviour, established chronologies, assumptions about responsibility, notions of identity and so on"25. On the other hand, the historian is a professional devoted to the study of the past, through the construction of explanatory models of it in the most accurate way possible. Hence, we can detect the dual reality of the historian, which causes tensions.
Let's have a look at how all this corpus of premises affects the historian's craft. First of all, we have to bear in mind that the historian is part of a specific time and society that constrains him when he creates his explanatory models about the past. For example, a historian in the sixties would be attracted by schools such as the cliometricians in the US or the Annales in France, based on theories that championed more integration of the discipline with other social sciences such as sociology or economics, as some of them understood that this was the way of reaching certain and objective conclusions26. This was translated into the predominance of a history based on the processing of data, quantitative perspectives of the past, analysis rather than narrative, predominance of the social perspective rather than the study of individuals, and so on.27 Part of these schools were Emmanuel Le Roy Ladurie and Lawrence Stone, who argued respectively that "history that is not quantifiable cannot claim to be scientific"28 and that quantification was the way of pushing back widely spread historical myths29. But this conception wrecked partly because of its own exhaustion, partly because new trends surpassed it, such as post-modernist trends (that emphasized the study of the unconscious instead of data at a social level), radical historians (that argued for a more narrative history instead of analysis and promoted new objects of study, such as what they understood as the hidden and oppressed of history)30, and so on. And with this change of paradigm, a lot of supporters of the quantitative view changed their minds, as is the case of both Le Roy Ladurie and Stone.
The former wrote in the sixties a book about the collective imagery in a French medieval village; the latter is well known for having written a high-impact paper claiming for the revival of narrative31. As we can see, if the context where the ideological premises of the historian have been built changes, the way of understanding them also changes. In the end, what changes is the anthropological conception of who and how the human being is. This is the case of Le Roy Ladurie: his idea of the human as a being constrained by the means of production, rooted in a materialist view of the world, gave way to a new vision where the un-material (imaginaries and so on) was judged as more relevant in order to explain his anthropological basis. Hence, we can see that the context may influence heavily the ideological premises of the historian, and with a shift in it, his way of pondering the past changes consequently. Particularly important is the case of that historiography explicitly based on an ideology. Maybe the most remarkable case is Marxist historiography, which kept a strong presence in the field during almost the whole XX century. Great historians such as E.P. Thompson, Christopher Hill or Eric Hobsbawm didn't hesitate in defending Marxism as an especially useful point of departure for historical research32. As the confessed Marxists they were, their studies focused on topics related to the world of labour from a materialistic perspective, and dealt with categories and concepts such as "bourgeois", "class" and "class struggle", "means of production", full of Marxist implications. The use of categories in history is another example of how present the historian's moral and ideological point of view is in his work. Categories are not neutral, but full of implications. As we have seen, Marxist historians are predisposed to explain history through Marxist categories.
But we can think of an infinite range of examples: categories such as "democratic" or "fascist", and so on, are often used as a way of conveying moral judgements. Hence, through the mere choice of categories, the historian is, though implicitly, judging. Facing this picture, it could seem that post-modernist assumptions about the impossibility of getting over one's point of view and reaching historical truths are certain. To counterargue this conclusion, the concept of objectivity encouraged by Thomas Haskell has been found (as Evans also does)33 to be very useful; it regards objectivity more as a quality of the historian himself than of the text: "ascetic self-discipline that enables a person to do such things as abandon wishful thinking, assimilate bad news, discard pleasing interpretations that cannot pass elementary tests of evidence and logic, and, most important of all, suspend one's own perceptions long enough to enter sympathetically into the alien and possibly repugnant perspectives of rival thinkers."34 In the end, we could say that writing good history, capable of reaching historical truth, is about being able to transcend one's point of view and subordinate it to the historical reality faced along the study of the sources. It could be said that it is a matter of primacy, of being able to give primacy to the history rather than to one's position. Let's examine this with some of the examples aforementioned. We have mentioned the case of Hobsbawm. As has been said, he developed a historical analysis from a Marxist point of view.
But when we say that, we are not assuming that he was fitting his conclusions into those premises, forcing reality to fit into his ideological point of view. Indeed, he was able to reach conclusions which challenged the traditional Marxist point of view, as happens when he asserts that macro-social analysis has difficulties understanding the nature of revolutions, by exaggerating structure and devaluing situation, as they can only be explained historically, focusing on the specific, and not theoretically, through generalisations35. Or when he writes about nationalism in a much more cultural way than one just based on Marxist social theory and framework36. Marxist theory guided his historical inquiry, but he was not closed to re-interpreting it if the sources demanded it, and he was open as well to considering historical problems without absolutizing any kind of historical causes or perspectives. His capacity for considering all the points of view, for not closing his historical inquiry to his ideological preferences, and for giving primacy to the historical sources rather than to his personal ideological premises, makes his work well-grounded until today37. A counter-example would be the case of Carr, whose History of Soviet Russia has often been criticised for overlooking Stalinist repression38. And it is a precise critique: in what was said to be an accurate account of the development of the Soviet state, he disregarded that crucial point due to a strong ideological bias. Or the case of some ideologically-motivated gender history, which absolutizes ahistorical concepts, such as patriarchy, fitting history into its predetermined framework39.
Another example are Foucault's pseudo-historical writings, which are more a kind of philosophical works based on historical examples, where theory clearly outweighs historical rigour.40 In these cases, the primacy of history is not preserved; far from that, it is roughly violated, as history is placed at the service of the moral and ideological framework of the writer. We have mentioned as well the issue of categories as a way of implicit moral and ideological judgement. The historian will never get rid of them, but he can perfect his ability to represent history accurately through them. Let's bring back the example of the category "fascist". If the historian is able to understand it properly, and is conscious of all its implications, he will be able to make an appropriate use of it, according to historical standards. Then, if he remains faithful to the sources, he will be in the position of identifying fascist movements, or fascist behaviours, as they were historically understood in the time studied. It will, for example, help him to differentiate it from other kinds of authoritarian ideologies, a point which is often confused. And this is the way that objectivity should be understood: as a capacity of detachment that allows the historian to overcome a fully present-minded and ideological interpretation. And it departs from the assertion that primacy must be given to the demands of history, to the guidance of the sources. A way of assessing if this has been achieved is through the test of time: the validity of its conclusions over a wide span of time. Quoting again Tosh, this is what made him assert that Hobsbawm's Age of Revolution is still unsurpassed41, even when Marxism is no longer seen as a reliable framework of interpretation. All of this is achievable only if this principle of objectivity is assumed. But it is just a necessary condition, not a sufficient one. To accomplish it depends as well on the skill of the historian.
But without it, no matter how skilful the historian is, his work will not stand the test of time. Along this essay, we have revised some polemic aspects of the historian's relationship with his object of study. First of all, after fixing definitions of morals and ideology, we have revised some of the attitudes across the historiography about our topic. Then, through answering the questions "What?", "Why?" and "How?", we have explored the relationship between the historian and history, between his personal views and his work.

Biological Effects Of Radiation

Radiation describes a process in which energetic particles or waves travel through a medium or space. There are two distinct types of radiation: ionizing and non-ionizing. The word radiation is commonly used in reference to ionizing radiation only (having sufficient energy to ionize an atom), but it may also refer to non-ionizing radiation, for example radio waves or visible light. The energy radiated travels outward in straight lines in all directions from its source. This geometry naturally leads to a system of measurement that is equally applicable to all types of radiation. Both ionizing and non-ionizing radiation can be harmful to organisms and can result in changes to the natural environment.

Radiation with sufficiently high energy can ionize atoms. Most often, this occurs when an electron is stripped from an electron shell, which leaves the atom with a net positive charge. Because cells are made of atoms, this ionisation can result in cancer. An individual cell is made of trillions of atoms. The probability of ionizing radiation causing cancer is dependent upon the dose rate of the radiation and the sensitivity of the organism being irradiated. Alpha particles, beta particles, gamma and X-ray radiation, and neutrons may all be accelerated to a high enough energy to ionize atoms. Radiation includes alpha particles, beta particles, and gamma rays.

Alpha particle: Alpha decay is the spontaneous process of emission of an alpha particle from a radioactive nucleus. An alpha particle is emitted by a heavy nucleus. The nucleus, called the parent nucleus, has a very large internal energy and is unstable. An alpha particle is a helium nucleus having two protons and two neutrons.
When the two electrons orbiting around the nucleus of a helium atom are knocked out completely, we have a doubly charged helium atom, known as an alpha particle.

Beta particle: A beta particle is a fast-moving electron. The spontaneous process of emission of a beta particle from a radioactive nucleus is called beta decay. Beta decay is generally of three types: beta-minus, beta-plus, and electron capture.

Beta-minus: A beta-minus particle is like an electron. It is surprising that, although a nucleus contains no electrons, a nucleus can emit an electron. A neutron inside the nucleus is converted into a proton and an electron-like particle, and this electron-like particle is emitted by the nucleus during beta decay. In beta-minus decay, a neutron in the nucleus is converted into a proton and a beta-minus particle is emitted, so that the ratio of neutrons to protons decreases and hence the nucleus becomes stable.

Beta-plus: In beta-plus decay, a proton is converted into a neutron and a positron is emitted, if a nucleus has more protons than neutrons.

Electron capture: In electron capture, the nucleus absorbs one of the inner electrons revolving around it, and hence a nuclear proton becomes a neutron and a neutrino is emitted. Electron capture is comparable with positron emission, as both processes lead to the same nuclear transformation. However, electron capture occurs more frequently than positron emission in heavy elements. This is because the orbits of electrons in heavy elements have small radii, and hence the orbital electrons are very close to the nucleus.

Gamma rays: Gamma rays are high-energy packets of electromagnetic radiation. Gamma radiation consists of high-energy photons, which do not have any charge and whose rest mass is zero. Gamma decay is the spontaneous process of emission of a high-energy photon from a radioactive nucleus. When a radioactive nucleus emits a beta particle, the daughter nucleus may be left in an excited, higher energy state.
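The decay processes described above can be summarised as generic nuclear equations. This is a sketch in standard notation: X and Y stand for placeholder parent and daughter nuclides of mass number A and atomic number Z, and the neutrino/antineutrino terms follow standard nuclear physics conventions rather than being stated in the text:

```latex
% Generic decay equations; X = parent nuclide, Y = daughter nuclide (placeholders)
\begin{align*}
  \text{alpha decay:}     &\quad {}^{A}_{Z}\mathrm{X} \;\to\; {}^{A-4}_{Z-2}\mathrm{Y} + {}^{4}_{2}\mathrm{He} \\
  \text{beta-minus decay:} &\quad {}^{A}_{Z}\mathrm{X} \;\to\; {}^{A}_{Z+1}\mathrm{Y} + e^{-} + \bar{\nu} \\
  \text{beta-plus decay:}  &\quad {}^{A}_{Z}\mathrm{X} \;\to\; {}^{A}_{Z-1}\mathrm{Y} + e^{+} + \nu \\
  \text{electron capture:} &\quad {}^{A}_{Z}\mathrm{X} + e^{-} \;\to\; {}^{A}_{Z-1}\mathrm{Y} + \nu
\end{align*}
```

Note that beta-plus decay and electron capture produce the same daughter nuclide, which is why the text describes them as comparable processes.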
This excited daughter nucleus then emits gamma rays, so it is clear that the emission of gamma rays follows the emission of an alpha or beta particle.

Non-ionizing radiation

The effects of non-ionizing forms of radiation on living tissue have only recently been studied. Instead of producing charged ions when passing through matter, this electromagnetic radiation has sufficient energy only to change the rotational, vibrational, or electronic valence configurations of molecules and atoms. Nevertheless, different biological effects are observed for different types of non-ionizing radiation.

Radio waves: Radio waves, whose wavelengths range from more than 10^4 m down to 0.1 m, are the result of charges accelerating through conducting wires. They are generated by electronic devices such as LC oscillators and are used in radio and television communication systems.

Infrared rays: Infrared radiation has wavelengths ranging from about 0.3 m down to 10^-4 m and is also generated by electronic devices. Infrared radiation energy absorbed by a substance appears as internal energy, because the energy agitates the object's atoms, increasing their vibrational or translational motion, which results in a temperature increase. Infrared radiation has practical and scientific applications in many areas, including physical therapy, infrared photography, and vibrational spectroscopy.

Ultraviolet radiation: Ultraviolet radiation covers wavelengths ranging from approximately 4×10^-7 m to 6×10^-10 m. The sun is an important source of ultraviolet light, which is the main cause of sunburn. Sunscreen lotions are transparent to visible light but absorb a large percentage of UV light. Ultraviolet rays have also been implicated in the formation of cataracts.

Most of the UV light from the sun is absorbed by ozone molecules in the earth's upper atmosphere, in a layer called the stratosphere.
This ozone shield converts lethal high-energy ultraviolet radiation to infrared radiation, which in turn warms the stratosphere.

X-rays: X-rays have wavelengths ranging from approximately 10^-8 m to 10^-12 m. The most common source of X-rays is the stopping of high-energy electrons upon bombarding a metal target. X-rays are used as a diagnostic tool in medicine and as a treatment for certain forms of cancer. Because X-rays can damage or destroy living tissues and organisms, care must be taken to avoid unnecessary exposure or over-exposure. X-rays are also used in the study of crystal structure, because X-ray wavelengths are comparable to the atomic separation distances in solids.

Electromagnetic radiation: The wave nature of electromagnetic radiation explains various phenomena such as interference, diffraction, and polarization. However, the wave nature of electromagnetic radiation could not explain phenomena like the photoelectric effect and the Compton effect. Cathode rays consist of negatively charged particles called electrons, which are constituents of the atom and hence of matter. According to the quantum concept of radiation, light waves, radio waves, X-rays, microwaves, and so on are assumed to carry energy in packets or bundles known as photons or quanta.

Biological effects of radiation

Radiation has many hazardous effects on health and the body. Biological effects of radiation can typically be divided into two categories. The first category consists of exposure to high doses of radiation over short periods of time, producing acute or short-term effects. The second category represents exposure to low doses of radiation over an extended period of time, producing chronic or long-term effects.

High doses (acute): High doses tend to kill cells, while low doses tend to damage or change them. High doses can kill so many cells that tissues and organs are damaged.
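What separates the ionizing bands above (X-rays, gamma) from the non-ionizing ones (radio, infrared, visible) is photon energy, E = hc/λ. A small sketch makes the contrast concrete; the constants are standard physical values and the sample wavelengths are illustrative:

```python
# Hedged sketch: photon energy E = h*c / wavelength, expressed in
# electron-volts, for comparing the wavelength bands listed above.

H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light in vacuum, m/s
EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_m):
    """Energy of a single photon of the given wavelength, in eV."""
    return H * C / wavelength_m / EV

# A 1 m radio wave carries ~1e-6 eV per photon: far too weak to ionize.
print(photon_energy_ev(1.0))
# Visible light (~550 nm) carries a couple of eV per photon.
print(photon_energy_ev(550e-9))
# A 0.1 nm X-ray photon carries ~1e4 eV: easily enough to strip electrons.
print(photon_energy_ev(1e-10))
```

A typical ionization energy is on the order of 10 eV, which is why only the short-wavelength end of the spectrum counts as ionizing.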
This in turn may cause a rapid whole-body response often called acute radiation syndrome (ARS).

Low doses (chronic): Low doses spread out over long periods of time do not cause an immediate problem to any body organ. The effects of low doses of radiation occur at the level of the cell, and the results may not be observed for many years.

Although we tend to associate high doses of radiation with catastrophic events such as nuclear weapons explosions, there have been documented cases of individuals dying from exposure to high doses of radiation resulting from tragic events.

High-dose effects of radiation include skin burns, hair loss, sterility, and cataracts.

Effects on the skin include erythema (reddening, like sunburn), dry desquamation (peeling), and moist desquamation (blistering). Skin effects are more likely to occur with exposure to low-energy gamma, X-ray, or beta radiation, since most of the energy of such radiation is deposited at the skin surface. The dose required for erythema to occur is comparatively high, in excess of 300 rad. Blistering requires a dose in excess of 1,200 rad.

Hair loss, also called epilation, is similar to the skin effects and can occur after acute doses of about 500 rad.

Sterility can be temporary or permanent in males, depending upon the dose. To produce permanent sterility, a dose in excess of 400 rad to the reproductive organs is required.

Cataracts (a clouding of the lens of the eye) appear to have a threshold of about 200 rad. Neutrons are especially effective in producing cataracts, because the eye has a high water content, which is particularly effective in stopping neutrons.

High-dose effects:

Dose (rad) — Effect observed
15-25 — Blood count changes in a population.
50 — Blood count changes in an individual.
100 — Vomiting (threshold).
150 — Death (threshold).

Categories of effects of exposure to low doses of radiation

There are three general categories of effects resulting from exposure to low doses of radiation.
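The dose-effect table above is essentially a threshold lookup, which can be sketched as follows (illustrative only, using the thresholds from the table; not a medical tool):

```python
# Hedged sketch: looking up the most severe acute whole-body effect from the
# thresholds tabulated above. Doses are in rad; entries are sorted from the
# highest threshold down.

ACUTE_EFFECTS = [
    (150, "death (threshold)"),
    (100, "vomiting (threshold)"),
    (50,  "blood count changes in an individual"),
    (15,  "blood count changes in a population"),
]

def worst_acute_effect(dose_rad):
    """Return the most severe tabulated effect whose threshold the dose meets."""
    for threshold, effect in ACUTE_EFFECTS:
        if dose_rad >= threshold:
            return effect
    return "no tabulated acute effect"

print(worst_acute_effect(60))   # blood count changes in an individual
print(worst_acute_effect(5))    # no tabulated acute effect
```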
These are:

Genetic: The effect is suffered by the offspring of the individual exposed.

Somatic: The effect is primarily suffered by the individual exposed. Since cancer is the primary result, it is sometimes called the carcinogenic effect.

In-utero: Some mistakenly consider this to be a genetic consequence of radiation exposure, because the effect, suffered by a developing embryo or fetus, is observed after birth. However, this is actually a special case of the somatic effect, since the embryo or fetus is the one exposed to the radiation.

Radiation risk: The approximate risks for the three principal effects of exposure to low levels of radiation are:

Genetic effect: The risk from 1 rem of radiation exposure to the reproductive organs is approximately 50 to 1,000 times less than the spontaneous risk for various anomalies.

Somatic effect: For radiation-induced cancer, the risk estimate is that of developing any type of cancer. However, not all cancers are associated with exposure to radiation. The risk of dying from radiation-induced cancer is about one half the risk of getting the cancer.

In utero: The spontaneous risks of fetal abnormalities are about 5 to 30 times greater than the risk from exposure to 1 rem of radiation. However, the risk of childhood cancer from exposure in utero is about the same as the risk to adults exposed to radiation.

Linear no-threshold risk model: The general consensus among experts is that radiation risk is described by a linear, no-threshold model. This model is accepted by the NRC, since it appears to be the most conservative.

Linear: An increase in dose results in a proportional increase in risk.

No-threshold: Any dose, no matter how small, produces some risk.

The risk does not start at zero, because there is some risk of cancer even with no occupational exposure. Exposure to radiation is not a guarantee of harm.
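The linear, no-threshold idea can be sketched in a few lines. The slope and baseline used below are made-up illustrative values, not regulatory figures:

```python
# Hedged sketch of the linear no-threshold (LNT) model described above:
# excess risk is proportional to dose ("linear"), added on top of a nonzero
# baseline risk that exists even with no occupational exposure.

BASELINE_RISK = 0.40    # illustrative lifetime cancer risk at zero occupational dose
RISK_PER_REM = 0.0005   # illustrative (assumed) excess risk per rem

def lnt_total_risk(dose_rem):
    """Total risk under LNT: baseline plus a dose-proportional excess."""
    return BASELINE_RISK + RISK_PER_REM * dose_rem

# "Linear": doubling the dose doubles the excess risk above baseline.
excess_1 = lnt_total_risk(1.0) - BASELINE_RISK
excess_2 = lnt_total_risk(2.0) - BASELINE_RISK
print(round(excess_2 / excess_1, 6))   # ~2.0

# "No-threshold": even a tiny dose is modeled as adding some risk.
print(lnt_total_risk(0.001) > BASELINE_RISK)   # True
```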
However, because of the linear, no-threshold model, more exposure means more risk, and there is no dose of radiation so small that it will not have some effect.

EFFECTS OF RADIATION ON CELLS

Ionizing radiation absorbed by human tissue has enough energy to remove electrons from the atoms that make up the molecules of the tissue. When an electron that was shared by two atoms to form a molecular bond is dislodged by ionizing radiation, the bond is broken and the molecule falls apart. This is a basic model for understanding radiation damage. When ionizing radiation interacts with a cell, it may or may not strike a critical part of the cell. We consider the chromosomes to be the most critical part of the cell, since they contain the genetic information and instructions required for the cell to perform its function and to make copies of itself for reproduction. Also, there are very effective repair mechanisms constantly at work which repair cellular damage, including chromosome damage.

Uses of radiation: Nuclear physics applications are extremely widespread in manufacturing, medicine, and biology. We present a few of these applications and the underlying theories supporting them.

Tracing: Radioactive tracers are used to track chemicals participating in various reactions. One of the most valuable uses of radioactive tracers is in medicine. For example, iodine, a nutrient needed by the human body, is obtained largely through the intake of iodized salt and seafood.

Radiation therapy: Radiation causes the most damage to rapidly dividing cells. Therefore, it is useful in cancer treatment, because tumor cells divide extremely rapidly. Several mechanisms can be used to deliver radiation to a tumor. In some cases, a narrow beam of X-rays or radiation from a source such as 60Co is used. In other situations, thin radioactive needles called seeds are implanted in the cancerous tissue.
The radioactive isotope 131I is used to treat cancer of the thyroid.

Blackbody radiation: An object at any temperature emits electromagnetic waves in the form of thermal radiation from its surface. The characteristics of this radiation depend on the temperature and properties of the object's surface. Thermal radiation originates from accelerated charged particles in the atoms near the surface of the object; those charged particles emit radiation much as small antennas do. The thermally agitated particles can have a distribution of energies, which accounts for the continuous spectrum of radiation emitted by the object. The basic problem was in understanding the observed distribution of wavelengths in the radiation emitted by a black body. A black body is an ideal system that absorbs all radiation incident on it. The electromagnetic radiation emitted by a black body is called blackbody radiation.

Radiation damage: Electromagnetic radiation is all around us in the form of radio waves, microwaves, light waves, and so on. The degree and type of damage depend on several factors, including the type and energy of the radiation and the properties of the matter it passes through.

Radiation damage in biological organisms is primarily due to ionization effects in cells. A cell's normal operation may be disrupted when highly reactive ions are formed as the result of ionizing radiation. Large doses of radiation are especially dangerous, because damage to a great number of molecules in a cell may cause the cell to die.

In biological systems, it is common to separate radiation damage into two categories: somatic damage and genetic damage. Somatic damage is that associated with any body cell except the reproductive cells; it can lead to cancer or can seriously alter the characteristics of specific organisms. Genetic damage affects only reproductive cells. Damage to the genes in reproductive cells can lead to defective offspring.
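One concrete handle on the blackbody spectrum described above is Wien's displacement law, λ_max = b/T, which gives the wavelength at which the emission peaks. A short sketch using the standard value of Wien's constant:

```python
# Hedged sketch: Wien's displacement law, lambda_max = b / T, for the
# blackbody radiation discussed above.

WIEN_B = 2.898e-3   # Wien's displacement constant, m*K

def peak_wavelength_m(temperature_k):
    """Wavelength at which a blackbody at temperature T emits most strongly."""
    return WIEN_B / temperature_k

# The sun's surface (~5800 K) peaks near 500 nm, in the visible band.
print(round(peak_wavelength_m(5800) * 1e9))   # ~500 (nm)

# A room-temperature object (~300 K) peaks deep in the infrared (~10 um),
# which is why warm objects glow in IR but not in visible light.
print(round(peak_wavelength_m(300) * 1e6, 1))   # ~9.7 (micrometres)
```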
It is important to be aware of the effects of diagnostic treatments, such as X-rays and other forms of radiation exposure, and to balance the crucial benefits of treatment against the damaging effects.

Damage caused by radiation also depends on the radiation's penetrating power. Alpha particles cause extensive damage but penetrate only to a shallow depth in a material, due to their strong interaction with other charged particles. Neutrons do not interact via the electric force and hence penetrate deeper, causing significant damage. Gamma rays are high-energy photons that can cause severe damage but often pass through matter without interacting. For example, a given dose of alpha particles causes about ten times more biological damage than an equal dose of X-rays. The RBE (relative biological effectiveness) factor for a given type of radiation is the number of rads of X-radiation or gamma radiation that produces the same biological damage as 1 rad of the radiation being used.

Radiation detectors: Particles passing through matter interact with the matter in several ways. The particles can, for example, ionize atoms, scatter from atoms, or be absorbed by atoms. Radiation detectors exploit these interactions to allow a measurement of the particle's energy, momentum, or charge, and sometimes the very existence of the particle, if it is otherwise difficult to detect. Various devices have been developed for detecting radiation. These devices are used for a variety of purposes, including medical diagnosis, radioactive dating measurements, measuring background radiation, and measuring the mass, energy, and momentum of particles created in high-energy nuclear reactions.

EFFECT OF RADIATION ON HUMANS

A very small amount of ionizing radiation could trigger cancer in the long term, even though it may take decades for the cancer to appear. Ionizing radiation (X-rays, radon gas, radioactive material) can cause leukemia and thyroid cancer.
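The RBE definition above amounts to a simple weighting: biologically effective dose (in rem) = absorbed dose (in rad) × RBE. A sketch, using the "about ten times" figure from the text for alpha particles and assumed typical textbook values for the rest:

```python
# Hedged sketch of the RBE weighting defined above. The alpha value follows
# the "about ten times more biological damage" statement in the text; the
# other factors are assumed typical textbook values, not from the essay.

RBE = {
    "x-ray": 1.0,    # reference radiation, by definition
    "gamma": 1.0,
    "beta": 1.0,     # assumed typical value
    "alpha": 10.0,
}

def dose_equivalent_rem(dose_rad, radiation_type):
    """Biologically weighted dose: absorbed dose in rad times the RBE factor."""
    return dose_rad * RBE[radiation_type]

# An equal absorbed dose of alpha particles does ~10x the damage of X-rays:
print(dose_equivalent_rem(1.0, "alpha"))   # 10.0
print(dose_equivalent_rem(1.0, "x-ray"))   # 1.0
```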
There is no doubt that radiation can cause cancer, but there is still a question of what level of radiation it takes to do so. Rapidly dividing cells are more susceptible to radiation damage. Examples of sensitive cells are blood-forming cells (bone marrow), the intestinal lining, hair follicles, and fetuses. Hence, these develop cancer first.

If a person is exposed to radiation, especially a high dose, there are predictable changes in the body that can be measured. The number of blood cells, the frequency of chromosome aberrations in the blood cells, and the amount of radioactive material in urine are examples of biomarkers that can indicate whether one has been exposed to a high dose. If you do not show early biological changes on these measurements, the radiation exposure will not pose an immediate threat to you.

Radiation poisoning

Radiation poisoning, radiation sickness, or a creeping dose is a form of damage to organ tissue caused by excessive exposure to ionizing radiation. The term is generally used to refer to acute problems caused by a large dose of radiation in a short period, though this has also occurred with long-term exposure. The clinical name for radiation sickness is acute radiation syndrome, as described by the CDC. A chronic radiation syndrome does exist but is very uncommon; it has been observed among workers in early radium source production sites and in the early days of the Soviet nuclear program. A short exposure can result in acute radiation syndrome; chronic radiation syndrome requires a prolonged high level of exposure.

Radiation exposure can also increase the probability of developing some other diseases, mainly cancerous tumors and genetic damage. These are referred to as the stochastic effects of radiation, and they are not included in the term radiation sickness.

Radiation exposure

Radiation is energy that travels in the form of waves or high-speed particles. It occurs naturally in sunlight.
Man-made radiation is used in X-rays, nuclear weapons, nuclear power plants, and cancer treatment.

If you are exposed to small amounts of radiation over a long time, it raises your risk of cancer. It can also cause mutations in your genes, which you could pass on to any children you have after the exposure. A lot of radiation over a short period, such as from a radiation emergency, can cause burns or radiation sickness. Symptoms of radiation sickness include nausea, weakness, hair loss, skin burns, and reduced organ function. If the exposure is large enough, it can cause premature aging or even death.

Saturday, March 30, 2019

Need for Structural Transformation Through E-business: Business Essay

There are various theories on the subject which enrich our modern-day understanding of it and help us appreciate how and why organisations strategise their decisions. How does Coca-Cola know that its distinctiveness lies in adding various lines of beverages, such as energy drinks, sports drinks, and health drinks, when others are just making aerated drinks? Or how does Estée Lauder, through its various marketed brands, cater to diverse segments: the original Estée Lauder for aged women, Clinique for middle-aged women, M.A.C. for youthful hipsters, Aveda for aromatherapy enthusiasts, and Origins for eco-conscious consumers?

Michael Porter's acclaimed Five Forces of Competitive Position model offers a simple perspective for evaluating and analysing the competitive strength and positioning of a corporation or business organisation.

Let us understand each force and its implication for the strategic planners in the case of FedEx.

Industry competitors: This refers to the existing players in an industry. A firm may enjoy a first-mover advantage, but sooner or later other firms arrive and pose a direct threat to one's profits. In the case of FedEx, UPS was a competitor, though until 1982 UPS was not directly competing in the overnight delivery segment. And so the rules of the game need to be manoeuvred keeping in view what other firms are doing in the industry.

Potential entrants: The threat of a new firm entering the industry is greater when it is relatively easy for an organisation to enter the industry; in other words, when entry barriers are low.
An organisation planning to enter the industry will contemplate various factors, such as the loyalty of customers to existing products, how soon economies of scale can be achieved, whether it has access to suppliers, and whether it would face government legislation discouraging it from, or encouraging it towards, entering the industry.

FedEx had a lot of first-mover advantages. It was the first company to give its drivers hand-held scanners for sending alerts to customers for each pick-up and delivery. In 1994, it became the first big transport company to launch a website that included tracking and tracing capabilities. But by 2000, when DHL, TNT and UPS were fierce competitors, these advantages were lost, as customers took all these facilities as given and did not pay any incremental fee. Thus, as more firms enter the market, the dynamics change, and this calls for a continuous stream of innovation and realignment of corporate strategy, which has become the hallmark of FedEx over the years. By integrating its services and managing the supply chain of its customers, it generated customer loyalty and increased customer switching costs. Thus FedEx managed effectively to raise the barriers to entry for competitors.

Threat of substitute products or services: The availability of products and services outside the common product boundaries raises the likelihood of customers switching to alternatives. Are there alternative products that clients can buy instead of your product that provide the same benefits at a lesser price? In the case of FedEx, this threat was low at the time it entered the market.
There was no other way to make time-sensitive documents reach their destination overnight in a reliable fashion.

Bargaining power of buyers: The bargaining power of clients, also expressed as the market of outputs, is the ability of customers to put the firm under pressure; it also governs the customers' sensitivity to price changes. Strategic planners at FedEx understood this from the beginning. The underlying philosophy at FedEx was that whenever businesses grow, there is always movement of physical goods. This shows that the management group at FedEx took cognisance of customer sensitivity and customer power. It always laid emphasis on speed and reliability in moving time-sensitive documents.

Bargaining power of suppliers: Suppliers are critical to the success of a firm. Raw materials are required to complete the finished product of the organisation, so suppliers can have great power. This power arises:

If they are the main supplier, or one of the scarce suppliers, of that particular raw material.

If it is relatively costly for the company to move from one supplier to another (also known as the switching cost).

If there are no substitutes for their product.

FedEx made judicious decisions in selecting its technology partners. Whether it was the tie-up with COSMOS or the deal with Netscape in 1999, it leveraged its IT partners to the fullest.

The value chain is described by Dagmar Recklies in the following words: "Value chain analysis as described by Porter refers to the activities within and around a company, and links them to an analysis of the competitive strength of an organization. It thus assesses the value each particular activity brings to the organization's products or services."

D.K.
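The five forces walked through above can be summarised in a simple scoring sketch. The scores below are illustrative assumptions for the early-FedEx situation described in the essay (only the low threat of substitutes is stated in the text); they are not figures from the case:

```python
# Illustrative sketch only: scoring each of Porter's five forces on a 1-5
# scale (5 = strong competitive pressure on the firm). All scores are
# assumptions for early FedEx, not data from the case study.

FIVE_FORCES = {
    "industry competitors":          2,  # UPS not yet in overnight delivery
    "potential entrants":            2,  # first-mover advantages raised barriers
    "threat of substitutes":         1,  # low, per the text: no overnight alternative
    "bargaining power of buyers":    3,
    "bargaining power of suppliers": 2,
}

def overall_pressure(forces):
    """Average the force scores into one rough industry-attractiveness indicator."""
    return sum(forces.values()) / len(forces)

print(overall_pressure(FIVE_FORCES))   # 2.0 -> relatively attractive industry
```

An analyst would normally weight the forces rather than average them equally; the equal weighting here is purely for illustration.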
Likhi, in the article "Motives of Strategic Alliance Formation: A Value Chain Perspective", states the following: Porter says that the capability to perform particular activities, and to manage the linkages between these activities, is a source of competitive advantage.

In his well-known book Competitive Advantage: Creating and Sustaining Superior Performance (1985), Porter distinguishes between primary activities and support activities. Primary activities are directly linked with the creation or delivery of a product or service. They can be grouped into five main parts, namely: inbound logistics, operations, outbound logistics, marketing and sales, and service. Each of these primary activities is joined to support activities which help to improve their competence or efficiency. There are four major areas of support activities: procurement, technology development (including R&D), human resource management, and infrastructure (systems for planning, finance, quality, information management, etc.).

The basic model of Porter's Value Chain is presented here. Moreover, the term "margin" denotes that organisations realise a profit margin that depends on their aptitude to handle the linkages between all the activities in the value chain.

In the case of FedEx: Strategic planners at FedEx have been able to leverage both its primary and secondary activities and have ensured that they reap high margins. Its focus on technology development proved that even a secondary activity can become critical in defining success. FedEx's success lay in its pro-activeness. It realised that mere express delivery would not take it far; in order to revolutionise the globe, it would have to focus on total logistics and supply chain solutions.

Core competencies and capabilities at FedEx

A core competency is a unique factor that a business considers as being central to the way it, or its employees, works.
It fulfils three key criteria:

It provides consumer benefits.

It is not easy for competitors to imitate.

It can be leveraged widely across many products and markets.

When we analyse the case, it becomes evident that FedEx had various core competencies and capabilities.

Firstly, there is the underlying philosophy and vision of the management at FedEx. Innovation and pro-activeness is a culture in itself: either the organisation has it or it does not. When others in the industry were competing on prices, FedEx was thinking how to integrate seamlessly with its customers and provide value. It was thinking of evolving into a global logistics and supply chain company while others in the industry were complacent as express delivery firms. In 1974, FedEx opened a small store for Parts Bank and thus embarked on the journey of logistics management.

Fred Smith, Chairman of FedEx Corporation, was a visionary; he realised that overnight delivery of time-sensitive documents was a brilliant business idea. He understood that speed and reliability were crucial to clients in this business.

In the nascent years, when other players were buying space on commercial airlines, FedEx acquired its own transport fleet. Such vision was instrumental in saving the company huge costs in later years.

Secondly, the use of breakthrough technology and the internet acted as another core competence. In the 1980s, FedEx became the first company to give its drivers hand-held scanners that were used to send alerts to customers every time a package was picked up or delivered. In 1994, it became the first big transport company to have a website with tracking and tracing facilities. It had also started putting customers' catalogues on the website.
Thus FedEx had started redefining sourcing and procurement strategies for its clients, who were very happy with these value-added services; they had, in a way, outsourced their entire supply chain management to FedEx.

Thirdly, leveraging relationships as a strategy acted as yet another major competency for FedEx. It started using the COSMOS tracking network in 1979 and provided tracing and tracking services with the advent of the internet. In 1999, it made a deal with Netscape to offer a suite of delivery services at its Netcenter portal. This meant automatic integration of Netscape and FedEx, through which FedEx gained access to the 13 million members on the portal. As we see, FedEx leveraged both backward integration with its IT technology partners on the one hand, and forward integration with many of its clients, like Dell and Cisco, on the other. Thus, as of January 2000, FedEx was the world's largest overnight package carrier, with about 30 percent market share.

Main advantages and disadvantages of international trade to FedEx Corporation

FedEx gained hugely from international trade. Its tie-ups with Dell, Cisco, NatSemi and Netscape vouch for the fact that such backward and forward integrations would not have been possible if it had not ventured out of its home market.

The management also exploited the use of the internet and e-commerce to the best of its advantage. It started tying up with companies worldwide and managed its customers effectively. FedEx was able to serve as an extended, fully outsourced logistics and supply chain division of global companies.

It introduced various e-business tools for faster connections with FedEx shipping and tracking applications. As early as 1974, it had started logistics operations with Parts Bank and built up a small warehouse at Memphis. Thus, when others were just competing on prices and speed, FedEx was already way ahead with its first value-added service, well beyond transportation.
However, when one goes international, there are disadvantages as well. FedEx increased its scope of work and its base, spreading itself too thin. Multiple brands worldwide became difficult to manage. Costs started multiplying, as each sub-business had its own accounting, sales and marketing costs. While the likes of UPS had the advantage of promoting just one brand, UPS, to sell the company and its many service offerings, FedEx was trying to advertise five different sub-companies with completely unrelated names and businesses under the FDX banner, with separate sales and customer service teams.

However, a re-alignment and re-branding strategy was planned in time, and the advantages of international trade far outweighed its disadvantages and costs.

Section II: Classical and evolutionary schools of thought in the case of FedEx

Strategy theory is such a vast, multi-dimensional and multi-disciplinary academic field, with competing schools of thought each taking a different view of what strategy aims to achieve, that it becomes almost impossible to compare any two schools. Let us look at some of the schools of thought in the field of strategy and see the relevance of each in the case of FedEx.

The classical view of strategy is based on military parlance, in which the world is a fixed hierarchy with a solitary general who makes decisions. The concept has a long history in the military, and, etymologically, "strategy" literally means "what the generals do". However, a problem arises when some theorists take this too literally and try to replicate it in the business domain as it is.

The military model is supported by an intellectual heritage from economics. Many economists placed this singular figure right at the centre of their conception of strategy as a highly structured game of move and counter-move, bluff and counter-bluff, between competing yet interdependent businesses.
This view of individuals, in association with Smith's view that each individual is continually exerting himself to find the most profitable employment of whatever capital he can command, creates a stereotype of the manager who is focused on maximising return on investment. Classical strategy places immense confidence in the readiness and capacity of managers to adopt profit-maximising strategies through rational long-term planning.

Such cases are a rarity, as businesses do not consist of idealised "economic man". Managers not only fall short of setting output at the theoretically profit-maximising level, where marginal cost barely equals marginal revenue; most managers have no clue what their marginal cost and marginal revenue curves are.

Economists adjusted to this business imbecility by letting the markets do the thinking. In this view of the world, it is markets, not managers, that choose the prevailing strategies within a particular environment. For those strategists who hold the evolutionary view of competition, the survivors may turn out to be those who have adapted themselves to the environment. Competition is the most effective form of weeding out inefficiency or lack of adaptation; hence free entry into markets is the way to ensure healthy industries.

Application of the schools in the case of FedEx

In the case of FedEx, we see an amalgamation of both schools. When a firm has a first-mover advantage, it is at times possible to relate its thinking and actions to the Classical school of thought. From 1973 onwards, Fred Smith, Chairman of the group, steered the company through breakthrough technological advances and innovative practices. This is akin to the Classical ideology of maximising profits and shaping the industry, so to speak.

It was Fred's vision that enabled the organization to transform itself from an express delivery company into a global logistics and supply chain company.
He took the right decisions at the right time, most of which were instrumental in making it the market leader at that time and even some thirty years later. Noteworthy are the following actions:

As early as 1974, FedEx realised the importance of value-added services and of the transformation into a logistics company. It tied up with Parts Bank and built a small warehouse at Memphis to provide storage facilities.

Smith insisted on acquiring his own transportation fleet while others were booking space on public carriers.

FedEx was the first company to introduce hand-held scanners for drivers; this facilitated sending alerts to clients on pick-up and delivery.

In 1994, it was the first transport company to have a website with tracking and tracing facilities.

In 1999, FedEx tied up with Netscape and thus gained access to the millions of customers who were already on the Netscape portal.

It tied up with Dell, Cisco and NatSemi and almost acted as their logistics and supply chain management arm.

The above are some examples which show that from 1973 to 1999 there were a number of incidents suggesting that the management at FedEx acted in a Classical fashion and tried to maximise its profits and returns on investment as far as possible.

However, when we look at the re-branding strategy undertaken by the management in January 2000, it shows us the application of the Evolutionary school of thought. Towards 2000, UPS, TNT and DHL were competing strongly with FedEx. FedEx had five subsidiary companies, each with separate sales, marketing and customer service staff. Each unit had its own accounting practices. They were targeting different segments and were working independently. But this strategy resulted in a lot of duplication of resources and wastage of time and effort. The subsidiaries were not able to leverage any synergies, not even the legacy of the FedEx brand.

This is when the management at FedEx looked around and learnt from the market and the competition.
It undertook a major re-branding and re-alignment of resource strategy. All subsidiaries carried FedEx branding, denoting that they came from the same brand. They leveraged a consolidated pool of sales, marketing, accounting and customer service operations. It became a one-stop shop for customers of all sizes, whether the business was business-to-business, home delivery, ground or heavy freight. Typically this is true in any industry for a young firm that enters the market at an early stage. The firm can operate in a Classical manner, calling the shots. This is possible for several reasons: low threat of competitors, virtually no substitutes, low bargaining power of customers and high switching costs. This was the case for FedEx as well. But the dynamics change when other firms enter and the market matures. In that scenario, it is not the firm but the market that decides. The same pattern can be seen in other industries. When Coca-Cola started operations, it was the king of the aerated-drinks segment, charging whatever price it deemed fit, and customers were more than willing to pay it; years later, when competition matured, such advantages disappeared. There is a tendency to compete on prices and value-added services, because of which the market decides the viable price. To Coca-Cola, the threat came not only from Pepsi and other soft-drink beverages but also from health drinks and water. This is when the entire product mix was realigned and Coca-Cola introduced sports drinks, health drinks, tea and coffee. Thus it is not a question of preference; it is a question of which school is applicable given the time and maturity level of the industry. More often than not, in a mature set-up it is the Evolutionary school of thought that is more relevant, as market forces determine the pace and the direction in which change is required.
Businesses which realize this in good time, pick up timely cues and act upon them thrive, while others fade with time.

Section III: Processual School of Thought, Stacey's Four Loops and Strategy Implications

A processual view of an organization suggests that organizations are a cocktail of individuals, each of whom brings their own personality, personal agenda and cognitive biases to the organization. Thus, strategy is a continuing process of adjustment and evolution, because the rational economic man is only a state of utopia and people are only boundedly rational. Most Processual scholars argue that because of these constraints, strategy is nothing other than the continuous adjusting of routines to awkward messages and cues from the environment which gradually force themselves onto the manager's attention. Strategy is not only planned and executed action; it is also a means of making meaning of the chaos of the world.

Stacey's Integrated Model of Decision Making and Control

Stacey's Matrix is a critical tool that helps one navigate complexity in the field of strategy. It helps in adopting the right management action and defines the strategy one should focus on when faced with a complex environment with varying degrees of certainty and agreement among the groups in the organisation. Let us understand the axes first.
1. Close to Certainty: Concerns or decisions are close to certainty when cause-and-effect linkages can be evaluated. This is mostly the case when a very similar issue or decision has been made at some time in the past. One can then assess and extrapolate from past experience to predict the outcome of an action with a good degree of certainty.
2. Far from Certainty: At the opposite, extreme end of the certainty continuum are decisions that are far from certainty. These scenarios are often unique, or at least new to the decision makers. The cause-and-effect connections are not clear.
Extrapolating from past experience is surely not a good method of predicting outcomes in the far-from-certainty range.
3. Agreement: The vertical axis measures the degree of agreement about an issue or decision within the group, team or organisation. As you would presume, the management or leadership function changes depending on the level of agreement surrounding an issue.

Four Loops

Rational Loop: Rational decision making is possible when there is closeness to certainty and closeness to agreement. In such cases, the group has a consensus on views, options and decisions; high certainty also permits inferences from the past. There is less risk involved, so it is fairly easy to take a rational decision. As per the Processual school of thought, such cases are a rarity in real life. Even if there is unconditional clarity or certainty about an issue, absolute agreement within a team is seldom possible. This is because each individual comes with his own objectives and interests.
Political Loop (Overt and Covert): Some themes have a high degree of certainty about how outcomes are created but high levels of variability about which results are desirable. Neither plans nor shared objectives are likely to work in this context. Instead, politics become more significant. Coalition building, negotiation and compromise are used to set the organisation's agenda and direction. Other questions have a high level of agreement but not much certainty as to the cause-and-effect linkages needed to create the desired results. In these cases, monitoring against a set plan will not work. A strong sense of shared mission or vision may substitute for a plan in these cases. Comparisons are made not against plans but against the purpose and vision of the organisation.
In this region, the objective is to head towards an agreed-upon future state even though the specific paths cannot be predetermined.

Culture and Cognition

As per the Cultural school of thought, strategy formation is a collective process of social interaction, based on the beliefs and understandings shared by the members of an organisation. Stacey defines culture as a set of assumptions people simply accept without question as they interact with each other. Thus strategy is based on perceptions and is deliberate, if not fully conscious. This sits well with the Processual school too, because it assumes that people come with different perceptions and learn through a tacit process of acculturation. To conclude the above discussion, we can contemplate that strategies are often evolving, their coherence accruing through action and perceived in retrospect, while successive small steps finally merge into a pattern.
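The two axes and the regions of Stacey's matrix described above can be sketched as a small classifier. The 0-1 scales and the 0.5 thresholds are an illustrative assumption for the sketch, not part of Stacey's model:

```python
# Toy sketch of Stacey's matrix: map a decision's closeness to certainty
# and closeness to agreement (each scored 0-1 here, an assumed scale)
# onto the loop/region discussed above. Thresholds are illustrative.

def stacey_region(certainty: float, agreement: float) -> str:
    """Classify a decision by closeness to certainty and agreement."""
    if certainty >= 0.5 and agreement >= 0.5:
        return "rational"      # consensus + precedent: plan rationally
    if certainty >= 0.5:
        return "political"     # outcomes knowable, goals contested
    if agreement >= 0.5:
        return "judgmental"    # shared vision substitutes for a plan
    return "complex"           # far from both: strategy must emerge

print(stacey_region(0.9, 0.8))  # -> rational
print(stacey_region(0.2, 0.9))  # -> judgmental
```

The value of the matrix is exactly this routing decision: it tells a manager which mode of control (plan, politics, vision, or emergence) fits the situation, rather than prescribing one strategy for all situations.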

Friday, March 29, 2019

MPLS-Traffic Engineering

MPLS-Traffic Engineering

I. Web Server
We will be using the Apache web server in our project. The Apache HTTP Server Project is a collaborative software development effort aimed at creating a robust, commercial-grade, featureful, and freely available source-code implementation of an HTTP (Web) server. The project is jointly managed by a group of volunteers located all around the world, using the Internet and the Web to communicate, plan, and contribute to the server and its related documentation. The project is part of the Apache Software Foundation. In addition, hundreds of users have contributed ideas, code, and documentation to the project.
Figure 1. Apache general structure.
In Figure 1 we can see the general diagram of the Apache web server: how it works and how it is connected in our scenario.

II. File Server
We will be using TurnKey as the file server in our project. It is an easy-to-use file server that combines Windows-compatible network file sharing with a web-based file manager and includes support for the SMB, SFTP and rsync file transfer protocols. The server is configured to allow server users to manage files in private or public storage. It is based on Samba and AjaXplorer. This appliance includes all the standard features of TurnKey Core, and on top of that:
1. SSL support out of the box.
2. Webmin module for configuring Samba.
3. Includes popular compression support (zip, rar, bz2).
4. Includes a tool to convert text file line endings between UNIX and DOS formats.
5. Preconfigured workgroup: WORKGROUP.
6. Preconfigured netbios name: FILESERVER.
7. Configured Samba and UNIX users/groups synchronization (CLI and Webmin).
8.
Configured root as the administrative Samba user.
In Figure 2 we show how the file server works in our project.
Figure 2. Internal connectivity of the file server.

III. Proxy Server
There are many proxy servers to choose from, but we have chosen the Squid proxy server because it is fast and secure. The Squid Web Proxy Cache is a fully featured web caching server that handles all types of web requests on behalf of a client. When a client requests a web resource (web page, movie clip, graphic, etc.), the request is sent to the caching server, which then forwards the request to the real web server on the client's behalf. When the requested resource is returned to the caching server, it stores a copy of the resource in its cache and then forwards it on to the original client. The next time somebody requests a copy of the cached resource, it is delivered directly from the local proxy server and not from the distant web server (depending on the age of the resource, etc.).
Using a proxy server can considerably improve web browsing speed if frequently visited sites and resources are stored locally in the cache. There are also financial savings to be gained if you are a large organisation with many Internet users, or even a small home user with a quota on downloads. There are many ways a proxy can be beneficial to any network. The Squid proxy has so many features, access controls and other configurable items that it is impractical to cover all of the settings here. This section will give some essential configuration settings (which is all that is needed) to enable the server, and provide access controls to prevent unauthorized users from gaining access to the Internet through your proxy.
The configuration file has been documented extremely well by the developers and should provide enough information to help with your set-up; however, if you don't know what a setting does, don't touch it. Once you have successfully installed your Squid proxy server, you will need to configure all of the workstations on your internal network to be able to use it; this may seem like a long task depending on how big your internal network is. It also means that you will need to manually configure most of the applications that connect to remote web servers for data exchange; this includes all web browsers, virus update applications and other such utilities. Hmm, this could take a while. One great feature of Squid is that it can be used as an HTTPD accelerator, and when configured in conjunction with an iptables redirect rule, it becomes transparent to your network. Why? Because we no longer need to set up all of the applications on our workstations to use the proxy; we can now redirect all HTTP requests as they pass through our firewall to use our transparent proxy instead: much easier administration. An important point before proceeding: transparent proxies CANNOT be used for HTTPS connections over SSL (port 443). This would break the server-to-client SSL connection; depending on your security and the confidentiality of the protocol, it could also permit a man-in-the-middle attack due to intercepted (proxied) packets.
Figure 3. Proxy server connectivity.

IV. DNS Server
At its most fundamental level, the DNS provides a distributed database of name-to-address mappings spread across a set of nameservers. The namespace is partitioned into a hierarchy of domains and subdomains, with each domain administered independently by an authoritative nameserver.
Nameservers store the mapping of names to addresses in resource records, each having an associated TTL field that determines how long the entry can be cached by other nameservers in the system. A large TTL value reduces the load on the nameserver but limits the frequency with which updates propagate through the system.
Figure 4. Basic DNS operation.
Nameservers can handle iterative or recursive queries. For an iterative query, the nameserver returns either an answer to the query from its local database (possibly cached data), or a referral to another nameserver that may be able to answer the question. In handling a recursive query, the nameserver returns a final answer, querying any other nameservers necessary to resolve the name. Most nameservers within the hierarchy are configured to send and accept only iterative queries. Local nameservers, on the other hand, typically accept recursive queries from clients (i.e., end hosts). Figure 4 illustrates how a client typically discovers the address of a service using DNS. The client application uses a resolver, usually implemented as a set of operating system library routines, to issue a recursive query to its local nameserver. The local nameserver may be configured statically (e.g., in a system file), or dynamically using protocols like DHCP or PPP. After making the request, the client waits while the local nameserver iteratively tries to resolve the name (www.service.com in this case). The local nameserver first sends an iterative query to the root to resolve the name (steps 1 and 2), but since the subdomain service.com has been delegated, the root server responds with the address of the authoritative nameserver for the subdomain, i.e., ns.service.com (step 3).
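The iterative referral-chasing just described can be sketched with a toy delegation table. The table, the names and the address below are illustrative, mirroring the www.service.com example of Figure 4 rather than any real DNS data:

```python
# Toy sketch of iterative resolution: the local nameserver follows
# referrals until it reaches a server with an authoritative answer.

REFERRALS = {"root": {"service.com.": "ns.service.com"}}          # referral, not an answer
ANSWERS = {"ns.service.com": {"www.service.com.": "192.0.2.10"}}  # authoritative data

def resolve_iteratively(name: str) -> str:
    server = "root"
    while True:
        zone_answers = ANSWERS.get(server, {})
        if name in zone_answers:                  # authoritative answer
            return zone_answers[name]
        # otherwise follow the referral to a more specific nameserver
        for suffix, next_server in REFERRALS.get(server, {}).items():
            if name.endswith(suffix):
                server = next_server
                break
        else:
            raise LookupError(f"cannot resolve {name}")

print(resolve_iteratively("www.service.com."))  # -> 192.0.2.10
```

The design point is the division of labour: servers in the hierarchy only hand out referrals (cheap, stateless), while the client's local nameserver does the walking and caches what it learns.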
The client's nameserver then queries ns.service.com and obtains the IP address of www.service.com (steps 4 and 5). Finally, the nameserver returns the answer to the client (step 6) and the client is able to connect to the server (step 7).

V. VPN and Firewall
We are using two types of VPN here.
1. Site-to-site VPN: A site-to-site VPN allows multiple business locations in different areas to make secure connections with each other over a public network such as the Internet. It also provides extensibility for resources by making them accessible to workers at different locations.
2. Access VPN: A remote-access VPN allows individual clients to establish secure connections with a remote computer network. These clients can access the secure resources on that network as if they were directly connected to the network's servers.
Features of a VPN:
- Provides extended connections across multiple geographic areas without using a leased line.
- Improves security for data by using encryption techniques.
- Provides flexibility for remote offices and workers to use the business intranet over an existing Internet connection as if they were directly connected to the network.
- Saves time and cost for employees who work from virtual workplaces.
- A VPN is preferred over a leased line since leases are expensive, and as the distance between business locations increases, the cost of a leased line increases.
IPsec VPN and SSL VPN are two kinds of VPN which are widely used in WLANs.
Figure 5. VPN connectivity with our router.
As a firewall we are using iptables. Iptables/Netfilter is the most popular command-line-based firewall. It is the first line of defence of a Linux server's security. Many system administrators use it for fine-tuning their servers. It filters the packets in the network stack within the kernel itself. You can find a more detailed overview of iptables here.
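The rule-based packet filtering iptables performs can be sketched as a first-match rule list. The rule set below is a hypothetical illustration of the matching model, not iptables syntax:

```python
# Minimal sketch of first-match packet filtering in the style of iptables:
# each rule matches on protocol and destination port; the first matching
# rule decides, and a default chain policy applies when nothing matches.

RULES = [
    {"proto": "tcp", "dport": 80,  "action": "ACCEPT"},   # allow HTTP
    {"proto": "tcp", "dport": 443, "action": "ACCEPT"},   # allow HTTPS
    {"proto": "tcp", "dport": 23,  "action": "DROP"},     # block telnet
]
DEFAULT_POLICY = "DROP"

def filter_packet(proto: str, dport: int) -> str:
    for rule in RULES:
        if rule["proto"] == proto and rule["dport"] == dport:
            return rule["action"]        # first match wins
    return DEFAULT_POLICY                # fall through to the chain policy

print(filter_packet("tcp", 80))   # -> ACCEPT
print(filter_packet("udp", 53))   # -> DROP (default policy)
```

Because only headers are inspected and rules are tried in order, rule ordering matters in real iptables chains exactly as it does in this sketch: a broad early rule shadows every narrower rule after it.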
Features of iptables:
1. It lists the contents of the packet filter ruleset.
2. It is extremely fast because it inspects only the packet headers.
3. You can add/remove/modify rules in the packet filter rulesets according to your needs.
4. Listing/zeroing the per-rule counters of the packet filter rulesets.
5. Supports backup and restoration with files.

X. Conclusion
In this project there was a great deal to learn. We have seen many different kinds of servers, and it was difficult to decide whether we should use Microsoft or Linux servers; but we have seen that in most cases Linux servers are free and also very secure, so we decided to use Linux servers. In this project we have designed a complete network. In Figure 6 we have shown our whole network design.
Figure 6. Complete network design.

Acknowledgment
We are really grateful to have been able to complete our project in the time given by our professor, Dr Hassan Raza. This project could not have been completed without the efforts and contribution of my group partner. We also thank our professor Dr Hassan Raza for his guidance.

References
1. P. Mockapetris, Domain names: concepts and facilities, Internet Request for Comments (RFC 1034), November 1987.
2. Paul Albitz and Cricket Liu, DNS and BIND, O'Reilly and Associates, 1998.
3. Weili Huang and Fanzheng Kong, The research of VPN over WLAN.
4. Carlton R. Davis, The security implementation of IPSec VPN.
5. Baohong He, Tianhui, Technology of IPSec VPN, Beijing: Posts & Telecom Press, 2008.
6. NetGear VPN Basics (www.documentation.netgear.com/reference/esp/vpn/VPNBasics-3-05.html)

Estimation of Salbutamol Sulphate and Guaiphenesin

Estimation of Salbutamol Sulphate and Guaiphenesin

SIMULTANEOUS ESTIMATION OF SALBUTAMOL SULPHATE AND GUAIPHENESIN IN THEIR COMBINED LIQUID DOSAGE FORM BY HPTLC METHOD
Kruti D. Bhalara, Ishwarsingh S. Rathod, Sindhu B. Ezhava, Dolarrai D. Bhalara

ABSTRACT
A simple, specific, sensitive and validated high-performance thin-layer chromatographic (HPTLC) method was developed for the simultaneous analysis of Salbutamol sulphate and Guaiphenesin. Spectro-densitometric scanning-integration was performed at an absorbance wavelength of 280 nm. A TLC aluminium sheet precoated with silica gel 60 F254 was used as the stationary phase. The mobile phase system containing ethyl acetate : methanol : ammonia (25% w/v) (7.5 : 1.5 : 1.0 v/v/v) gave good resolution of Salbutamol sulphate and Guaiphenesin, with Rf values of 0.47 and 0.65, respectively. The calibration plot of Salbutamol sulphate exhibited a good linear regression relationship (r = 0.9987) over a concentration range of 200-1000 ng/spot. The calibration plot of Guaiphenesin exhibited a good polynomial regression relationship (r = 0.9997) over a concentration range of 10-50 µg/spot. The detection and quantitation limits were found to be 70 ng and 100 ng, respectively, for Salbutamol sulphate and 30 ng and 50 ng for Guaiphenesin. The proposed method was used for the estimation of both drugs in Ventorlin and Asthalin syrups containing Salbutamol sulphate and Guaiphenesin, with satisfactory precision (intraday) of 2.67-4.46% for Salbutamol sulphate and 2.39-4.42% for Guaiphenesin, and accuracy of 100.97 ± 0.50% and 100.45 ± 0.58% RSD for Salbutamol sulphate and Guaiphenesin, respectively.

INTRODUCTION
Salbutamol sulphate (SAL) is a selective β2-adrenoceptor agonist. It is used as an anti-asthmatic in the treatment of bronchial asthma, bronchospasm in patients with reversible obstructive airway disease, and in the prevention of exercise-induced bronchospasm (1-3).
It may also be used in uncomplicated premature labour. SAL is chemically (RS)-1-(4-hydroxy-3-hydroxymethylphenyl)-2-(tert-butylamino)ethanol sulphate (2, 3). Guaiphenesin (GUA) is used as an expectorant in the symptomatic management of coughs associated with the common cold, bronchitis, pharyngitis, influenza, measles, etc. (1-3). It is chemically (RS)-3-(2-methoxyphenoxy)-1,2-propanediol (2, 3). SAL and GUA combinations are available on the market for respiratory disorders where bronchospasm and excessive secretion of tenacious mucus are complicating factors, for example bronchial asthma, chronic bronchitis and emphysema. The chemical structures of GUA and SAL are shown in Figure 1. SAL (API) is official in the Indian Pharmacopoeia (2), British Pharmacopoeia (4), and US Pharmacopoeia (5), and SAL syrup and tablets are official in the British Pharmacopoeia (4). GUA (API) is official in the Indian Pharmacopoeia (2), British Pharmacopoeia (4), and US Pharmacopoeia (5), and GUA tablets, capsules and injection are also official in the US Pharmacopoeia (5). However, the combination of SAL and GUA is not official in any pharmacopoeia. Several methods have been reported in the literature for individual estimation of the drugs, but very few methods have been reported for the simultaneous estimation of SAL and GUA in a combined dosage form; these include chemometrics-assisted spectrophotometry (6), electrokinetic chromatography and gas chromatography-mass spectrometry (7), and micellar electrokinetic chromatography (8). HPLC, though an accurate and precise method, is time consuming, costly and requires a skilled operator. Therefore the aim of this study was to develop and validate a simple, specific, inexpensive, rapid, accurate and precise high-performance thin-layer chromatography (HPTLC) method for the simultaneous estimation of SAL and GUA in their combined dosage form.
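The Rf values reported for this method (0.47 for SAL, 0.65 for GUA) are simply the ratio of the distance migrated by a spot to the distance migrated by the solvent front. The spot distances below are illustrative back-calculations, scaled to the 45 mm development distance used in this method:

```python
# Rf (retardation factor) = spot migration distance / solvent-front distance.
# Distances are illustrative values consistent with a 45 mm development distance.

def rf(spot_distance_mm: float, solvent_front_mm: float) -> float:
    return round(spot_distance_mm / solvent_front_mm, 2)

print(rf(21.15, 45.0))  # SAL spot -> 0.47
print(rf(29.25, 45.0))  # GUA spot -> 0.65
```

A separation of about 0.18 Rf units between the two spots is what makes independent densitometric quantification of each drug possible on the same plate.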
The proposed method was successfully applied to two marketed cough syrups, Ventorlin and Asthalin, and the contents were determined without any interference from excipients.

MATERIALS
Reagents and Materials
(a) Solvents: Analytical reagent grade ethyl acetate (Finar Chemicals, India), methanol (RFCL Limited, India), ammonia (25% w/v) (s. d. Fine Chem Limited, India), isopropyl alcohol (s. d. Fine Chem Limited, India) and sodium bicarbonate (s. d. Fine Chem Limited, India).
(b) Standards: SAL and GUA were gift samples from Preet Pharma, Gujarat, India.
(c) Ventorlin syrup (GSK Pharmaceutical Ltd, India): Batch 02053, labeled 2 mg SAL and 100 mg GUA in each 5 ml of syrup, purchased commercially.
(d) Asthalin syrup (Cipla Pharmaceuticals, Mumbai, India): Batch 060305, labeled 2 mg SAL and 100 mg GUA in each 5 ml of syrup, purchased commercially.
Apparatus
(a) HPTLC plates: 20 x 20 cm, precoated with silica gel 60 F254, 0.2 mm layer thickness (E. Merck, Germany).
(b) Spotting device: Linomat IV semiautomatic sample applicator (Camag, Switzerland).
(c) Chamber: Twin trough chamber for 20 x 10 cm plates (Camag).
(d) Densitometer: TLC Scanner-3 linked to winCATS software (Camag). Scanner mode: absorbance-reflectance; scanning wavelength: 280 nm; lamp: deuterium; measurement type: remission; measurement mode: absorption; detection mode: automatic. Scanner settings: slit dimensions 3.00 x 0.1 mm.
(e) Syringe: 100 µl (Hamilton, Switzerland).
(f) Analytical balance: Shimadzu Libror AEG-220 balance.

METHODS
Preparation of SAL and GUA standard solutions
A stock solution of SAL (equivalent to 2 mg/ml) was prepared by dissolving 20 mg of SAL pure substance in 10 ml of methanol. A working stock solution of SAL (equivalent to 0.2 mg/ml) was prepared by transferring 1.0 ml of the above stock solution into 10.0 ml of methanol. A stock solution (10 mg/ml) of GUA was prepared separately by dissolving 100 mg of GUA pure substance in 10.0 ml of methanol. These solutions were stored under refrigeration at 4 °C.
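The working-stock dilution just described follows the familiar C1V1 = C2V2 relation; a quick check, assuming dilution to a final volume of 10.0 ml:

```python
# Dilution check via C1*V1 = C2*V2: transferring 1.0 ml of the 2 mg/ml SAL
# stock and making up to a final volume of 10.0 ml should give the
# 0.2 mg/ml working stock stated above.

def diluted_concentration(c1_mg_ml: float, v1_ml: float, v2_ml: float) -> float:
    return c1_mg_ml * v1_ml / v2_ml

print(diluted_concentration(2.0, 1.0, 10.0))  # -> 0.2 mg/ml
```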
A mixture of the drugs was prepared by transferring 1.0 ml of the stock solution of each compound to a 10 ml volumetric flask and diluting to volume with methanol (final concentrations: SAL, 0.02 mg/ml and GUA, 1 mg/ml).
Preparation of the calibration curve
10-50 microliters of the combined standard solution of SAL (0.2, 0.4, 0.6, 0.8 and 1.0 µg/spot) and GUA (10, 20, 30, 40, and 50 µg/spot), and two sample solutions (20 µl, corresponding to 0.4 µg SAL and 20 µg GUA/spot), were applied onto a precoated HPTLC plate using the semiautomatic sample spotter (bandwidth 3 mm, distance between tracks 5 mm). The plate was developed to a distance of 45 mm in an HPTLC chamber containing the mobile phase, i.e., ethyl acetate-methanol-ammonia (7.5 + 1.5 + 1.0 v/v/v), at 25 ± 2 °C. The plate was dried at room temperature. The substances on the silica gel layer were detected densitometrically at 280 nm. The chromatograms were scanned at 280 nm with slit dimensions of 0.1 mm x 3 mm; 400 nm was used as the reference wavelength for all measurements. Concentrations of the compounds chromatographed were determined from changes in the intensity of diffusely reflected light. Evaluation was via peak area, with linear regression for SAL and polynomial regression for GUA.
Preparation of sample solutions
A 5 ml aliquot of the commercial syrup (Ventorlin or Asthalin) was transferred into a 10 ml volumetric flask and the volume was adjusted with methanol. From this solution, 2 ml was pipetted into another 10 ml volumetric flask, and the volume was adjusted to the mark with methanol. This methanolic solution was used for chromatographic analysis.
(SAL 20 µg/ml and GUA 1 mg/ml)
Method validation
The method was validated in compliance with International Conference on Harmonization guidelines (9).
(a) Specificity. The specificity of the method was established by comparing the chromatograms and measuring the peak purities of SAL and GUA from standard and sample solutions of the liquid dosage forms. The peak purities of SAL and GUA were assessed by comparing the spectra obtained at the peak start (S), peak apex (M) and peak end (E) of a spot. The correlation between SAL and GUA spectra from standard and sample was also obtained.
(b) Accuracy. The accuracy of the method was determined by the standard addition method and by calculating the recoveries of SAL and GUA. Prequantified sample stock solution of SAL and GUA (1 ml, equivalent to 200 µg/ml of SAL and 10 mg/ml of GUA) was transferred into a series of 10 ml volumetric flasks. Known amounts of standard stock solutions of SAL (0, 1, 2 and 3 ml, equivalent to 200, 400, 600 ng/spot) and GUA (0, 1, 2 and 3 ml, equivalent to 0, 10, 20 and 30 µg/spot) were added to these prequantified working sample solutions and diluted up to the mark with methanol. Each solution (10 µl) was applied on plates in triplicate. The plates were developed and scanned as described above, and the recovery was calculated by measuring the peak areas and fitting these values into the regression equations of the calibration curves.
(c) Precision. The intraday and interday precision of the proposed method was determined by estimating the corresponding responses five times on the same day and on five different days over a period of one week for three different concentrations of SAL (200, 400, 600 ng/spot) and GUA (10, 20, 30 µg/spot).
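Precision in this study is reported as %RSD (relative standard deviation), i.e., 100 times the standard deviation of the replicate responses divided by their mean. A minimal sketch, using made-up replicate peak areas rather than the study's data:

```python
# %RSD = 100 * standard deviation / mean of replicate responses.
# The replicate peak areas below are illustrative numbers only.
from statistics import mean, stdev

def percent_rsd(values):
    return 100 * stdev(values) / mean(values)

areas = [1850, 1872, 1841, 1905, 1868]   # five replicate peak areas
print(round(percent_rsd(areas), 2))
```

Smaller %RSD means tighter replicates; acceptance limits in the low single digits, like the 2-5% ranges reported below, are typical for densitometric methods.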
The repeatability of sample application was checked by measuring the areas of seven spots of the same concentration of SAL (400 ng/spot) and GUA (20 µg/spot) applied on the same plate, while the repeatability of peak-area measurement was checked by measuring the area of one spot of SAL (400 ng/spot) and GUA (20 µg/spot) seven times. The results were reported in terms of RSD.
(d) LOD and LOQ. The LOD and LOQ of SAL and GUA were determined by preparing a series of solutions containing decreasing concentrations of SAL from 0.02 to 0.004 mg/ml and GUA from 1 to 0.001 mg/ml by appropriate dilution of the stock solutions of these drugs (SAL 0.02 mg/ml and GUA 1 mg/ml).
(e) Robustness. The robustness of the method was studied by changing the composition of the mobile phase by ±0.2 ml of the organic solvent, the development distance by ±1 cm, and the temperature by ±2 °C.
Determination of SAL and GUA in Liquid Dosage Form
The responses of the sample solutions were measured at 280 nm for quantification of SAL and GUA by the proposed method. The amounts of SAL and GUA present in the sample solutions were determined by fitting the responses into the regression equations of the calibration curves for SAL and GUA, respectively.

RESULTS AND DISCUSSION
Since SAL and GUA have nearly the same wavelength maxima, interference becomes prominent in UV-visible spectrophotometry. Moreover, the estimation of a component at its null point is not as reliable as estimation at the wavelength of maximum absorbance. Consequently, for highly specific methods like HPLC and HPTLC, physical separation of the substances is usually required before their quantitative determination. An attempt was therefore made to develop a validated separation technique for SAL and GUA in the mixture by HPTLC. The chromatographic conditions were adjusted in order to obtain an efficient and simple routine method. Different mobile phases were tried for the separation of the above substances.
The optimized solvent system was ethyl acetate : methanol : ammonia (25% w/v) (7.5 : 1.5 : 1 v/v/v). The Rf values were found to be 0.47 for SAL and 0.65 for GUA (Figure 2). The wavelength maximum of SAL was found to be 279-280 nm and that of GUA 274-275 nm. As both compounds have nearly the same λmax, 280 nm was selected for simultaneous scanning of SAL and GUA. In this way, SAL can be detected at low concentrations in the presence of GUA at high concentrations.
Preparation of the calibration curve
As the concentration range of SAL is from 200 to 1000 ng, direct proportionality (linearity) of the concentration with its absorbance was obtained. Linear regression analysis was applied to the calibration curve of SAL. The equation is y = 3.659x + 409.8 (Figure 2). With the objective of allowing simultaneous analysis by developing the method over a wider concentration range, a non-linear regression mode was used for the estimation of GUA. The polynomial regression mode is applicable if wide concentration ranges (150 to 1100) are worked out and high amounts of substance are measured in the non-linear detector range. The equation for calculation is y = -4.207x^2 + 578.12x + 9343.48 (Figure 3).
Method Validation
Specificity. The excipients present in the liquid dosage form did not interfere with the chromatographic responses of SAL and GUA, as the peak purities were r(S, M) = 0.997 and r(M, E) = 0.9996 for SAL and r(S, M) = 0.997 and r(M, E) = 0.9996 for GUA. Also, good correlations (r = 0.9999 for SAL and 0.9998 for GUA) were obtained between standard and sample spectra.
Accuracy. The mean recoveries obtained for SAL and GUA were 100.07 ± 0.49% and 100.04 ± 0.63% RSD, respectively. The accuracy results are shown in Table 2.
Precision. The values of RSD for intraday and interday variations were found to be in the ranges of 2.56-4.57% and 2.67-4.46% for SAL and 1.95-4.20% and 2.39-4.42% for GUA.
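Back-calculating an unknown amount from a measured peak area uses the inverses of the two regression equations reported above; for GUA the quadratic is solved for the root lying inside the calibrated range. The peak-area inputs below are illustrative values consistent with those equations, not measured data:

```python
import math

# Inverses of the reported calibration equations:
# SAL (linear):      y = 3.659*x + 409.8,                  x in ng/spot
# GUA (polynomial):  y = -4.207*x**2 + 578.12*x + 9343.48, x in µg/spot

def sal_amount_ng(peak_area: float) -> float:
    return (peak_area - 409.8) / 3.659

def gua_amount_ug(peak_area: float) -> float:
    # Solve -4.207*x^2 + 578.12*x + (9343.48 - y) = 0 and keep the root
    # that falls inside the calibrated 10-50 µg/spot range.
    a, b, c = -4.207, 578.12, 9343.48 - peak_area
    disc = math.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a)

print(round(sal_amount_ng(1873.4), 1))   # area for a 400 ng SAL spot -> 400.0
```

This inversion is exactly the "fitting the responses into the regression equations" step used for the assay and recovery calculations.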
The RSD values for repeatability of sample application were found to be 1.86% and 1.48% for SAL and GUA, respectively, while those for repeatability of peak-area measurement were 0.47% and 0.18% for SAL and GUA, respectively.
LOD and LOQ. The LOD and LOQ were 70 and 100 ng for SAL and 30 and 50 ng for GUA.
Robustness. The method was found to be robust, as the results were not significantly affected by deliberate but slight variations in the method parameters.
Determination of SAL and GUA in Liquid Dosage Form
The proposed HPTLC method was applied successfully to the determination of SAL and GUA in the liquid dosage form. The results obtained for SAL and GUA were comparable with the corresponding label claim values (Table 4).

CONCLUSIONS
Owing to the absence of an official method for this binary mixture, the high-performance thin-layer chromatographic method proposed in this article could represent an alternative to the chemometrics-assisted spectrophotometry, electrokinetic chromatography and gas chromatography-mass spectrometry methods previously published. This method has been validated for linearity, precision, accuracy, and specificity, and has proved to be convenient and effective for the quality control of SAL and GUA in marketed syrups, without any interference from excipients.

ACKNOWLEDGEMENTS
We are thankful to the Principal, L.M. College of Pharmacy, for providing us with the facilities for the successful completion of our project.

REFERENCES
1. Klaus Flory, H. G. B., in Analytical Profiles of Drug Substances and Excipients, Vol. 25, p. 121, Academic Press, Inc.
2. (1996) The Indian Pharmacopoeia, The Controller of Publications, Delhi.
3. Parfitt, K. (Ed.) (1999) Martindale: The Complete Drug Reference, The Pharmaceutical Press, UK.
4. (2007) The British Pharmacopoeia, Department of Health on behalf of the Health Ministers, London.
5. (2007) The United States Pharmacopoeia-30 NF-25.
6. El-Gindy, A., Emara, S., and Shaaban, H. (2007) J. Pharm. Biomed. Anal.
43, 973-82.
7. Pomponio, R., Gotti, R., and Hudaib, M. J. Sep. Sci. 24, 258-264.
8. D., N. L., Quiming, N. S., and Saito, Y. (2009) J. Liq. Chromatogr. Related Technol. 32, 1407-1422.
9. International Conference on Harmonization (2005) Validation of Analytical Procedures: Methodology (Q2(R1)), Technical Requirements for Registration of Pharmaceuticals for Human Use, Geneva, Switzerland.

Table 1. Data indicating various validation parameters of the developed method.
Table 2. Results of the precision study for SAL and GUA determination by the proposed HPTLC method.
a Repeatability of sample application.
b Repeatability of measurement of peak area.
Table 3. Data for the recovery study of SAL and GUA.
Table 4. Analysis results for SAL and GUA liquid dosage forms by the proposed HPTLC method (n=5).
Figure 1. Chemical structures of (a) SAL and (b) GUA.
Figure 2. Calibration curve of SAL.
Figure 3. Calibration curve of GUA.
Figure 4. (a) HPTLC chromatogram showing separation of SAL and GUA in their combined standard solution at 280 nm, with Rf 0.47 and 0.65, respectively. (b) Chromatogram showing the separation of SAL and GUA in Ventorlin syrup.
Figure 5. (a) HPTLC chromatogram showing separation of SAL and GUA in their combined standard solution at 280 nm, with Rf 0.47 and 0.65, respectively. (b) Chromatogram showing the separation of SAL and GUA in Asthalin syrup.