Applied Research and Basic Research
The context of the study and the research rationale differentiate applied research from basic research. Basic research seeks to increase knowledge of administration and business processes, whereas applied research seeks to improve the understanding of a given business or managerial problem. While basic research leads to the development of universal principles that apply to processes, including their link to results, applied research leads to the development of solutions to a particular problem (Saunders, Lewis, and Thornhill 9). Hence, the knowledge generated is limited to the scope of the problem under investigation.
The findings of basic research are significant because they add value to the general population (society). The findings of applied research, by contrast, are helpful and pertinent to managers of organizations. Basic research is conducted in universities, where the people who undertake it select the research topics and objectives, and it is executed within flexible timelines. Applied research may be conducted in a variety of contexts, including organizations and institutions of higher learning; its objectives are negotiated with the research originator, and it is executed within rigid timescales (Saunders, Lewis, and Thornhill 9).
Research Ideas and Topics
Various ideas can form the basis of a good research topic. However, it is important to pursue an idea that sustains one’s interest throughout the research process. Indeed, an idea constitutes a good research topic if the researcher can formulate it into research questions and aims that guarantee clarity of the idea (Saunders, Lewis, and Thornhill 20). Without clarity, it becomes difficult to establish a plan for researching the idea.
Techniques that help to Refine Research Ideas
Different techniques can help in establishing and refining research ideas. A research idea must be within one’s capabilities and the skills that can be acquired within the specified time. For example, where learning a new foreign language is mandatory for the research, the limited time available makes such a research idea inappropriate. A researcher’s capability is also limited by the available financial and time resources; therefore, a research idea is inappropriate where its financial and time requirements exceed the available resources. The researcher needs to select an idea for which he or she can access data. The idea should also link to theory, research questions, aims, and objectives. Saunders, Lewis, and Thornhill reckon that a research idea also needs to lead to a topic with “symmetry of potential outcomes” (23).
The Role of Theory
Theory plays an important role in deciding one’s approach to research design. It helps in demonstrating knowledge of the relevant literature on the topic under study. Thus, through theory, a scholar can consider the various approaches used by other researchers in the same area of study. The scrutiny of various theories allows the scholar to seek an alternative approach to research design and avoid duplicating past methodologies. This move ensures that the research fills a gap in the study area or helps to confirm findings established using other research designs. Through theory, one can demonstrate a departure from the methodologies of past research. Theory also helps in determining whether a research design takes a deductive or an inductive approach.
Assessing a Research Proposal
A research proposal can be assessed based on the appropriateness of its objectives and research questions, and on its time and financial resources. The research rationale needs to draw on previous research publications, including the appropriate theories. The topic areas should help in informing the objectives and research questions. The method in a proposal should therefore be developed in line with the research objectives and questions. The financial resources, together with the allocated time, should also measure up to the methodology deployed.
Evaluating the Content of a Critical Literature Review
A critical literature review helps in developing a thorough understanding of past research that relates to the study objectives and questions. The critical literature review involves logically identifying the authors’ arguments, followed by highlighting various areas that require additional insights. The assessment involves noting biases and omissions, including how they relate to the research goals and questions (Saunders, Lewis, and Thornhill 63). The content of a critical literature review encompasses the academic theories in a given area of research. The review demonstrates one’s command of an up-to-date body of knowledge, including the clarity and accuracy of references. A critical review also means commenting on language, tradition, influence, and objectivity (Saunders, Lewis, and Thornhill 64).
Evaluating the Structure of a Literature Review
Evaluating the structure of the literature review can be accomplished by determining whether the review:
- Narrows down from the general to the specifics of the research objectives and questions.
- Provides a synopsis of crucial themes and ideas.
- Contrasts, compares, and summarizes the key research of writers in the area of study.
- Highlights past research relevant to the topic under investigation.
- Provides detailed accounts of past research and evidence of how it relates to the current study.
- Identifies aspects where the current research offers new insights.
- Leads readers to the next sections of the report by exploring the above issues.
- Explains how the selected works under review were obtained, to enhance transparency.
The most efficient search strategy should ensure the location of up-to-date literature. The search strategy should also help in eliminating information overload. Therefore, it is necessary to guarantee clarity when defining the research questions and objectives. Planning is necessary before commencing a literature search. The researcher has to define the research parameters, search terms, and keywords that he or she intends to use. The researcher should also identify databases and search engines before specifying the criteria to be deployed in selecting appropriate and useful studies from a list of potential search results (Saunders, Lewis, and Thornhill 75).
A systematic review involves examining literature based on logical relationships as opposed to only beliefs. It involves reviewing text through a preplanned comprehensive search strategy. Systematic review reporting is accomplished through the following five stages:
- Clear criteria are set for assessing and selecting articles for review.
- Articles are selected based on the quality of their research and findings.
- Studies are synthesized using clear frameworks.
- Findings are balanced.
- Findings are impartial and comprehensive.
Qualitative, Quantitative, and Mixed Methods
A research strategy is executed in the context of the standard research techniques adopted in a given field. Research may use qualitative, quantitative, or mixed methods. In quantitative research, the scholar collects numerical data for analysis to determine any relationships, correlations, differences, and/or similarities among the findings. Qualitative research does not use numerical data; it relies on non-numerical data such as words and images. Mixed methods use both qualitative information and numerical data. Therefore, they have qualitative and quantitative aspects.
Exploratory, Descriptive, and Explanatory Study
Exploratory research aims at establishing new insights, asking questions, and/or assessing a given phenomenon from a different angle to determine “what is happening” (Saunders, Lewis, and Thornhill 139). This form of research is significant when clarifying a given problem. Descriptive research aims at offering accurate profiles of people, situations, and/or events. Explanatory research determines the causal relationships that exist between different variables. Its goal is to study a given problem or situation by relating it to variables (Saunders, Lewis, and Thornhill 140). Data collected in this type of research is subjected to various statistical tests, such as correlations, to determine the actual connection between the independent and dependent variables.
Case Study Features
In a case study, the boundaries between the context of the study and the phenomenon under investigation are not clearly evident. This situation contrasts with an experimental study, in which research is executed under controlled conditions. A case study is a “strategy for doing research that involves an empirical investigation of a particular contemporary phenomenon within its real life context using multiple sources of evidence” (Saunders, Lewis, and Thornhill 145-146). Case studies are important where it is desirable to gain an in-depth understanding of a context together with the processes being enacted.
Grounded Theory Strategy
The grounded theory strategy involves building theory from a combination of induction and deduction (Saunders, Lewis, and Thornhill 149). It helps in explaining behaviors while putting emphasis on building and developing a theory. Grounded theory begins with data collection without any prior effort to formulate theoretical frameworks. Rather, the theory emerges from data acquired through various observations. From the data, the researcher generates predictions, which are then tested by way of further observation. Testing may result in the confirmation or rejection of the predictions. Therefore, the strategy entails constant reference to the data to expand and analyze the hypothesis (Saunders, Lewis, and Thornhill 149).
Internet-mediated research attracts various ethical issues that need to be considered to guarantee integrity in research. A researcher has to consider the implications of accessing data. Another important issue encompasses explaining to the people from whom the data is obtained why the data is required and/or the purpose of obtaining it from them. Ethical issues also emerge concerning the disclosure of one’s presence and intentions on an online platform, the confidentiality of the information obtained, ensuring anonymity, and the incorporation of feedback from research subjects.
Principles of Data Protection and Management
Managing data ethically and lawfully requires compliance with data management and protection principles, which include reliability, anonymity, privacy, and sensitivity. These issues influence the authority to access research data from an organization or any other research community. For example, access to data may be declined due to a lack of confidence that the data issued will be treated with the utmost confidentiality and anonymity.
Probability sampling enables the making of inferences from a sample about the population so as to answer research questions successfully. Sampling frameworks have implications for generalizability. In probability sampling, the probability of each case being selected from the population is known. Hence, a researcher is capable of answering the research questions while meeting research objectives that require statistical estimation of the traits of the population. The characteristics of the sample can be extended to the population, implying that research findings in probability sampling are generalizable with minimal chances of error.
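As a minimal sketch of this idea, simple random sampling, one form of probability sampling, can be illustrated in Python. The population figures below are invented purely for illustration; the point is that every case has the same known selection probability, so a sample statistic can serve as an estimate of the corresponding population trait.

```python
import random
import statistics

# Hypothetical population: annual revenues (in $1,000s) of 1,000 firms.
# The figures are invented purely for illustration.
random.seed(42)  # fixed seed so the sketch is reproducible
population = [random.gauss(500, 120) for _ in range(1000)]

# Simple random sampling: every case has the same known probability
# of selection (here 100/1000 = 0.1).
sample = random.sample(population, k=100)

# The sample mean is a statistical estimate of the population trait.
print(f"population mean: {statistics.mean(population):.1f}")
print(f"sample mean:     {statistics.mean(sample):.1f}")
```

Because selection probabilities are known, the gap between the sample mean and the population mean can be bounded statistically, which is what makes findings from probability samples generalizable.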
In research projects, secondary data presents both advantages and disadvantages. Secondary data encompasses facts that have already been collected for another purpose, apart from the one for which it will be reanalyzed in the research (Saunders, Lewis, and Thornhill 257). Secondary data is advantageous since some of its forms, such as government surveys (for instance, census data), are easily available over the internet, in publications, or on CD-ROM in the libraries of institutions of higher learning. Professional bodies and organizations maintain online sites from which data can be accessed. Companies may also provide information gateways that permit access to online databases.
Secondary data requires less resource expenditure. The easy accessibility of secondary data implies huge savings in terms of time and financial resources. The data is usually of high quality. It permits the conducting of longitudinal studies while providing comparative and contextual statistics (Saunders, Lewis, and Thornhill 269). It may also lead to unforeseen discoveries, and the data is permanent.
Secondary data also has disadvantages. Some records, such as company minutes, are reserved for internal use only and thus have limited accessibility; accessing them involves engaging in negotiations. Secondary data may also have been collected for purposes that do not match one’s needs, and the initial purposes of data collection can influence how the data is presented. The researcher has no control over its quality. Secondary data may provide definitions that are unsuitable for the research, and its cost may be unjustified.
Evaluating Secondary Data
Several issues, such as overall suitability, precise suitability, and measurement bias, need to be checked when evaluating secondary data. Regarding measurement bias, secondary data should be evaluated to determine whether deliberate distortions or intentional misrepresentations of the data are evident. Measurement bias may also arise due to changes in the manner of data collection. Secondary data should also be evaluated based on the costs involved in its collection compared to the benefits associated with the data. These costs include financial and time resources. Both reliability and validity ensure the suitability of data for a specific use. To satisfy the criterion of overall suitability, secondary data needs to be evaluated in terms of coverage, unmeasured variables, and measurement validity (Saunders, Lewis, and Thornhill 273-274).
Structured, Semi-structured, and Unstructured Interviews
Structured interviews deploy questionnaires developed from predetermined, standardized, and/or identical questions. Therefore, they encompass questionnaires that are administered by the researcher or the interviewer (Saunders, Lewis, and Thornhill 320).
Responses to the standard questions are also standardized and pre-coded. Semi-structured interviews and unstructured interviews are not standardized. In semi-structured interviews, researchers have a list of themes and questions to be covered in an interview, although the questions may vary from one interview to another. Therefore, some questions may be omitted, depending on the organizational context. The order of the questions also varies, depending on the flow of the conversation. Unstructured interviews are informal: the interviewee talks freely about his or her beliefs, behaviors, and events that relate to the research topic.
Reliability refers to the capacity of research to yield consistent findings. It incorporates the perspectives of repeatability and the stability of measurement instruments. Research is reliable if two scholars can administer the same questionnaire to the same sample and obtain analogous responses. Therefore, the lack of standardization in semi-structured and in-depth interviews makes these tools suffer from reliability/dependability issues.
Validity implies the degree to which tests measure what they are set to determine. In situations where one fails to develop the interviewee’s trust or lacks credibility, the information collected suffers from poor validity and credibility. During semi-structured and in-depth interviews, the interviewee may only give limited information, which raises doubts about its validity and credibility. For non-standardized interviews, validity derives mainly from the opportunity to clarify questions.
Data Quality Issues
Overcoming quality issues when conducting semi-structured and in-depth interviews requires the researcher to incorporate appropriate measures into his or her study. One may plan how to demonstrate credibility and/or obtain the interviewee’s confidence. Overcoming credibility or quality issues requires possessing information about the context of the interview. Such information may include the financial data of an organization and company reports. Drawing on such information during the interview, the interviewee can be requested to offer detailed accounts of the information.
This strategy promotes accuracy in responses. Supplying appropriate information to interviewees before the interview also helps in overcoming credibility issues. Such information includes the themes of the interview. It gives interviewees adequate time to prepare, and they can even assemble company documentation that supports their arguments or responses. This strategy has the effect of promoting the reliability and validity of information supplied through semi-structured and in-depth interviews.
Research design needs to define the nature of the relationship that is likely to exist between variables. This claim holds for explanatory research where data collected is used in testing theories. By defining the expected relationships, a questionnaire that helps in collecting the right information can be designed to help in preventing the collection of unsuitable information for research.
Answers to Research Questions and Achievement of Objectives
The data collected should aid in achieving the research objectives. It should also facilitate the answering of the research questions. This outcome can be accomplished by deciding on the main outcome of the research (explanatory or descriptive). One should sub-divide “each research question or objective into more specific investigative questions about which he or she needs to gather data” (Saunders, Lewis, and Thornhill 368).
This step should be repeated where the investigative questions are suspected to lack sufficient precision. This stage is followed by the identification of the various variables that steer the collection of data to answer each of the identified investigative questions. The researcher then identifies the level of detail required for every variable, followed by developing the appropriate measurement questions that capture data at the level necessary for every variable (Saunders, Lewis, and Thornhill 368).
A questionnaire needs to be valid. This attribute can be assessed by examining the questionnaire’s internal validity, content validity, predictive validity, and construct validity. Internal validity is assessed by determining whether the questionnaire measures what it was designed to measure by “looking for other relevant facts that support the answers found using the questionnaire, relevance being determined by the nature of researchers’ research question and their judgment” (Saunders, Lewis, and Thornhill 373). Content validity is assessed by a careful examination of the literature reviewed to determine the appropriateness of the research definitions.
The second approach entails assessing each question to determine whether it is “‘essential’, ‘useful but not essential’, or ‘not necessary'” (Saunders, Lewis, and Thornhill 373). Predictive validity is assessed by conducting a correlation analysis.
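To sketch what such a correlation analysis looks like, the hypothetical example below correlates invented questionnaire scores with an invented later criterion measure (e.g. retention in months); a strong positive Pearson coefficient would suggest the questionnaire predicts the outcome it claims to measure.

```python
# Hypothetical data, invented for illustration: questionnaire scores
# and a later criterion measure for the same eight respondents.
scores = [12, 15, 9, 20, 14, 18, 11, 16]
criterion = [24, 30, 18, 40, 27, 35, 21, 33]

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# A coefficient close to +1 indicates strong predictive validity.
print(f"r = {pearson_r(scores, criterion):.3f}")
```

In practice the coefficient would be accompanied by a significance test, but the basic computation is simply the correlation between questionnaire scores and the criterion they are meant to predict.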
Saunders, Mark, Philip Lewis, and Adrian Thornhill. Research Methods for Business Students. Harlow: Pearson, 2012. Print.