Research tends to be one of social work students’ least favorite subjects. Maybe it’s lingering dread from undergraduate statistics, the desire to “just do social work” and help people, or boredom with just how dry the material can be for those who did not choose research as a concentration. After all, many of us were drawn to social work because of its rich tradition of serving others and immersing ourselves in people’s unique narratives.
Research, however, does have an essential place in our work for the following reasons:
- We are called to be evidence-based practitioners. While evidence-based practice doesn’t rely on research data alone (clinical experience and client preferences/values count as well), being scientifically grounded is still an important part of practice.
- Even though it’s not therapy, research also tells many rich stories that can change the way we approach our work. It helps us uncover patterns and test or dismantle hypotheses, which shapes both how we practice and the assumptions behind that practice. For example, regardless of whether you use CBT or a specific branch of psychodynamic therapy, common factors research suggests that the therapeutic relationship is often a more powerful curative agent than the clinician’s specific theory. Check out Gurman and Messer’s Essential Psychotherapies (2010) for a discussion of this.
- Research protects clients from harm. For example, for many years, so-called “conversion therapies” were practiced in an attempt to change clients’ sexual orientation and/or gender identity. In many cases, this tragically led to increased suicidality, depression, and self-harm, both because these therapies are ineffective and, more importantly, because of the harmful assumptions they make about who people are.
When it comes to the exam, what are some research and evaluation topics you should know?
- The difference between the independent variable (the one that is often manipulated) and the dependent variable (the one that is measured).
- The concept of validity (Are we measuring what we intend to measure? How are we measuring it?), including internal validity (Can we be confident that the independent variable caused the observed change?) versus external validity (How generalizable are our findings from this sample to a larger population?).
- Reliability (e.g., interobserver reliability, test-retest reliability, parallel forms reliability).
- Data collection and analysis: Qualitative versus quantitative data, descriptive statistics, and inferential statistics.
- Types of research design: single-subject design (repeated measurement of one client or system, as in a case study), experimental design (the most rigorous, as it requires randomization and a sizeable sample), and quasi-experimental design (similar, but without random assignment).
- Program evaluation (e.g., the difference between formative and summative evaluations).