Analysis touting prototype MoCo “Countering Violent Extremism” program flawed, expert says

“To call [the “Montgomery County Model”] effective in the way they have would lead the average citizen to believe it reduces actual violence, which is distinctly not the case. […To] say that it is an evidence-based program after a single evaluation of a single component of the overall program is irresponsible.  To be “evidence-based,” a program requires much more rigorous and repeated study.” 
— M. Garrettson

“Countering violent extremism” (CVE) programs have been in vogue in law enforcement and counter-terrorism for several years.  The idea seems simple: enlist communities and their leaders in noticing persons who might be prone to violent extremism, take steps to watch those persons, and perhaps somehow guide them back to the straight and narrow.

Yet can the alleged potential for violent ‘extremism’ really be accurately estimated and acted on from one individual to the next?  Will one kind of allegedly violence-prone ‘extremist’ be singled out while others are left alone?  More broadly, is it right or is it counterproductive to try to turn communities or congregations into informant networks? Critics at the Center for Constitutional Rights, Defending Rights & Dissent, the ACLU, and many other groups raise these questions and say these programs often amount to thinly veiled institutional Islamophobia — as perhaps evidenced by President Trump’s plans to recast the programs as CIE: “countering Islamic extremism.”

WORDE has been touting the report discussed below for nearly a year as “proof of concept.”  Original copy here; online-searchable copy (Scribd) here.

One of the prototypes for CVE programs nationwide has been the “Montgomery County Model” (since renamed “BRAVE,” for “Building Resilience Against Violent Extremism”) developed by the Montgomery Village-based WORDE organization and its founder, Hedieh Mirahmadi.  (For an earlier post about this program and its proponents at the private/county “Faith Community Working Group” partnership, see here.) Employing methods honed in part during post-9/11 stints in Afghanistan, Mirahmadi (who has since moved on to the FBI) and WORDE set about creating an effective CVE program here in Montgomery County.

When Michael J. Williams et al. published a 167-page report in June 2016 titled “Evaluation of a Multi-Faceted, U.S. Community-Based, Muslim-Led CVE Program,” it appeared that WORDE had succeeded.  To quote the conclusion WORDE has trumpeted ever since:

We found that of all of WORDE’s activities, their volunteer-service and multicultural programming had intended positive effects on 12 of 14 CVE-relevant outcomes. Additionally, there with [sic] no discernable [sic] unintended effects. To wit, these results make WORDE’s volunteer -service and multicultural programming the first evidence-based CVE-relevant programming in the United States.

Yet the report raises red flags.  For one thing, the report isn’t published in a peer-reviewed journal, but as a grant report hosted by a federal agency (NCJRS) that’s part and parcel of a federal domestic security apparatus dispensing growing sums of grant money for CVE programs. If a restaurant and the magazine reviewing it are both owned by the same parent company, five star ratings may be a little easier to come by.

Accordingly, MCCRC turned to a local expert to “evaluate the evaluation.”  Mariana Garrettson has a Master of Public Health from the University of North Carolina at Chapel Hill, and has made her career in violence prevention research.  Our discussion follows.


MCCRC:  Can you summarize the CVE program the authors evaluated for us?

MG: The program, as they describe it, includes four components:

  1. Community education (to raise awareness of violence and offer sessions on things like conflict resolution, youth engagement and family support)
  2. Service provider education (to increase sensitivity to Islam and build relationships between different kinds of service providers–including law enforcement–who might be involved)
  3. Volunteerism and Multi-Cultural Programming (to get diverse youth and people working together in creative and volunteer efforts to promote civic engagement, cross-race and cross-religion social integration and family relationship building)
  4. Peer gate-keeper training (training high school students to recognize and assist peers who might be experiencing isolation, personal crisis, or bullying)

MCCRC:  Does the research presented truly support the key conclusion?

MG: This document presents mostly work done in the creation of the overall program and the development of scientifically valid tools to measure the effectiveness of programs like this.  It claims to also present evidence of the effectiveness of the first three components (although it seems like the work is mostly around the third component). They did not evaluate, nor do they claim to have evaluated, the peer gate-keeper training, which is the part that seems the most concerning to most people.  The discussion of the development of service provider-law enforcement relationships, which seems the other main concern, is also absent from this document.

The one component of the program which they did evaluate, and which they say they found to be effective, has methodological flaws that make the claim an overreach.  Finally, we are years away from anyone being able to prove that this sort of program actually reduces incidents of violence.  To call this program effective in the way they have would lead the average citizen to believe it reduces actual violence, which is distinctly not the case.


MCCRC: Was the research methodology and analysis sound?

MG: They have done several things very well. Still, they make claims that are not supported by the evidence that they provide.  To say that this is now an “evidence-based CVE-relevant program” is distinctly beyond where they are.  They have used the development and implementation of this particular program to develop good tools.  And they state that the educational components 1-3 seem to mostly show the results they were looking for. (I have some concerns about their methodology here, but more on that next.)  But to say that it is an evidence-based program after a single evaluation of a single component of the overall program is irresponsible.  To be “evidence-based,” a program requires much more rigorous and repeated study.

The most basic scientific problem is one of defining terms.  What is the definition of “violent extremism”?  How does it differ from gang violence or youth violence in general?  If they are including gang violence, youth violence, or other mass casualty events, they should be basing their work on the fairly extensive research literature in these areas.  For example, there is a whole network of Centers of Excellence in Youth Violence Prevention that have studied, implemented, and evaluated many different approaches to youth violence prevention.

The next issue arises out of this initial lack of a specific definition for violent extremism: why is there an emphasis on Muslims and Islam?  Do Muslims catalyze more violent episodes domestically than any other demographic group?  If so, where is the epidemiological evidence for that?  Once there is a unique and specific definition, then it becomes possible to count episodes and study them for clues on who to target, where and how.

Ultimately this evaluation claims that people who attended the programming developed for the CVE effort showed beliefs and behavioral intentions consistent with what the program wanted.  (They surveyed for things like “making friends outside my race,” “feel a sense of purpose,” “feel accepted,” “learn about cultures other than my own,” etc.)  The findings are based on self-report surveys done by participants after programs.  There was no pre-test against which to compare the post-test findings.  There were “non-participants” included in the analyses who participated in volunteer activities, but not the ones that had CVE-specific programming.  Unfortunately, there is no explanation of how the comparison group was recruited, and no data on their results are included.

The authors do say that there was no difference between the participants and the comparison group.  They say this lack of a difference is not critical to the evaluation.  I differ strongly with this statement.  With no pre-test against which to compare the participants’ findings and no difference between the participant and comparison groups, there is NO evidence that the CVE programs caused the results that they found.  The sample was a convenience sample, meaning that people chose to participate in the surveys and focus groups.  Within the context of this report, the authors present no evidence that shows that the participants’ beliefs, attitudes, and behavioral intentions were influenced by the program they are evaluating rather than being something they had before participating in this program.
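[MCCRC note: to make the design problem Garrettson describes concrete, here is a purely hypothetical sketch — the numbers are made up, none are taken from the Williams et al. report — showing why post-only survey scores from a self-selected sample cannot distinguish a real program effect from attitudes participants already held when they walked in the door.]

```python
# Hypothetical illustration (not data from the report): why a post-test-only
# design with a self-selected sample cannot establish that a program caused
# the attitudes it measures.
import random

random.seed(0)

def simulate(baseline_mean, program_effect, n=200):
    """Return (pre, post) attitude scores (1-5 scale) for n simulated participants."""
    pre = [random.gauss(baseline_mean, 1.0) for _ in range(n)]
    post = [score + program_effect + random.gauss(0, 0.3) for score in pre]
    return pre, post

mean = lambda xs: sum(xs) / len(xs)

# Scenario A: the program genuinely shifts attitudes (baseline ~3.0, effect +1.0).
pre_a, post_a = simulate(baseline_mean=3.0, program_effect=1.0)

# Scenario B: self-selected participants already scored ~4.0; the program adds nothing.
pre_b, post_b = simulate(baseline_mean=4.0, program_effect=0.0)

print(f"Scenario A post-test mean: {mean(post_a):.2f} (pre-test {mean(pre_a):.2f})")
print(f"Scenario B post-test mean: {mean(post_b):.2f} (pre-test {mean(pre_b):.2f})")

# An evaluation that only collects the post-test sees roughly the same ~4.0 average
# in both scenarios and cannot tell them apart; only the pre-test (or a comparison
# group that actually differs) reveals which story is true.
```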


MCCRC: What did the researchers and program do well?

MG: The inclusion of community members and service providers early on in the process of figuring out how to focus the overall program and to identify what people think of as the greatest risk factors for violent extremism is very strong, especially since the research literature is limited as to what the risk factors are from an empirical standpoint.  They also go through a quality process of developing tools that can be used with future programs to evaluate effectiveness.

The other strong thing they do is look at how this program is similar to and different from other bystander intervention programs (which have been used largely in anti-bullying and anti-sexual-assault interventions).  The bystander model is very strong and is a logical theoretical base to start from.  They also talk about gate-keeper training, which has been very successful in suicide prevention efforts in middle and high schools.  It seems to me an excellent idea to use this concept and broaden it: what peers should look for as red flags for self-directed violence, and how they can help, is not a big leap from what peers should look for as red flags for other-directed violence.

The key is that the strategies for how to intervene need to be focused on helping the individual at risk of hurting themselves or others to get help and stay safe.  Using these programs to funnel prospective criminals to law enforcement is not consistent with what the models were created to do.


MCCRC: Did the authors look for, or suggest ways of looking for, “discernible unintended effects”?

MG: The authors state that there were no unintended consequences from program activities.  However, they give no explanation for how they came to this conclusion.  They also fail to explain how they measured, or even asked about, unintended effects.


MCCRC: Please sum up your views of the evaluation.

MG: They have done several things very well.  Still, my final thoughts are that the evaluation is inherently flawed and makes claims that are beyond what it can support.  The process of developing the educational components using input from community members is scientifically strong. Likewise, their development of measurement tools is well done and sets the stage for doing a full evaluation of program outcomes.  But with no evaluation of the peer gate-keeper training, there is no evidence of what it entails, how it was implemented, or what the effects (intended or unintended) might be.  There was no discussion of evaluation of the components that involved law enforcement or service providers.  The one part that they did evaluate had methodological flaws that make it impossible to know if the outcomes measured were caused or influenced in any way by the CVE program.

MCCRC: Thank you.


===

For more on CVE programs, we recommend “CVE: Myths and Facts” by the Center for Constitutional Rights (CCR), resource pages by the CCR and the ACLU, and articles by Defending Rights & Dissent; see also Waqas Mirza’s MuckRock investigation of the “Montgomery County Model” in particular. Montgomery County residents are encouraged to add their names to a petition demanding an end to the county’s involvement with CVE programs.


One Response to Analysis touting prototype MoCo “Countering Violent Extremism” program flawed, expert says

  1. Jim Huang says:

    As someone who works in public health evaluation (and now having read the original evaluation report), I agree with Mariana’s assessment. I agree that the evaluation work done here does not constitute anything that anyone could even pretend to call “evidence-based”. The authors couldn’t even stretch their conclusions to “well-implemented” (honestly, the evaluation doesn’t even really address implementation, it’s mostly a tome in two parts on 1) how to construct measures that could be used in the future to evaluate a CVE program and 2) how to recruit people into CVE, and the authors were clearly most interested in recruitment). The authors pretend in many ways that this is an outcomes evaluation. Plus they use post-intervention statistical differences in order to create the same measures that they then try to evaluate this CVE program on (that’s methodological bull). I agree with Mariana that they ignored a huge body of background literature that would be relevant for their claims. And I agree with her assessment of the methodological issues. There are also ethical issues in whether the subjects of their evaluation were aware they were being evaluated in this manner, as well as ethical issues in the evaluation’s premise that this community is especially at risk for violent extremism.

    Additionally, the evaluators don’t really tease out what it is that WORDE would have done on its own (the rather impressive work of bringing teens out to do community service, etc. I’m guessing would’ve happened with or without the CVE program), and what the CVE program itself actually demanded.

    In my view, the authors here are completely compromised in their professional ethics. This was not an evaluation that was done in good faith. It’s inappropriate to focus half of your (claimed outcomes) evaluation product on ways to induce further recruitment without establishing outcomes. I’m completely disgusted and appalled by the authors of this evaluation.

