Overcoming Challenges in Evaluation for Department of Education-Funded Grants at HSIs


Hear me? Ok, there it is. Ok, my name is Veronica Fematt, and I am the Policy
and Practice Dissemination Coordinator for the Center for Evaluation and
Educational Effectiveness, or CEEE, as we call it here at Cal State Long Beach.
Thank you for joining us. Good
morning, we’re excited to host today’s webinar, “Overcoming Challenges in
Evaluation for Department of Education-Funded Grants at Hispanic-Serving
Institutions.” This webinar is part of a series of webinars and briefs dedicated
to examining and fostering effective practices to support the success of
first-generation, low-income, and/or Hispanic students in STEM at our Hispanic-Serving Institutions. These webinars and briefs are part of a broader project
that has received funding from the US Department of Education. CEEE is part of a
partnership that involves 10 California State University campuses working
together to learn collectively about how best to support HSI-STEM student success.
Our goal with this project is to broaden the conversation beyond California and
the CSU and share what we’re learning both the good and the bad and engage
with our colleagues nationally to identify challenges, opportunities, and
promising practices. For more information on this project, you can go to our
website; let me go ahead and show you. Here you go, so for more information on
the project you can go to our website shown here, and for any questions
regarding this project, or future or past webinars, you can send them to me
directly at my email shown on this slide as well. Now
I’d like to go ahead and point out a few features from Zoom that are available to
everyone. All of our participants are muted right now, and if you hover your
mouse over your screen you’ll be able to see the chat function. If you’re
experiencing any audio or visual issues, please use this feature to alert me. I
will be monitoring the chat activity throughout the webinar.
You can also use this feature to communicate with each other. We’re going to be
holding the Q&A session towards the end of the presentation, at which point you
will be able to pose questions to our presenters. If you like someone’s
question, or you were thinking of asking the same question and you see another
participant ask it in the Q&A, you have the ability to upvote that question,
which will move it to the top of the queue, and the presenters will be able to
see it among the first few questions posed to them. Lastly, this webinar is
being recorded; it will be closed captioned and uploaded to our website, and
everyone who registered for this webinar will receive an email once the
recording is made available. You can also access past
webinars on this website. Ok, now I would like to introduce you to our
presenters. First we have Rebecca Eddy. She is the founder and current President
of Cobblestone Applied Research & Evaluation, Inc., which serves clients
throughout the US with offices in Southern California, Washington, D.C., and
Seattle, Washington. She is a primary evaluator on more than 80 external
evaluation projects, and she has 20 years of experience in teaching and
conducting evaluation and applied research studies, including 14 Department of
Education Title III and Title V funded grants. She continues to train new
evaluators through the Certificate of Advanced Study in Evaluation at Claremont
Graduate University. Steven Margell has held positions as both staff and faculty
in higher education. He currently leads the evaluation efforts for the
Department of Education Title III grants at his campus, with an emphasis on
cross-campus collaboration. Beyond standard reporting procedures, his efforts
focus on rigorous evaluation of various grant-funded activities and student
support services, followed by dissemination of results to inform policy on
campus to support and institutionalize effective programming. With that being
said, I would
just like to add that I’m very excited about today’s webinar as we’re getting
to hear from both an external evaluator and an internal evaluator, and with that
being said I would like to now turn it over to Steven.
[Veronica]: Steven, can you hear me?
[Steven]: Yeah, sorry about that.
[Veronica]: There you are!
[Steven]: I was muted, haha. Thanks Veronica!
[Veronica]: No problem!
[Steven]: And I
am not convinced my screen is sharing correctly. Sorry about that, everyone.
Thanks for joining us today. My name is Steven Margell, and as Veronica said,
I’m the lead evaluator for the Title III HSI-STEM grant at Humboldt State
University in California. Today we’ll be covering a wide range of topics, starting
with a brief background on the Higher Education Act Title III and Title V
grants. After that we’ll discuss evaluation and evaluators, followed
by both program and evaluation design. Then we’ll discuss how to identify
appropriate measures of success, reporting the results of your efforts,
and how your evaluation process can be used strategically to respond to your unique
campus needs. Throughout the presentation we’ll be discussing various challenges
that Rebecca and I have encountered as well as solutions to those challenges.
However, I’m sure there will be challenges that are unique to your program, and
if you would like to bring those to the question-and-answer box, we’ll discuss
them at the end of the presentation. Again, as Veronica mentioned, if you’re
having issues with video, or audio please use that chat feature to communicate
with Veronica directly. Now the Higher Education Act of 1965 is intended to
strengthen the educational resources of our colleges and universities and to
provide financial assistance for students in post-secondary and higher
education. Today we’ll focus on the Title III and
Title V sections and specifically those subsections which relate to
Hispanic-Serving Institutions. Broadly speaking, Title III grants are
intended to equalize educational opportunities for minoritized and
low-income students. They’re meant to support colleges and universities that
serve students who historically have been denied access to post-secondary
education due to race or national origin, and whose participation in higher
education is, frankly, in our nation’s best interest, so that equality of access
and the quality of post-secondary education opportunities are enhanced for
all potential students. Funding is intended to expand the capacity of your
institution to serve minoritized and low-income students. Specific goals of
Title III grants include improving and strengthening the academic quality, the
institutional management, and the fiscal stability of the campus. Title III
institutional programs are wide-ranging, from the AANAPISI to the HSI-STEM to
the SIP programs, just to name a few, but there are others. Program eligibility is
predetermined by the Department of Education using campus data from the
federal IPEDS reporting of your student composition. Your institutional research
office is responsible for submitting this data and the Department of
Education has an eligibility matrix where you can look up your institution
and see which programs you are eligible for. We have a link to the eligibility
matrix in a recommended reading list that Veronica will be sending out to
everybody that registered for the webinar, after we complete it. So you can find
that eligibility matrix, as well as all of our references that we’re citing
today, on that recommended reading list. Moving on to Title V grants: these are
intended to support colleges and universities’ efforts to improve and expand
their capacity to serve Hispanic students and other low-income students. Funding
is intended to expand and enhance academic offerings, program quality, and
institutional stability.
The DHSI and PPOHA are Title V institutional programs that are Hispanic-serving
in nature, and again, program eligibility for these different grant programs is
predetermined; you can look yourself up on the eligibility matrix that we’ll
send out after the presentation. Now let’s discuss why evaluation is
important and how your evaluation plan can strengthen your program goals. Even
as early as in the grant proposal stage the evaluation plan helps you to set
timelines and goals for your program activities. Additionally, program
evaluation is required for all Title III and V grants in order to
appropriately determine the merit or worth of the programming. The initial results
of your evaluation can and should be used to improve the programming
throughout the lifetime of the grant. Evaluation also keeps you accountable to
federal requirements, but it’s also critical for keeping your programming
relevant to campus needs. Both federal and campus stakeholders such as
departments and administrators must be informed of the programming efficacy in
order to work towards institutionalization. And finally
evaluation is critical to inform conversations related to future funding
of the program once those federal funds are exhausted. The Department of Ed
provides a fantastic document titled “Guidance on Evaluation,” which is also
cited in your recommended reading list. This website has very good examples of
impact and evaluation criterion questions to help guide you both during your
development and while reviewing the evaluation results to inform program
development. We’re not going to be going into those questions today, but they’re
available and they are valuable considerations when designing your
evaluation plan. Now I’m going to be handing off to Rebecca Eddy who will
continue speaking about evaluators and program design, as well as evaluation
design. And Rebecca I see you’re still on mute [Rebecca]: I apologize for that, thanks so much
Steven!
[Steven]: No problem!
[Rebecca]: Yes, okay, thanks so much again for being
here today. We want to talk about quite a few things related to evaluation. As
Veronica mentioned initially, I am an external evaluator whereas Steven is an
internal evaluator, so we’ve got slightly different perspectives when it comes
to working with programs to do their evaluation. First I want to acknowledge
that I have an entire team of people that work with me and with our clients and
really do a fantastic job, and I want to acknowledge them now. They include
Lena Patelski, Nicole Galport, Ashley Elltihian, Courtney Coulter, and
Dave Nickelson. And our really terrific clients, which include several of the
CSUs, including California State University San Bernardino, and that’s the
reason why we’re here today talking about them. So I
wanted to give a little bit more information, from our experience, about what to
expect when working with an evaluator, because we often get calls from people
who haven’t worked with an evaluator before, or are writing a Title V or Title
III grant for the first time and don’t really know what to expect from the
process. So I just wanted to give a little bit of insight into what you can
expect. In the proposal stage, we often like to be involved early, where we can
provide lots of recommendations related to relevant research literature, and
measures that may be used to track the data you will collect over the course of
the program, including valid and reliable scales, which we prefer to use; these
should relate to both implementation data and short-term and long-term outcomes.
We also like to work with programs to really think through what are some
feasible targets that they would like to achieve during the course of the grant.
Often it’s about a five-year project, so we want to identify appropriate
performance measures that are realistic to achieve in the time frame allotted.
Now, we don’t always have the opportunity to work on the proposal stage, but
when we do, we like to have as much of this ingrained as possible so that when
we do the evaluation design, it’s appropriate. Once you’re
actually working on the evaluation and the program is underway after it’s been
funded, we like to have ongoing communication regarding what’s happening within
the program and the evaluation. Some prospective clients have the understanding
that we come in maybe once or twice a year to kind of just check on things. We
don’t like to work that way, because we’d like to have an ongoing conversation
about program development, as well as how we can establish good data collection
systems and things like that. So we like to be fairly involved on a regular
basis. Often the challenge is being able to collect data that they haven’t
collected before, creating more efficient or more feasible data collection
systems, or using instruments that would be appropriate. This is something that
evaluators generally can help with, because they have experience doing this
sort of work, even if it’s just setting up a data template to be able to collect
something locally on campus for a program. We often help with the submission of
IRB documents and renewals; we know a lot about ethics and consent forms and
what’s appropriate to ask, so that’s often something the evaluation team will
be charged with doing as well. Of course we analyze all the data; your
evaluator should have a lot of experience doing this kind of work, usually
related to both quantitative and qualitative analysis, and then taking those
data and translating them into reasonable data-based recommendations: initially
more formative kinds of evaluation focused on program improvement, and then
ultimately determining merit or worth in a summative evaluation at the end.
Each year there are reports due; of course those of you who are familiar with
these know the initial progress report and the annual progress report, and we
usually help with those to make sure that everything is covered and accurate,
both in the executive summary as well as the other parts of the report. We
also do, and we’ll talk about this a little later in terms of other reporting
options, what we call a local report, which is a really good integration of
everything that’s happened in the program throughout the year, including all of
the findings, the implementation description, the outcomes, and then
recommendations. Often those are used by a variety of stakeholders, and we will
create, when necessary, other appropriate documents like infographics or briefs
or other things that people can use to disseminate the results even more
widely. Just a little bit of a contrast between external and internal, if you’re
not familiar with the differences: obviously, all evaluators should have the
technical expertise to help with reporting and other procedures, they should
have a good background in evaluation design and program design as well as
methods, and they should be able to provide formative and summative feedback
based on everything they’re collecting throughout the program. Whereas Steven’s
strength is really as an internal evaluator. Of course, internal evaluators have
the advantage of having really good context for their local institution and an
awareness of campus needs that may not be apparent to someone who’s on the
outside; they have good relationships with the multiple different programs and
people on campus, and perhaps better insight into greater strategic plans and
how those align with the current grant. The
advantage of having an external team or individual is that this is sometimes
actually required, because the perception is that it’s going to be more
objective and provide accountability; often, for example, NSF will have an
external evaluator component within their grant call. We work with several
different institutions, at least six different HSIs in particular on these types
of grants, so we know what other programs have done and what has worked well.
Often that gives us good insight into what might work well for another
institution, even just in terms of the type of data tracking systems that are
available, as well as the kind of expertise of how someone solved a problem or
what other strategies they might have adopted to help their population of
students. So it’s really a wider range of breadth that we have across multiple
institutions, as opposed to just working in depth with one. When thinking about
designing a program’s
specific activities, obviously there are lots of considerations. You certainly
want to respond to the priorities outlined in the RFP, but that should first be
based on the needs of students, faculty, and others at the institution; we want
to start with what they actually need, or what unmet needs exist at the
particular institution. Definitely use empirical research to guide the design of
something that might work well for the population of students or faculty that
would be part of the grant, and certainly balance local needs with the
feasibility of the program offerings available at a particular institution. Then
finally, really think about institutionalization as a primary consideration for
these types of grants: what can be left with the institution to strengthen it
when the grant funding is complete? Okay,
again, I want to talk a little bit about empirical research, because I think
this is something that programs often don’t start with, but I think they really
should when designing their program. Obviously we’ve learned a lot over time
about what works and what doesn’t work well, but we still need to ensure we go
through the process of increasing the probability of success for any particular
program by really relying on things that work. Success is often defined as
improving student achievement through grades or pass rates, but success can mean
other things as well. Often we want to have long-term outcomes related to things
like retention and graduation for students. Regardless of the outcome, we want
to go through a sort of formal, systematic process of using research as a basis
for this. We can define some basic program elements;
for example, you might hear that mentoring is a good idea for students. It’s a
popular sort of program activity, and something that people know about, but we
want to take a few different approaches to this. Really think about: what does
the research literature say? How is a mentoring program designed to really
effect change in students, and why might mentoring work? That’s what we try to
get into: what are the psychological mechanisms that might support the idea that
mentoring would be good for students? Obviously this has to be under the
umbrella of allowable program activities, such as student services or faculty
development, but again we want to start with the idea of what participant need
is to be met. What needs do our students have that are currently not being met
that mentoring would help to ameliorate? So we need to operationally define a
lot of things: what do we mean by mentoring? What is an adequate dosage? And
again, this should all be based on the research literature. Then, what evidence
do we have that might allow us to believe that mentoring would be a worthwhile
intervention for our students, and in what particular way? So again, this is
all related to the program
design. Once we have done our due diligence to ensure participant needs are
met, we want to make sure that we design the evaluation in response to this. We
want to know that the evaluation questions being asked are appropriate. We want
to know how we’re tracking implementation of these ideas: what are we defining
as successful outcomes? What are the key constructs that we’re really
measuring? We also ask about positive or negative side effects that we might
not otherwise pay attention to, based on what’s been done in the past. That’s
something that we really want to make sure we investigate, and we also want to
have an evaluation design that is the most rigorous and most appropriate for
what we’re going to be measuring. Evaluation obviously should be
designed as an ongoing effort, which provides good information about how well
the program is being implemented and its connection to some initial outcome.
Again, this would be the formative stage; then, once we have some evaluation
findings, we obviously want to feed that back into the program with
recommendations for improving or tweaking it in some way to improve the
effectiveness of that effort. Ultimately, we want to determine if the program
has been successful, and then the idea is to feed this information back into
the empirical literature, to contribute something, to say we now know something
more about mentoring or some other kind of intervention that could be really
helpful for other institutions as well. For those of you who are familiar with
What Works Clearinghouse: I don’t
want to spend a lot of time here, but there’s certainly lots of information
available about what actually works, and the site is actually pretty good. I
will tell you that part of the charge of some of these Title V and Title III
grants is to really improve the presence of post-secondary research on What
Works Clearinghouse. There’s not a lot on there currently; it’s really very
K-12 focused. So the idea is that, with some of these grant funds, you want to
have a rigorous enough evaluation design that can measure some of these things,
if possible, and then try to improve the literature related to some of these
interventions. There are intervention reports and practice guides, and you can
do a search on there for finding what works, so all those things are available.
The other thing I would recommend, for those who are just getting started, if
you don’t know a lot about the
research literature and you want to find out really good places to start in
terms of addressing some of these basic needs: I love Hattie’s work on visible
learning. There’s a lot of K-12 in there too, but it’s certainly relevant to
post-secondary. The updated version of How People Learn came out late last year;
this is the National Research Council publication. It’s full of really good
basic as well as more applied research. More recently, something related to
social-psychological interventions, about wise interventions, by Walton & Wilson
came out late last year, and then there’s a Nation article too. So, we have a
reading list available that we’ll distribute after the webinar to make sure
that people have access to some of these recommended sources as well. Really
briefly, again, I
mentioned What Works Clearinghouse; there’s certainly a preference for rigorous
evaluation designs within it. We think of an experiment, with randomly assigned
participants, as the most rigorous type; obviously it has the highest internal
validity. Although there are other really good quasi-experimental designs you
can have as well, and those are certainly part of this idea of having a
rigorous evidence base. But you have to look at things like attrition and
baseline equivalence, and there are all these standards too, so your evaluator
should be really familiar with these types of standards. The most recent
handbook, version 4, is available on the website, and you should really take a
look at that to make sure that you begin with the end in mind of what kind of
evidence you want to produce. Now, not all Title III and Title V grants require
this kind of evidence. I know that a few years ago the Title III HSI-STEM
program did require it for some part of the intervention within the grant. So
this is something that we’re working on with a couple of our programs right now,
but it’s certainly something that you want to think about as an evaluator:
designing the most rigorous possible evaluation, at least in part, as part of
the evaluation of the project, to be able to produce good evidence of something
that’s working. Ok, so we want to talk about some challenges and solutions.
This specific set is related to program design challenges that we’ve
encountered.
There’s certainly others, and we would definitely welcome questions at the Q&A
at the end. One challenge might be, for example, that the program design isn’t
feasible for the current population of students. Often programs are designed as
a sort of theoretical exercise by a grant writer, or a PI, or a combination of
people, and sometimes the design doesn’t actually work when you put it into
practice. A possible solution is, again, using data to modify elements of the
program that aren’t meeting the needs of students. This is the whole purpose of
having formative feedback: regularly revisiting the data, looking at the data,
and seeing if something is working and, if not, making a change. Another
challenge might be that you’re not sure if a new intervention will work with
faculty, or students, or any kind of group; in that case, we often will do some
pilot testing, whether that’s just of an instrument or of a program element, to
make sure that we iron out the design challenges before expanding it, maybe to
a different department or to the rest of campus. Pilot testing should certainly
be done early in the program, really making sure you work that out. And seeing
how people react to it is really invaluable in terms of collecting good data
before you roll it out to other parts and other entities. Another challenge
might be that the
program setup is going well, people are hired and things are in place, but
participation is really low. We find this with some brand-new initiatives that
haven’t been tried before, and really we find it’s often a marketing thing: you
have to work really hard and really intentionally on making sure people know
about what’s happening in the program. Often you can use an advisory board and
get members to help get the word out, and you can certainly do flyers, social
media, and other presentations to make sure that people know about what’s
happening programmatically on campus. It’s definitely something that we would
recommend people think really carefully about at the very beginning, as opposed
to just reacting to it later. Another challenge might be that a new policy or
campus priorities don’t align
with program goals or design. We see this happen. I know in the CSU system a new
policy may come out, and then campuses have to react to that in some particular
way that impacts the project design. The program officer wants to be helpful;
we think they need to be kept in the loop, and they can approve changes if
program activities aren’t meeting students’ needs, or are potentially violating
some policy, or would be violating some policy in the next year, and that needs
to be addressed within the program design. So those are some things to think
carefully about, and we can again entertain questions about program design if
you have some later. With respect to evaluation design, we have a couple of
things that have come up as challenges too. One is that good implementation
data is not available or feasible to collect. Again, this is about starting at
the very beginning and making sure that we establish accurate and feasible data
collection systems. So if we don’t know how many people are attending a
workshop because we’re not collecting that, the idea is for the evaluator to
work with the program staff to figure out a really accurate way of collecting
those data, getting better at that, and finding a balance between what’s least
difficult and what’s most accurate for each of the campus projects. Another one
might be that the
original evaluation design doesn’t rigorously test the intervention. This often
happens if folks get called in to a project in its second or third year, the
previous evaluators are no longer working with it, and it wasn’t originally
designed to be rigorous. It is possible to then create some more rigorous
evaluation tests, even if it’s not in the very first year. So again, being
really familiar with design standards, options for types of design, or even
some component of the design to be able to test: that’s something that we would
definitely recommend, even if it’s a couple of years in. Another evaluation
design challenge might be that, again, the design changes in response to local
needs or practices. You have to keep in mind the original goals and the
original activities that are still relevant; you still have to meet
requirements in terms of what types of activities the funding allows. Being
able to reread the proposal and make sure that any updates are reflective of
those original intentions and goals would be really important to keep in mind.
Okay,
the other thing I’m going to talk about really briefly is using a logic model.
They’re often required now in a lot of the grants that we work on at the
proposal stage, and if you haven’t worked with logic models before, there are
lots of resources that we’ll give you. If you are familiar with logic models,
that’s terrific; you are probably familiar with the idea that they will often
be used in program design and evaluation for a number of reasons: really
aligning everything in terms of the program activities, what kind of
implementation data you’re collecting, and then ultimately how some of these
activities lead to short- and long-term outcomes. There’s lots that can be done
with logic models, and they’re dynamic; they can be done in a couple different
ways. But I’ll just do a quick review if you’re not familiar with what this
looks like. It is a tool to really help you think through all of these issues.
Again, we kind of start with needs; as I mentioned, we want to ask: what needs
are we addressing? Some needs are met and some are unmet, and we really want to
focus program design elements on unmet needs. An example might be that students
come to college unprepared to pass college-level math courses; it’s not an
unfamiliar need for a lot of the campuses that we work with. Then we want to
say, okay, what resources do we need? We have lots of institutional
resources already: we’ve got expertise, we’ve got faculty, we’ve got staff,
physical space, curriculum, whatever that is. So you should identify what you
need to input as a new part of the system, as well as what you currently have
in place that you can really leverage to implement a new project. We need to
figure out what the specific activities are; this is the program design part of
it. Who will participate? We want to be really clear about what the
requirements are. What is the dosage? How can we measure whether something is
high quality or not? How can we measure if something is sufficient to meet a
need? That’s where we really monitor the implementation, to ensure that it’s
not just the number of students or the types of activities that were present,
but: is this the right kind of activity? Is this the most appropriate one to
meet the need, and are these actually having an effect? Then we of course
establish the answer to that by determining some outcomes.
So often short-term outcomes will be things like immediate attitude
change, maybe academic self-efficacy, or some other kind of change like
increased content knowledge or math skills. Those are short-term
outcomes, the kinds of changes we expect to happen as a result of
participating in the activity. Then, ultimately, what do those lead to? Often the long-term outcomes are things like graduation or retention. The impact, or really
long-term outcome, is about achieving a diverse STEM workforce. And that's the
kind of thing that we ultimately want to design toward in each of our
projects. A couple of other elements that are likely part of your logic
model are assumptions. We have to identify the assumptions we are
making, because sometimes those are wrong, so we need to be really explicit
about them. And then also some external factors: for example, the
economy has huge implications for whether people stay in school
and finish their degree or choose to enter the workforce instead. So
there are lots of external things that can happen outside the control
of the program activity, and it's worth noting them so you're
aware of them. A couple of things related to challenges and solutions with
logic models. In particular, let's say the challenge is that early results
fail to show a positive effect on students. That's when we really sit
down and revisit the intervention model in the research
literature, right? Is this appropriate? Are we addressing the right need? Is the
design of the program really appropriate to meet students where they are,
or does this not work with this population? The idea here comes
from Chen, if you're familiar with the literature.
We really want to distinguish between implementation failure and theory
failure. Sometimes the program just wasn't implemented well: people
didn't attend, the dosage wasn't high enough, or the delivery wasn't high
quality. Theory failure, on the other hand, is when what we assumed would happen
didn't pan out like we thought it would,
and therefore we need to identify a different kind of intervention to
meet the need. So it's about sitting down,
looking at the logic model, and figuring that out. We often recommend revisiting this a
couple of years after the project’s been in place to be able to see if there are
tweaks that can be made. Another challenge might be that there’s not
enough time to measure long-term outcomes within the time period of the
grant. This is something we see all the time. If you're familiar with
how these grants go, the first year you're doing setup, you're
hiring staff, and doing all these other things, and often there isn't enough
time for a full cycle of students to start and
then finish within the five-year grant period. So we often want to
measure longer-term outcomes that may or may not be feasible
within the time period of the grant. Sometimes we might say, well, maybe we could have some proxy measures. Instead of actual graduation
rates, we can measure progress toward graduation. That's
something we can measure as a shorter-term outcome that gives us
an indication of those longer-term outcomes, even though we're reporting long-term
outcomes as well, like graduation rates and retention rates. Or
maybe intention to pursue a graduate degree instead of completing a
doctoral program, which clearly couldn't be done within the
time frame of a grant. So those are just some things to think about if
you don't have the time to measure outcomes across the entire span of
the grant. Okay, I'm gonna turn it back over to Steven now, and he'll talk
about our last couple of sections, related to reporting and addressing individual
campus needs. [Steven]: Thanks Rebecca. As Rebecca just said, we're gonna
continue by discussing reporting and campus needs. And with regards to
reporting your results there are many different avenues to report the results
of your programming. First of all, federal reporting is in
most cases mandatory, but it’s also important for maintaining dialogue with
your program officer as evaluation results are used to inform programming
throughout the lifetime of your grant. Publications are an important outlet to
inform the broader dialogue related to the topic and enhance the research
literature about what has worked on your campus and may be applicable at another. Presenting both your program design and
the evaluation findings at a conference is another valuable outlet to inform the
broader dialogue. Conferences are a very important space to network and find
campuses that are working toward similar goals, and you may be able to inform one
another's work. I'm gonna quickly plug the AHSIE conference, the Alliance of
Hispanic-Serving Institution Educators conference, and say that this outlet has
been an incredibly powerful one for the team at Humboldt State University. We
have strategically invited administrators, key stakeholders on
campus, and the directors of critical campus collaborators to attend AHSIE
with us using our grant funds for travel expenses. The energy and passion at
AHSIE has had the effect of invigorating the dialogue on campus, and
our conversations about institutionalization with various
administrators have greatly improved from inviting them along to see
the work that is happening in the field today. I'll also mention that AHSIE is
still accepting proposals to present in spring 2020. So if you have
something to contribute to the broader dialogue, those submissions are due by
October 1st; you'll need to pursue that
avenue soon. Finally, to further the dialogue on your own campus and keep those key
stakeholders informed so they can make the best decision possible, local
reporting is a critical piece of your evaluation design. Having a plan in place
to disseminate the results of your research in a digestible format is
critical and, frankly, just as important as the federal reporting. Now, regarding some
very real challenges: the Department of Education reporting template does not
readily support qualitative data or qualitative results. I think this is
in large part because the reporting tool is intended to be
applicable to a myriad of grants, and the federal legislation asks for
quantitative measures of success. There are, however, two points in the report
where qualitative data could, and if you have it, should be included. The
first is the focus area outcomes, which has a supporting statement dialog box;
in that supporting statement you can speak to the qualitative results
supporting the efficacy of your program. A second place in the federal report is
the project status, where there's an objective narrative, and qualitative
outcomes can be discussed there as well. Another challenge related to reporting is the format and timing of the federal report, which may not seem to provide
information that’s immediately useful or related to program efficacy in regards
to institutionalization on your campus. We found that developing local reports
that are appropriate to the audience and informative to key stakeholders is a
valuable avenue for keeping those stakeholders informed. Related to
that is the fact that key stakeholders often don't have time to read a full evaluation
report or your beautifully published article. In that case, you'll likely
need to develop multiple deliverables, such as infographics, a one- or two-page
executive summary, or even a slide deck from a presentation. Rebecca discussed using your evaluation
findings to inform modifications to the program design. And you may find that
additional data collection systems may need to be developed to inform this
process. Additionally, guidance from your program officer can be invaluable as you
work to balance the needs of your campus or department with your federal
goals and your existing logic model. In my experience at Humboldt State (I can't speak for Rebecca),
our program officers have often been
open to logic model modifications when they are well founded in the initial
results of the evaluation. Your program officer is on your team, and I think
that's an important thing to remember. Your program officer, while managing
a large caseload, is on your team, and with some
careful consideration they'll be able to inform modifications to the program design or logic model if necessary and ensure that those
changes are appropriate and defensible. Now, I'd actually like to
wrap this all up with a very real challenge we faced at Humboldt State and
how we implemented an evaluation technique that seems to have resolved the issue.
We'll see how things play out over time. Some students on campus were displeased that the HSI-STEM funds were not more directly impacting
them financially, in the form of scholarships or offsetting the cost of books.
Now, our initial response was to inform them that federal requirements
prevent the funds from being used to directly finance students. This is a Title
III HSI-STEM grant, and that's one of the stipulations.
As you can imagine, telling them that did little to resolve
the issue, and some of the student body were still very displeased, even going as
far as picketing. We developed some materials, a tri-fold, a one-page summary,
and a presentation that we touted around campus, presenting the program
design and the initial program results. But some voices on campus decried that as
propaganda: us parading around the positive efficacy of the programming
when these students felt there was no impact. We consulted with our Office
of Diversity, Equity, and Inclusion, and they suggested
holding a focus group. Holding that focus group allowed students to voice
their concerns, and it alleviated much of the campus turmoil, largely, we believe,
because it validated their concerns and gave them a place to voice
them. We invited our program participants to
reflect on their experiences and identify both merits of the program and
issues that they encountered within the program. Overall, the results were
resoundingly positive, with very good suggestions for improvement. Since I'm an
internal evaluator and there was concern that an internal evaluation would be
biased, in order for the results to be appropriately perceived as unbiased
we solicited several graduate students in our sociology department to
independently perform a qualitative analysis of the focus group data. Currently
they're finalizing the results, but even just offering the focus group and
inviting participation has eased tensions on our campus dramatically. Once
the qualitative analysis is finalized, we plan to bring the results back to the
students through the Associated Student group, share them with departments across
campus, and use them as the foundation for a potential publication as well.
Finally, we plan to use the results of the focus group to inform our
programming responsibly. The suggestions we received were fantastic,
and we want to share those results in the broader dialogue of academia. So at this
point, I’d like to thank you for your time and attention throughout today’s
presentation. Veronica mentioned earlier that she'll
send out that suggested reading list to everyone, but if you don't see it come
through, feel free to contact her at the email address shown here.
Now I'm pulling up the Q&A; it seems that the few questions we've had have already been
answered. [Rebecca]: This is Rebecca jumping in, Steven. Somebody asked
about the Chen reference, which I actually put in the Q&A box, and they
wanted the recommended reading for theory failure versus
implementation failure. If you're familiar with evaluation, Chen's is the theory-driven
evaluation approach, and that is something I would
definitely recommend everyone read, regardless of the type of project
that you have. So thank you for that question, and that will also be part of the
reading list. [Steven]: Wonderful, thanks Rebecca! [Rebecca]: Sure! [Steven]: I think we could hang out for a
couple of minutes while we wait to see if any other
questions come in for us to answer, but otherwise we may hand it back over to
Veronica. [Veronica]: Alright, thank you Steven and Rebecca. So yes, if
there are any questions for our panelists, please feel free to type them
into the Q&A chat box now. We will wait a few minutes to see if anybody has
a question. As I mentioned earlier, I will be sending out an email with a
detailed reading list, provided by Rebecca and Steven, covering the references
we mentioned here today.
We'll also be sending a short survey for your feedback on today's webinar.
It'll only take a minute or two, so if you see the survey, please take
a second to complete it. Do we have any questions? Someone says, "Great
presentation." Thank you! Also, if you would like to access our
first webinar, which was on the implementation of program components, you
can visit our website and go to the webinars tab, where it is archived.
This webinar will be closed-captioned and made available in a
couple of weeks, and I will also send that out to folks.
Oh we have some questions! May we have the PowerPoint?
Rebecca and Steven will the PowerPoint slides be made available? [Rebecca]: I’m happy to
make them available. [Veronica]: Yeah perfect, I can also send that out to
folks! [Rebecca]: Sure! I have another question, Veronica, that
just came up: how do you handle the evaluation costs at the proposal
stage? I will tell you what we do. This is part of our business development, so
we don't charge for working on the proposal or the evaluation plan, and then
obviously we agree on what the cost of the evaluation will be once it's funded.
If it doesn't get funded, obviously we don't get paid, so we're certainly
motivated to provide the highest-quality work we can to help
our clients get funded. That is why I like to keep the roles separate: I
don't design the program, and I don't think I could. I encourage them to design it the
way they want, but we also want to make sure they are using really solid
rationale and the research literature as much as possible. So
we'll give them feedback to strengthen the program
design at that stage, but we don't charge for any of this; like I said, it's
just part of our business development. Hopefully that answers your question?
Can you speak to the cost related to the type of evaluation? I think this varies
quite a bit; the industry standard for evaluation across multiple different
types of projects, I would say, is usually five to ten percent of direct costs. I
think recently cost has been a little difficult, because some people really
bristle at that, because it can be pretty high. But part of it is
making decisions about what you're going to do in terms of
supporting the program versus evaluating the program. Clearly, we know every dollar
that we take is not a dollar that can go to support students or other program
activities directly. So we try to be really thoughtful and fair
about that, and if you don't have a huge budget,
you can make choices about the types of evaluation activities. I will tell you
there's got to be some kind of minimum
amount that we would work with, because we obviously can't do a really
high-quality job, with thorough reporting and everything else, for like ten or
fifteen thousand dollars. That's just not feasible; we don't feel we
could do a good job for that amount. We have had people tell us they
don't have a lot in the budget and ask whether they could save
money doing certain things, but we like to stay at a certain percentage
because we know the kind of work that goes into doing this. So I don't
know if that answers the question, but that's what I generally would
charge for an external evaluation. Other questions? Another question asks for marketing
suggestions, specifically marketing that will lower costs. I don't know about
lowering costs specifically, but I'm thinking in terms of having a strategy, right?
Something that will not cost you anything is having
your allies on campus be aware of your program and help get the word out;
that kind of support is really invaluable. Like I said, having an advisory
board that meets once or twice a year, made up of key individuals on campus
who can really promote your program and reach out, helps a lot. Obviously, outreach is a
really important part of getting a project off the
ground, and word of mouth matters. I
don't know what's allowable or not, to answer that specifically, but
there's a lot you can do that costs nothing aside from staff
time. I don't know, Steven, if you have anything else in terms of
marketing or getting the word out internally? [Steven]: No, I think we haven't
seen that come up as an issue. When it comes to program marketing,
our program is for first-time freshmen in STEM, being an HSI-STEM grant, and we work closely with admissions and our marketing and communications
department. Initially that was to solicit participation, but as we've grown,
every incoming student in certain programs is automatically in the
program. So we haven't encountered a lot of problems there. [Rebecca]: Okay, other questions? Hopefully
that was helpful. No? Okay. [Veronica]: Okay, it looks like we don't have any more questions.
Again, if you have any questions for me, please send them directly to my email. I
will send everybody a notification when we have this up on our website. Thank you
again, Rebecca and Steven; this was a really great and informative
presentation. So with that, I'm gonna go ahead and end our meeting. Thank you
everyone who participated; hope you have a great rest of the day! [Steven]: Thanks everybody!
[Rebecca]: Thank you! [Veronica]: Okay, have a good day. Bye!