15:55:51 <Necrophorus> ... not sure if I want to try ...
15:56:24 <SarahStierch> Just a reminder - in about 5 minutes we're going to start the first Program Evaluation & Design office hour! With fschulenburg, janstee, and myself!
16:00:05 <SarahStierch> Good morning! Welcome to the first Program Evaluation & Design office hours!
16:00:10 <SarahStierch> We're a new team here at the Wikimedia Foundation - and we're focused on helping chapters and individual community members evaluate programs - like content donations, edit-a-thons, workshops, the education program, etc.
16:00:11 <SarahStierch> I'm going to quickly introduce our team, and I'll be facilitating questions - so if you have questions I'll be your go-to person! So please be respectful of not "chatting over" one another and let's have an awesome time "Geeking out" about PE&D!
16:01:12 <SarahStierch> First, welcome @fschulenburg! Some of you may know Frank from the Wikipedia Education Program. He's a German Wikipedia editor and Commonist. He is the Senior Director of Programs at the Wikimedia Foundation.
16:02:11 <SarahStierch> Second, welcome @janstee - Dr. Jaime Anstee - you can call her Jaime, though I like to call her Dr. ;) - Jaime is our newest team member, and she's the Program Evaluation Specialist. You want to know how to evaluate your workshop, or how to write a great survey to get as much information from participants as possible to see if they will edit after your workshop? She's the go-to goddess for all things evaluation!!
16:02:21 * JanAinali_WMSE says Hello! to Frank
16:02:21 <Martin_Rulsch> hi Senior Frank :P
16:02:40 <fschulenburg> hehe :-)
16:02:46 <fschulenburg> moin Martin_Rulsch
16:03:03 <fschulenburg> hi JanAinali_WMSE
16:03:06 <fschulenburg> :-)
16:03:22 <SarahStierch> And third, ME! I'm the Community Coordinator for the project. Long-time Wikipedian, first-time staff member. I'll be your go-to person when you have questions, when you're doing an event and want us to come do a workshop, and when you need documentation. I also help maintain meta, write blog posts, and handle all things communication and people and wikilove!
16:03:36 * Necrophorus also greets Frank, Jaime and Sarah!
16:03:46 <SarahStierch> Hello everyone :) W00t!
16:03:49 <fschulenburg> hi Necrophorus
16:04:09 <T13|needsCoffee> Hello everyone.
16:04:10 <SarahStierch> And for future reference
16:04:20 <fschulenburg> hi T13|needsCoffee
16:04:29 <SarahStierch> Welcome T13|needsCoffee - we are just doing introductions of our team
16:04:52 <SarahStierch> That's our hub on meta for the time being! Where you can find resources, cool stuff, and interact with other "program leaders" who are into evaluation.
16:04:55 <SarahStierch> Hi raystorm_ !
16:05:13 <SarahStierch> OK! Now I'm going to pass it over to fschulenburg. He's going to introduce PE&D and why we are here.
16:05:19 * SarahStierch opens the curtains for Frank
16:05:25 * SarahStierch shines the spotlight
16:05:31 <SarahStierch> mic check
16:05:32 <SarahStierch> 1 2 3
16:05:35 <fschulenburg> ooohhh
16:05:43 <Achim_Raschka> prepare eggs
16:05:47 * SarahStierch pokes Achim_Raschka
16:05:50 <fschulenburg> *lol*
16:05:58 * SarahStierch pushes fschulenburg to the mic
16:06:03 <fschulenburg> Hello everybody. It's great to see so many familiar faces here. I'm thrilled to see the level of interest that people have in program evaluation.
16:06:18 * Celio_Brazil says: Hy everybody! PED on Meta was already bookmarked. ;)
16:06:25 <SarahStierch> Hey Celio_Brazil ~~
16:06:32 <Celio_Brazil> I mean: Hi! :)
16:06:38 <SarahStierch> Hola :)
16:06:41 <fschulenburg> Let me start with a quick introduction into how this new team will be operating and how we're planning to support the people who're running programs. And – just as a reminder – when we're talking about "programs", we're referring to things like Wiki Loves Monuments, Edit-a-thons, Wikipedia editing workshops, online writing competitions, etc.
16:07:13 <fschulenburg> Our general assumption is that there are already many awesome programmatic activities, and evaluation will help to make these even more awesome. So, if someone asked me why program evaluation is important, this would be my first answer: because evaluation can drive the impact of your program. By closely monitoring the status and the outcome of your programmatic activity, you'll know whether you reached your goals or not. And that's an important precondition for improving things.
16:08:00 <fschulenburg> Other than that, program evaluation can also help you to decide what to spend your time, energy and resources on. Here's an example: if you know that a specific program is really good at growing the amount of featured pictures on Commons and your goal is to improve Commons' quality, then that's possibly the right program for what you want to achieve. In this sense, program evaluation can be used as a tool for decision-making.
16:08:13 <psychoslave> Hello
16:09:13 <SarahStierch> Welcome psychoslave! Frank is doing an introduction about program evaluation and design :)
16:09:19 <fschulenburg> Finally, program evaluation also helps you with being accountable to the people who provide you with resources. When I say "resources", most people might think of the FDC and the fact that the FDC asked last year for more information about which programs have the most impact. But I'm actually not only thinking of money. My take on accountability is also to think of the many hours of free time that people spend on programmatic work. I guess it would be highly rewarding if those volunteers knew that what they're doing has the maximum amount of impact.
16:10:02 <psychoslave> Can I find logs somewhere to read what I missed ?
16:10:42 <fschulenburg> Now, when we started to think about program evaluation, we had to decide how we wanted to approach this: should the Foundation create a list of existing programs and then systematically run the numbers to see what has the biggest impact? Should our team in San Francisco decide which metrics are the best? – Our answer was no. Instead, we decided to do it differently: by focusing on capacity building (workshops, online materials, video chats, etc.), we want to enable the people who're running programs to self-evaluate. Also: they're the ones who will decide which metrics are best suited to find out whether the program reached its goals.
16:10:47 <SarahStierch> Thanks sumanah
16:10:57 <psychoslave> thank you :)
16:11:23 <fschulenburg> Well, I guess it's obvious: top-down approaches don't fit into our culture. But there's also another reason why we think a self-evaluation approach is better. And that brings me back to my first point: evaluation is a very powerful tool to improve things. And the more people in our movement know about evaluation, the more we increase the likelihood of growing the impact that our programs have.
16:11:28 <fschulenburg> (Done)
16:11:51 <SarahStierch> Thanks fschulenburg!
16:11:52 <T13|needsCoffee> psychoslave: you can replace the /logs/ with /html/ and .txt with .htm for a little easier to read version.
16:12:40 <SarahStierch> We're pretty excited about the opportunities that PE&D holds - and we're eager to expand the network of community members we're assisting, so this is a great chance to help engage more people to get excited about evaluation...
16:12:41 <SarahStierch> Ok..
16:13:29 <SarahStierch> Next, janstee is going to talk briefly about "Appreciative Inquiry", which is a perspective that we're taking in our own team work, in the work we do in evaluation, and as a way of "thinking" that we're trying to develop for the community we're building around evaluation. Take it away Jaime!
16:14:03 <Martin_Rulsch> so you intend capacity building? how do you want to realize that?
16:14:27 <SarahStierch> OK Martin, we'll let Jaime do her AI sharing briefly and then we'll take your first question :)
16:14:36 <Martin_Rulsch> sure
16:14:50 <SarahStierch> ai = appreciative inquiry (not artificial intelligence in this matter)
16:14:58 <SarahStierch> heh
16:15:05 <fschulenburg> :-)
16:15:14 <janstee> So what is Appreciative Inquiry? To understand it, it is first important to understand that evaluation can often simply take a Discrepancy Perspective, in which evaluation is seen as a process for identifying and measuring differences and inconsistencies between what is and what should be, and trying to fix "problems"
16:16:12 <janstee> An Appreciative Inquiry Perspective, on the other hand, sees evaluation as a process for engaging people across the programming system in order to build and grow programs and projects around what works, rather than exclusively focusing on trying to fix what doesn’t.
16:17:25 <janstee> Using a combination of both perspectives allows us to ensure that we not only identify areas for improvement (important too) but that we identify and share the things that work!
16:19:33 <janstee> So for example, instead of asking the question of "why do 80% of program participants not continue contributing to Wikimedia projects 6 months later", we might ask, "Why do the 20% who do contribute 6 months later do so?".
16:19:53 <SarahStierch> That's awesome Jaime. That's a great example.
16:20:01 <SarahStierch> So it builds relationships…everyone gets a voice…and it's positive!
16:20:20 <SarahStierch> And we all know how good we are in the movement at focusing on the negative, right? So we're trying to flip that on its head.
16:20:28 <SarahStierch> OK! Let's get to Martin_Rulsch's question: "so you intend capacity building? how do you want to realize that?"
16:20:30 <SarahStierch> fschulenburg: will reply
16:20:41 <fschulenburg> Yes, give me a second.
16:22:00 <fschulenburg> Martin, I think the first step is to build a common understanding of the basic concepts of evaluation
16:22:14 <fschulenburg> Also, to start using a shared language
16:22:31 <fschulenburg> so, if e.g. Jaime talks about "inputs" and "outcomes"
16:23:01 <fschulenburg> everybody who's participating in the conversation should know what she's talking about
16:23:04 <Martin_Rulsch> what are the next ones?
16:23:32 <fschulenburg> the next one is that we'll enable people to pull their own data, design their own surveys, etc.
16:24:09 <Martin_Rulsch> how? what people?
16:24:10 <fschulenburg> this week, our analytics team provided us with the first beta version of the User Metrics API. They're also calling it WikiMetrics
16:25:05 <fschulenburg> people who are interested in doing evaluation. I guess that's mostly the people who're running programs
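(A minimal sketch of the kind of metric a tool like WikiMetrics is meant to make easy - e.g. how many participants are still editing six months on, as in Jaime's 20% example. The data layout and function below are illustrative assumptions, not the actual User Metrics API:)

    from datetime import datetime, timedelta

    # Hypothetical data: registration date and edit timestamps per participant.
    # This is not the WikiMetrics API - just an illustration of the metric itself.
    users = {
        "Alice": {"registered": datetime(2013, 1, 10),
                  "edits": [datetime(2013, 1, 11), datetime(2013, 7, 20)]},
        "Bob":   {"registered": datetime(2013, 1, 12),
                  "edits": [datetime(2013, 1, 13)]},
    }

    def retained(user, months=6):
        """True if the user edited at least once `months` months after registering."""
        cutoff = user["registered"] + timedelta(days=30 * months)
        return any(ts >= cutoff for ts in user["edits"])

    retained_count = sum(retained(u) for u in users.values())
    print(f"{retained_count}/{len(users)} participants still editing after 6 months")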
16:29:05 <SarahStierch> We're working on that, and we hope community members will contribute as well
16:29:13 <fschulenburg> JanAinali_WMSE: absolutely. we're also going to add links to books, etc.
16:29:51 <T13|needsCoffee> Would this PE&D program include some kind of train the trainers application?
16:29:57 <SarahStierch> Great question T13|needsCoffee
16:30:07 <Martin_Rulsch> and how will you enable the community and program managers to use these things?
16:30:07 <JanAinali_WMSE> My idea is, there must be a lot of surveys used by chapters already for all sorts of things. Collecting these might be a good start
16:30:12 <LauraHale> I would like to ask a question as well. :) What sort of analysis and tools are going to be used in terms of measuring real ROI? IE, has the group come up with ideal benchmarks regarding community time spent engaging in outreach as a function of new editor participation? What metrics are being used to assess the value for the money spent on programming? Is there an ideal such as, say, US$100 spent on recruiting editors who participate over the course of three months and make 1,000+ edits?
16:30:13 <psychoslave> Will we be able to do things like "provide different templated versions of the same content and evaluate the user retention impact?"
16:30:17 <SarahStierch> OK!
16:30:21 <SarahStierch> I'm going to gather the questions :)
16:30:28 <SarahStierch> Let's wrap up discussing capacity building, and then we'll move on
16:31:03 <fschulenburg> Martin_Rulsch: I'm not entirely sure if I understand your question
16:31:08 <SarahStierch> Martin_Rulsch: - Do you mean how will we support community and program managers - through mentoring or resources or tech support?
16:31:21 <T13|needsCoffee> A growing concern lately has been people trying to mentor/adopt before going through, or as soon as they are done with, initial adoption themselves.
16:31:38 <Martin_Rulsch> a page on meta does not make people use it
16:31:44 <SarahStierch> We're still working on Martin_Rulsch's question.
16:31:49 <SarahStierch> Then we'll go to T13|needsCoffee next :)
16:32:09 <Martin_Rulsch> I see the workshops in Budapest and Hong Kong … what else is planned to get people to go there and use these tools?
16:32:30 <fschulenburg> Martin_Rulsch: actually, so far, we've seen a lot of interest around evaluation
16:32:34 <T13|needsCoffee> I've seen it at the teahouse, afc, -en-help, adopt-a-user, etc...
16:32:48 <Achim_Raschka> Martin_Rulsch: a page on meta doesn't - but maybe if some people start - for example me with the festival summer - it could spread more and more
16:33:19 <fschulenburg> In Hong Kong, we will be doing several things: e.g. we'll also have a shared booth with the grantmaking team
16:33:24 <Achim_Raschka> and as Frank said: there has been a lot of interest in evaluation in recent years
16:33:44 <fschulenburg> and we'll be available for people to discuss things and to answer questions
16:34:11 <fschulenburg> Martin_Rulsch: also, I guess that good communication practices will help, e.g.
16:34:42 <fschulenburg> once we get more blog posts out that highlight how people are actually using evaluation as a means to drive impact
16:35:05 <fschulenburg> other people might be encouraged to follow that example
16:35:28 <fschulenburg> To sum it up, I'm not concerned about a lack of interest ;-)
16:35:40 <SarahStierch> OK! Our next question is from T13|needsCoffee
16:35:47 <SarahStierch> T13|needsCoffee: Would this PE&D program include some kind of train the trainers application?
16:36:07 <T13|needsCoffee> :)
16:36:31 <SarahStierch> janstee is going to take that!
16:36:51 <SarahStierch> (then we'll get to LauraHale's question!)
16:37:35 <Martin_Rulsch> one last comment: I'd consider spreading the word about these tools and help pages on project pages where the project managers have not yet evaluated their work
16:37:39 <janstee> The training targets those who lead programs, so that they may share the information and naturally train others in their work, gaining community involvement and input within their own programming context.
16:38:26 <fschulenburg> Martin_Rulsch: yes. thanks. that's a good point. we're always happy about this kind of feedback.
16:38:48 <SarahStierch> Does that help T13|needsCoffee ?
16:39:05 <SarahStierch> Oops, janstee might have a bit more to say..
16:39:05 <T13|needsCoffee> Yeah, a little.
16:39:09 <janstee> So yes, we are basically following train the trainers model as we work with people running programs and provide them with resources to grow evaluation.
16:39:25 <T13|needsCoffee> Ahh. Awesome. :)
16:39:32 <psychoslave> Staying on the same question, and providing a more concrete example: I launched a community decision on fr.wikiversity about box design (hoping to propose something pretty that will retain users), and some people are concerned that this homogenization work may disappoint readers and editors. Could we define a set of box templates and evaluate which one seems to give the best results for reader/editor engagement?
16:39:34 <SarahStierch> janstee is creating training modules, and we'll be doing all kinds of workshops online and off
16:39:35 <SarahStierch> OK
16:39:37 <SarahStierch> Next question
16:39:51 <SarahStierch> LauraHale: I would like to ask a question as well. What sort of analysis and tools are going to be used in terms of measuring real ROI? IE, has the group come up with ideal benchmarks regarding community time spent engaging in outreach as a function of new editor participation? What metrics are being used to assess the value for the money spent on programming? Is there an ideal such as, say, US$100 spent on recruiting editors who participate over the course of three months and make 1,000+ edits?
16:40:06 <SarahStierch> janstee will reply!
16:41:13 <SarahStierch> psychoslave: we'll get to your questions shortly :)
16:41:13 <sumanah> I sort of want to follow up on T13|needsCoffee's question -- I think there are dozens of people who start and run programs but don't realize that they are doing "programs" and aren't participating in these conversations yet. Maybe sending a global message to the talk page of everyone on outreach.wikimedia.org would be good? :)
16:41:17 * sumanah sits happily in queue
16:42:02 <janstee> In terms of how investments and impacts are weighed in a Return-on-Investment model - we are researching different practices for valuation and will continue to work with the community of program leaders to identify, and come to agreement on, how to prioritize the ways these things contribute to the measurement model.
16:42:17 <psychoslave> SarahStierch: I didn't mean to insist, I see that you answer questions as they come, I just wanted to add more specific information
16:43:23 <LauraHale> janstee: If I have done an ROI analysis of a program privately, can I share it with you privately for feedback? My whole thesis is basically along the lines of data analysis of this sort, and the ROI component is new. I'd love feedback to see if I should do more of that as it pertains to programming work for TWG.
16:43:25 <janstee> We will work to both define how program investments (staff and volunteer time, budget, etc.) are monitored and tracked as well as how success is measured.
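(A toy version of the cost-per-outcome arithmetic behind Laura's US$100 benchmark question - every figure below is a placeholder, not a benchmark the team has agreed on:)

    # Illustrative cost-effectiveness calculation for a single, hypothetical workshop.
    budget_usd = 500.0        # direct spending on the event
    staff_hours = 10          # staff time invested
    volunteer_hours = 40      # volunteer time invested
    hourly_value_usd = 25.0   # assumed valuation of one hour of time

    total_investment = budget_usd + (staff_hours + volunteer_hours) * hourly_value_usd

    retained_editors = 4      # participants still editing six months later
    edits_by_retained = 1200  # edits those participants made

    print(f"Total investment: ${total_investment:.2f}")
    print(f"Cost per retained editor: ${total_investment / retained_editors:.2f}")
    print(f"Cost per edit: ${total_investment / edits_by_retained:.2f}")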
16:43:46 <Achim_Raschka> A question that arose several times: Do you have approaches to evaluate uncountable effects like user satisfaction, impact on quality, fun (!!!), political and framework effects ....
16:44:04 <SarahStierch> LauraHale: What is TWG?
16:44:16 <raystorm_> Achim_Raschka, like satisfaction surveys?
16:44:47 <LauraHale> SarahStierch: The Wikinewsie Group, which is in the process of trying to get thematic org recognition
16:45:01 <janstee> Laura: That would be great to check out - my email is janstee@wikimedia.org
16:45:03 <SarahStierch> ah ok! thanks
16:45:15 <SarahStierch> ok psychoslave your query is next.
16:45:18 <Martin_Rulsch> Achim_Raschka: as far as I understand the problem, the PE&D team will not evaluate (a lot) by themselves but wants to enable program managers to self-evaluate their work in a better way
16:45:27 <Martin_Rulsch> *problem -> project
16:45:33 <LauraHale> janstee: Thank you. Will send you an e-mail in a minute or two. :)
16:45:39 <fschulenburg> Martin_Rulsch: :-)
16:45:49 <SarahStierch> :P
16:45:57 <SarahStierch> classic irc office hours slip up ;)
16:46:01 <SarahStierch> ok psychoslave
16:46:14 <Achim_Raschka> Martin_Rulsch: yes, I know - but it would help to have some sets of ideas for measuring the unmeasurable - for myself as a programme leader
16:47:13 <SarahStierch> psychoslave: Staying on the same question, and providing a more concrete example: I launched a community decision on fr.wikiversity about box design (hoping to propose something pretty that will retain users), and some people are concerned that this homogenization work may disappoint readers and editors. Could we define a set of box templates and evaluate which one seems to give the best results for reader/editor engagement?
16:47:58 <fschulenburg> psychoslave: I guess your question is: will the Foundation be able to provide people with dedicated support for A/B-testing
16:48:06 <fschulenburg> psychoslave: is that correct?
16:49:16 <psychoslave> Well, can you define A/B-testing, please?
16:49:23 <fschulenburg> sure
16:49:35 <fschulenburg> A/B-testing in your case would work like this:
16:49:37 <SarahStierch> (And fschulenburg will have to leave us in a couple of minutes, so y'all will be stuck with janstee and me for the last ten minutes ;) )
16:49:49 <fschulenburg> 50% of the users will get infobox A
16:50:02 <fschulenburg> and the other 50% of the users will get infobox B
16:50:07 <psychoslave> Yes, that's it.
16:50:31 <fschulenburg> you'd set up a tracking tool that would measure how many users keep editing after seeing either A or B
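(A minimal sketch of that 50/50 split - deterministically bucket each user by a hash of their username, then compare retention between the two groups. The tracking data here is invented:)

    import hashlib

    def variant(username: str) -> str:
        """Deterministically assign a user to infobox A or B (roughly 50/50)."""
        digest = hashlib.sha256(username.encode("utf-8")).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    # Hypothetical tracking data: did the user keep editing after seeing the box?
    kept_editing = {"Alice": True, "Bob": False, "Carol": True, "Dave": True}

    groups = {"A": [], "B": []}
    for user, kept in kept_editing.items():
        groups[variant(user)].append(kept)

    for name, outcomes in groups.items():
        rate = sum(outcomes) / len(outcomes) if outcomes else float("nan")
        print(f"Infobox {name}: {len(outcomes)} users, retention {rate:.0%}")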
16:52:19 <fschulenburg> at this point, they're somewhat constrained. we just got a new director of analytics in
16:52:20 <SarahStierch> and they are hiring, too ;)
16:52:26 <fschulenburg> and they'll need some time to build capacity
16:52:30 <SarahStierch> OK Achim_Raschka: A question that arose several times: Do you have approaches to evaluate uncountable effects like user satisfaction, impact on quality, fun (!!!), political and framework effects ....
16:52:43 <SarahStierch> janstee will take that
16:52:55 <SarahStierch> Achim_Raschka are you suggesting surveys or..?
16:53:16 <LauraHale> I posted research related to program metrics to the Analytics mailing list and it got zero feedback. As a place for community-driven research and community (not developer) attempts to do A/B testing, it probably is not ideal?
16:53:22 <Achim_Raschka> best would be without them - though in some cases they could work
16:54:26 <raystorm_> there are qualitative tools for measuring subjective things like satisfaction and fun
16:54:30 <SarahStierch> We can discuss analytics concerns afterwards
16:54:32 <SarahStierch> Please? :)
16:54:56 <LauraHale> :)
16:54:59 <sumanah> nod
16:55:43 <fschulenburg> Ok, I have to run. SarahStierch and janstee will continue to answer questions. Thanks a lot for everybody being here. And also for your questions. See you next time.
16:55:48 <fschulenburg> Bye
16:56:03 <SarahStierch> Thanks fschulenburg !!! janstee is responding right now to Achim_Raschka and raystorm_
16:56:05 <janstee> Other than direct inquiry with your participants, it is hard for me to imagine how you would measure satisfaction, other than via assumptions and assessment of participants' productivity before and after an event. We are working to identify scalable methods for online survey data collection that would work for the myriad programming cases we hope to support evaluation in.
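(A small sketch of the local survey analysis this points toward - summarizing a 1-5 satisfaction item such as Achim's "fun" question. The responses are invented:)

    # Hypothetical answers to "How much fun was the event?" on a 1-5 scale.
    responses = [5, 4, 4, 3, 5, 2, 4]

    mean = sum(responses) / len(responses)
    top_box = sum(1 for r in responses if r >= 4) / len(responses)
    print(f"n={len(responses)}, mean satisfaction {mean:.2f}/5, "
          f"{top_box:.0%} rated it 4 or 5")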
16:57:22 <raystorm_> that would be fantastic
16:57:28 <LauraHale> sumanah: No feedback or even acknowledgement, period. There appears to be a disconnect between development metrics (with a focus on SQL database skills) and the community's ability to self-assess, which currently runs only through developers. The other option is survey research, which is problematic given self-selecting populations.
16:57:59 <janstee> As far as political and framework effects, generally qualitative tracking of efforts and changes in conditions can be monitored through various means, but it becomes further removed from direct observation of cause and effect - which is why we try to identify the tangible units of change along the pathway to the impact or change in condition.
16:58:16 * sumanah PMs with Laura
16:58:54 <janstee> Does that help to answer your question Achim?
16:59:15 <Achim_Raschka> o.k. - so best would be surveys - where the questions are usable for surveys - and for the rest of the questions, working with assumptions to test.
16:59:39 <SarahStierch> And yup, janstee will be developing fabulous training modules on how to do great surveys and so forth.
16:59:47 <SarahStierch> It takes time, but the team is working on it - it's very exciting.
16:59:52 <SarahStierch> OK!
16:59:58 <raystorm_> program leaders would be taught how to run the scalable method for online survey data collection, then?
17:00:00 <SarahStierch> That wraps up the first Program Evaluation & Design Office Hours
17:00:05 <Achim_Raschka> the most interesting question for me always is "Did all participants have as much fun as possible"
17:00:13 <SarahStierch> janstee can respond quickly to raystorm_
17:00:18 * SarahStierch nods at Achim_Raschka
17:00:25 <raystorm_> appreciate it :)
17:00:29 <SarahStierch> It's an important thing - we don't do this because it's boring, right? :)
17:00:30 <Ainali> Is it a good idea to have one survey for e.g. edit-a-thons across all chapters?
17:01:06 <SarahStierch> We'll wrap up this survey conversation and that'll signify the end of office hours for today.
17:02:02 <janstee> It is in the big picture that we will work to identify the most feasible way for program leaders to, yes, post, collect, and analyze programmatic surveys… this is something that will require both growing needs assessment around the survey tools and shareable items, and generating the capacity for a feasible, scalable mode of doing the collection and analysis at each program's local level.
17:03:40 <psychoslave> thank you for your answers
17:03:41 <sumanah> LauraHale and I have clarified in PM the thing about feedback-getting :)
17:03:56 <janstee> The shareable items will make up the programmatic survey recommendations =)
17:04:04 <SarahStierch> OK!
17:04:10 <raystorm_> that sounds great, thank you!
17:04:52 <SarahStierch> (it's an announcement list)
17:04:58 <SarahStierch> We also have an IRC room that is rather quiet right now at #wikimedia-ped
17:05:01 <SarahStierch> Quick survey:
17:05:04 <SarahStierch> (ooh evaluation!)
17:05:07 <raystorm_> thanks for your time answering questions, all of you :)
17:05:08 <SarahStierch> How often would you like to see us do this?
17:05:16 <SarahStierch> Every other week or once a month, for example?
17:05:16 <raystorm_> every day? :P
17:05:19 <SarahStierch> LOL!
17:05:22 <raystorm_> ^^
17:05:22 <sumanah> about every other month
17:05:33 <SarahStierch> let's hear from non-staff please :)
17:06:00 <Achim_Raschka> once a month could be o.k.
17:06:02 <sumanah> fair. also want to note it would be great if it could be in partnership with a person from Analytics to help answer any tech questions that come up about WikiMetrics etc.
17:06:07 <raystorm_> once a week
17:06:25 <raystorm_> at least the first weeks? so many questions to ask...
17:06:36 <Ainali> but not on Friday evenings :)
17:06:36 <SarahStierch> how about this: janstee and I are going to do our best to hang out on IRC more often - I know I need to, I miss talking to my friends on here… so let's make sure we hang out in the #wikimedia-ped room
17:06:36 <raystorm_> an hour goes by so quickly! :)
17:06:38 <SarahStierch> and you can email us too <3
17:06:41 <SarahStierch> absolutely
17:06:45 <SarahStierch> it totally does!
17:06:53 <SarahStierch> we'll schedule a formal meeting once a month
17:07:01 <raystorm_> nice compromise!
17:07:03 <SarahStierch> but do our best, and of course we're always working on meta
17:07:22 <SarahStierch> yay we have raystorm_ in our room now!
17:07:23 <SarahStierch> LOL
17:07:36 <LauraHale> Once a week or every other week would be nice.
17:07:39 <SarahStierch> OK everyone: This was great. You'll find a log on the office hours page on meta, and we can't wait to connect at Wikimania too - you'll find us at the Learning & Evaluation booth
17:07:48 <SarahStierch> We'll find a happy medium
17:07:54 <LauraHale> Especially if it looks like the work will have any tie-in to FDC funding, as program evaluation gets tied into funding.
17:07:55 <Achim_Raschka> o.k. - I have to leave my office now - it's Friday evening and later than seven ;)