Fire updates for friends and family #waldocanyonfire

Definitely worse in a bad way.

I’m still in Seattle (good and bad, right?), son is here too working.

Dave, Jack, 2 dogs and 2 cats are leaving, heading south to Albuquerque.  Loading the car now.  They have not been put on notice, but since they can leave, they are, before the official call comes and all chaos ensues.  Pets have a reservation for boarding for two weeks and Dave and Jack are flying here to Seattle on Friday.  At some point we have to get back to Albuquerque to get our pets and our car.  I am NOT moving to Albuquerque, even if our house burns to the ground.

Evacuation zone is less than a mile from our home.  Our son just got his reverse 911 call to get ready to leave.  No other real words to say other than it’s some pretty scary shit going on.  Nothing we can do but stay out of the way and run like hell the other way.

Many friends have been evacuated and several have likely lost homes, over 100 gone so far.  Fire tripled in size in 24 hours, over 15k acres now.  This afternoon/evening is expected to be near apocalyptic, winds and lightning expected, wind has been hurricane force.

Our cabin was just evacuated, but WE ARE NOT THERE or close.  They just announced that neighborhood on the news for mandatory evacuation.

The best sources of news so far have been general Twitter traffic on the #waldocanyonfire hashtag. I follow #csgazette, which has a pretty good feed and rollup of updates. The live feed is great, especially since I am so far away.

I am trying to keep emotions at bay, they won’t serve me well just yet.

Can’t show whole fire area and evac stuff on a single image anymore, but here’s where we are now.  Blue is evac zones, red is fire as of a couple hours ago.  Arrow on top is our house.  Arrow on bottom is our son’s apartment.



Importing that flat data will be easy peasy, I promise!

Yea, famous last words.

(I promise a happy ending, read on)

We do lots of BIG things around here, but we also do some small gigs. We do some volunteer/cheap CRM admin for some non-profits. We do little CRM admin jobs for our extended friends/family group. It is one of those small groups that decided to go all in with CRM (yea!) but needed to bring their data from a legacy system into CRM and onto marketing lists to use with their new implementation of Click Dimensions.

Client says I’ve got everything you need in CSVs. I confirm the data really is as flat as can be, and it really truly is. All rows go in as Leads. We have a selection of contact detail fields like name, email, phone, addresses and such. Sometimes just an email address. We had a date unsubscribed on the records too. No history needed. Oh, and these are the lists they reside on. Let’s get them on the lists in CRM too.

Knowing legacy systems that are a paid service offering, they have nothing to gain by making it easy to retrieve your data, and this was no different. So each list comes out as a separate CSV with no concept of a single row existing on anything other than THIS list, when in reality we have tens of thousands that resided on more than one list. I logged in to the legacy system and poked around and couldn’t find any way to get an aggregate list with the fields I needed. Ok, we can make this work.

So this is the collection of files I get, one file per line here:

Main list (all on this one):  76,679 members
List 1:  1,441 members
List 2:  251 members
List 3:  781 members
List 4:  1,649 members
List 5:  1,981 members
List 6:  16,136 members
List 7:  23,006 members

After a couple of hours wasted trying to get Excel to merge/manipulate the records, my genius of an 18-year-old son spends 10 minutes writing a quick little program to merge it all together with a yes/no column for each separate list (proud and annoyed all at the same time, of course). I discovered a trick a few years back: if you add that yes/no column on the records you import for each list, you can then build the list in Advanced Find based on that bit. No need to even clutter the form with it, just get the data into CRM to make your lists. More details on that here.
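
I never saw his code, but a minimal sketch of that merge might look like the following. The column names and sample data are made up for illustration (the real files had ~76k rows), and I’m assuming email is the key that identifies a record across files:

```python
import csv
from io import StringIO

def merge_lists(main_csv, list_csvs):
    """Merge the main-list CSV with per-list CSVs into one table.

    main_csv:  text of the master CSV (every record is on this one).
    list_csvs: dict of {list_name: csv_text}; a row's Email appearing
               in a list file marks membership.
    Returns the main rows with one Yes/No column added per list.
    """
    rows = list(csv.DictReader(StringIO(main_csv)))
    for name, text in list_csvs.items():
        members = {r["Email"].strip().lower()
                   for r in csv.DictReader(StringIO(text))}
        for row in rows:
            row[name] = "Yes" if row["Email"].strip().lower() in members else "No"
    return rows

# Tiny made-up sample instead of the real files:
main = "Email,First Name\na@x.com,Ann\nb@x.com,Bob\nc@x.com,\n"
lists = {"List 1": "Email\na@x.com\nc@x.com\n"}
merged = merge_lists(main, lists)
# merged[0] -> {'Email': 'a@x.com', 'First Name': 'Ann', 'List 1': 'Yes'}
```

From there you write the merged rows back out to a single CSV for the import wizard, and the yes/no columns feed the Advanced Find trick above.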

Next I turn off the workflow that sends notifications when new Leads are assigned (can you imagine 76,693 notifications?!). This is a very commonly used workflow, so look before you start an import (and turn it back on later). I did add a workflow, triggered on create of Lead, to set Do Not Bulk Email to Yes if the unsubscribed date field from the file contained data.

I quickly discover that my file is too big and split it in two, one with 35,000 Leads, the other with 41,693 Leads. And now GO!
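
Splitting is the easy part. A generic sketch (made-up sample data; note I actually split mine unevenly rather than in equal chunks) that repeats the header row in each output file:

```python
import csv
from io import StringIO

def split_csv(text, max_rows):
    """Split one CSV into chunks of at most max_rows data rows,
    repeating the header line in each chunk."""
    reader = csv.reader(StringIO(text))
    header = next(reader)
    rows = list(reader)
    chunks = []
    for i in range(0, len(rows), max_rows):
        out = StringIO()
        writer = csv.writer(out)
        writer.writerow(header)          # every chunk keeps the header
        writer.writerows(rows[i:i + max_rows])
        chunks.append(out.getvalue())
    return chunks

# Toy example: 5 data rows with a cap of 3 yields two files
sample = "Email\n" + "".join(f"r{i}@x.com\n" for i in range(5))
chunks = split_csv(sample, 3)
# len(chunks) -> 2
```

Each chunk then goes through the import wizard as its own file.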

And wait.  It actually didn’t take too long to get all that into CRM, about 90 minutes or so.  We had 14 duplicates.  Not bad.  No other errors.

I make all my lists from above and start filling them, starting with the small ones and spot checking data for accuracy.  All looking good so far.

Then I get to the BIG list.  Snag.  Can’t make a static list in CRM Online of more than 50,000 records.  Ideally we need this to be a static list, not dynamic.  So I start asking around, looking around, trying to find a way around this.  I finally cave and open a support ticket.  Like most MVPs I avoid them at all costs.  It is very frustrating to still have to go through the first x-number of tiers of support when you know that you have already covered the basics or you wouldn’t have opened a ticket to begin with. (next paragraph is a bit of a whine-fest about my support experience, feel free to scroll past it, you’ll be fine)

ETA:  I forgot a step I tried: taking the dynamic list I had made and copying it to a static one. That did not work either.

I will say that the folks there have endless patience with people like me, but I wish they knew more about the product I needed support on, not just cursory knowledge of several products and some scripts to follow. I am a fan of off-shoring your work when it makes sense, so it’s not that. I did spend a few conversations making the support guy understand that the data was already in CRM; I was no longer importing or exporting anything. I needed to build a marketing list, an item you learn on Day 1 of CRM user training. Once that hurdle was passed, they needed to reproduce my problem. Ugh. No, it’s not broken, it’s a system limitation. Can you change this setting (AggregateQueryRecordLimit) please? The support guy tried and said things worked for him and his sample set of 3,000 records. Of course it did; 3,000 is ok, 76,679 is not. But his biggest sample data set was 3,000, so he didn’t know what to do. (I told him to turn off duplicate detection and re-import the same records over and over.) I then asked for someone who had more product knowledge and might be at least familiar with the scope of my issue. Ok, I got someone who knew what I was talking about, but he didn’t know anything about tweaking that setting for me. I suggested he ask someone; he acted as if that was not common but said he’d get back to me.

In the meantime, I am emailing some folks I know, and I know some pretty smart people. Several were not even aware of the 50,000 limit, and I couldn’t really even find it published anywhere online. As a band-aid for the client, I make a dynamic list that holds everyone we need on the list. Not ideal, but a band-aid.

The original support guy calls back, with his supervisor coaching him on what to say, and tells me there’s a fix coming up and this org has not had its updates yet, so that’s why it won’t work. I said that I was not familiar with a fix that would address this and asked if they could explain more. They couldn’t give me any more details. It very much felt like a “go away lady” response.

Then one of my other smart friends replies to me and confirms that yes, there is something coming up in the next update that will “address” this.  It is not a fix.  It is a make-do solution.

It would seem that the query for a static list had been using FetchXML, and that had a hard top limit of 50,000 records. The update contained a hotfix that changed this to use SQL instead, which allowed adding 50,000 at a time, with the aggregate limit going up to cover my 76,679 records. Just not in a single query; I had to do a series of smaller ones to get the whole list. Not perfect, but I can work with that.

So the org got updated last night (woohoo) and I could now make a list of 76,679 but had to break it down somehow to come in at less than 50k per batch added.

Remember, the ideal scenario would be to build the list using that bit query (all leads where master list = yes).  But that was out since we know that would return 76,679 records.

On the import job record you have the option of adding imported records to a marketing list, one screen at a time. So if I have 250 records per page, that’s 307 pages of scrolling. Not real practical.

Since for some of these records the only contact data is an email address, I figured I would slice and dice based on some of the potentially empty fields.

So, round 1…all leads from this date where first name contains data.

Round 2…all leads from this date where email contains data and first name contains no data.

That got me 76,656 on my list.

What I really wanted for round 2 was where first name contains no data and where on this list equals no.  Not possible.

So, we have a net loss of 23 records for this list. That’s a loss of well under half a percent (actually about 0.03%). Client and I both agree that is acceptable.
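
Put another way, the two rounds partition the Leads like this (a toy sketch of the logic, not actual CRM query syntax), and the only records that slip through both rounds are the ones with neither a first name nor an email:

```python
def round1(lead):
    # Round 1: first name contains data
    return bool(lead.get("first_name"))

def round2(lead):
    # Round 2: email contains data AND first name contains no data
    return bool(lead.get("email")) and not lead.get("first_name")

# Made-up sample Leads
leads = [
    {"first_name": "Ann", "email": "a@x.com"},   # caught by round 1
    {"first_name": "", "email": "b@x.com"},      # caught by round 2
    {"first_name": "", "email": ""},             # falls through both
]
captured = [l for l in leads if round1(l) or round2(l)]
# len(captured) -> 2; the third record is one of the "lost 23" kind
```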

(How did I find the total number on the list? I did an Advanced Find on marketing lists to show where the number of members is greater than x, and after running the query a dozen times, narrowing down from an educated guess, I got the right number.)
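
For what it’s worth, that guess-and-check is basically a binary search. A toy sketch, with the is_greater_than callback standing in for the “number of members greater than x” Advanced Find query:

```python
def find_count(is_greater_than, lo=0, hi=100_000):
    """Find an exact count when all you can ask is
    'is the count greater than x?'.  Binary search needs
    about 17 queries to cover a 100k range, worst case."""
    while lo < hi:
        mid = (lo + hi) // 2
        if is_greater_than(mid):
            lo = mid + 1   # count is above mid; search the top half
        else:
            hi = mid       # count is mid or below; search the bottom half
    return lo

# Simulate the Advanced Find oracle against a known count:
true_count = 76_656
assert find_count(lambda x: true_count > x) == 76_656
```

A dozen educated guesses got me there too; binary search just guarantees it.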

I do feel compelled to say that I followed up with some folks I know at support and they were eager for my feedback and training tips and said they would take it to the group that does the first couple tiers of support to help make support a better experience.

“official” hotfix text from the product team:


We have a maximum limit imposed on the aggregate query which restricts the number of records to 50000(by default). This check is there for the fetch xml based queries. When we add members to the marketing list(by following the repro steps) we use fetch xml based query to find out the total number of members this marketing list will have after the addition of the currently selected members. If the total number of members comes out to be larger than allowed(which is 50,000 by default), an error is thrown and addition of selected members to the list is skipped. 


Used SQL query to retrieve the total members in the marketing list after the addition of currently selected members. Since this query is not fetch xml based it does not undergo the check for maximum allowed limit and hence user can have more than 50,000 members in the marketing list at a given time.