It has got to be one of the most puzzling aspects of the MIM/FIM world: the MIM MA export sometimes appears to have a life of its own! When you first build the MA and run your first major export, you wait to see if it all goes through cleanly; if it does, you say phew!
Alright, let's talk about some best-practice guidelines that will bring you closer to an error-free export.
1/ There are some attributes that the MIM MA checks for uniqueness. If these values are duplicated, you will get errors.
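A quick pre-export sanity check can catch these duplicates before MIM does. Here is a minimal sketch in Python; the `AccountName` attribute and the sample rows are purely illustrative, so substitute whichever attributes your MIM MA enforces uniqueness on.

```python
from collections import Counter

def find_duplicates(rows, attr):
    """Return the values of `attr` that appear on more than one row."""
    counts = Counter(row[attr] for row in rows)
    return [value for value, n in counts.items() if n > 1]

# Hypothetical staging data; in practice, load this from your source system.
rows = [
    {"AccountName": "jdoe"},
    {"AccountName": "asmith"},
    {"AccountName": "jdoe"},   # duplicate that would fail the uniqueness check
]

print(find_duplicates(rows, "AccountName"))  # ['jdoe']
```

Running this against your staging data before the export run lets you fix duplicates in the source instead of chasing export errors afterwards.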
2/ On the deprovisioning page of your MIM MA, set it to stage a delete on the next export when the object is disconnected from the Metaverse object. One of the worst mistakes is to set it to make them disconnectors: if you export the same object again down the line, you will get "record already exists" errors. To that end, when you clean up the MV object, make sure you also clean up the MIM MA object, both in the connector space and in the Portal (via export).
3/ Some attributes have validation and are also bound to an object type with validation, e.g. EmployeeType. So if you are exporting an employeeType value that is not in the out-of-the-box configuration, make sure you update both the attribute and the binding. Also check your data and make sure each value is exact, with no leading or trailing spaces; e.g. "Volunteer" is not "Volunteer ".
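A small validation pass over the source data can distinguish a genuinely unknown value from one that merely has stray whitespace. This is a sketch, assuming you maintain your own list of allowed values; the set below is just an example, not the OOB MIM configuration.

```python
def check_value(value, allowed):
    """Classify a candidate attribute value against an allowed set."""
    if value in allowed:
        return "ok"
    if value.strip() in allowed:
        # The trimmed value is valid, so the problem is leading/trailing spaces.
        return "whitespace mismatch"
    return "not in allowed set"

# Illustrative allowed values; replace with the bindings you actually configured.
allowed_employee_types = {"Contractor", "Intern", "Volunteer"}

print(check_value("Volunteer", allowed_employee_types))    # ok
print(check_value("Volunteer ", allowed_employee_types))   # whitespace mismatch
print(check_value("Casual", allowed_employee_types))       # not in allowed set
```

The "whitespace mismatch" case is the one that tends to slip through visual inspection, since "Volunteer " looks identical to "Volunteer" in most tools.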
4/ The MIM MA is sensitive to non-ASCII characters: hidden characters like carriage returns, foreign characters, etc. I have a blog post on that.
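Because these characters are invisible in most editors, a small scan of the data is the easiest way to surface them. A minimal sketch; the sample string is purely illustrative.

```python
def find_suspect_chars(value):
    """Return (index, character) pairs for non-ASCII or control characters,
    which are the kinds of hidden values the MIM MA tends to choke on."""
    return [
        (i, c) for i, c in enumerate(value)
        if ord(c) > 127 or (ord(c) < 32 and c != "\t")
    ]

# "Renée" contains an accented character, and "\r" is a hidden carriage return.
print(find_suspect_chars("Renée\r"))  # [(3, 'é'), (5, '\r')]
print(find_suspect_chars("plain ascii"))  # []
```

Run it over each exported attribute value and log any hits along with the object identifier, so you can clean the source record before the export stages it.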
5/ Composite processing. If you are getting SQL deadlock errors, MIM is bunching too many changes together for the SQL server. The way MIM works for exports is that when it has a lot of pending exports, it compacts them into batches instead of processing them one at a time, to make things faster. A lot depends on the number of changes you are making at one time: a batch of 200 objects × 15 attributes equals 3,000 changes bunched together. SQL Server places this into memory for faster processing, but if your SQL server doesn't have much power, you will get a lot of deadlock errors, especially during the initial export. You can change the export mode to synchronous (one at a time) and that should work, but if you have 300k records it will probably take 24 hours (at least for the initial load). I recommend giving the SQL server more horsepower so you don't have to turn off asynchronous processing.
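The arithmetic above is worth keeping handy when you size the SQL server. A tiny sketch of the calculation, with the numbers from the example (treat both as illustrative, since your batch sizes and attribute counts will differ):

```python
# Rough estimate of how many changes one composite export batch bundles together.
objects_per_batch = 200    # objects MIM groups into a single batch (example value)
attrs_per_object = 15      # attribute changes per object (example value)

changes_per_batch = objects_per_batch * attrs_per_object
print(changes_per_batch)   # 3000 changes hitting SQL Server in one go
```

The larger that product gets, the more memory and lock contention each batch generates, which is why an underpowered SQL server starts deadlocking precisely during the big initial export.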