Recently published on Microsoft TechNet (30 October) were the software boundaries and limits for SharePoint 2013. There are some notable additions in this guidance, so please take the time to familiarize yourself with the changes. Some of the guidance did not change, however, and I want to call out a couple of areas in particular, because on pretty much every customer engagement I go on, these areas are not adhered to and in some cases are blown completely out of the water.
List view lookup threshold
8 join operations per query
Specifies the maximum number of joins allowed per query, such as those based on lookup, person/group, or workflow status columns. If the query uses more than eight joins, the operation is blocked. This does not apply to single item operations. When using the maximal view via the object model (by not specifying any view fields), SharePoint will return up to the first eight lookups.
List view threshold
Specifies the maximum number of list or library items that a database operation, such as a query, can process at the same time outside the daily time window set by the administrator during which queries are unrestricted.
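To make the throttle concrete, here is a minimal Python sketch (my own illustration, not SharePoint code) of how the list view threshold behaves: what gets blocked is the number of rows a single database operation has to touch, not the size of the list itself.

```python
DEFAULT_LIST_VIEW_THRESHOLD = 5000  # SharePoint's default soft limit

def run_list_query(items_scanned, threshold=DEFAULT_LIST_VIEW_THRESHOLD,
                   in_daily_time_window=False):
    """Model of the throttle: block any operation that would touch
    more items than the threshold, unless we're inside the admin's
    unrestricted daily time window."""
    if items_scanned > threshold and not in_daily_time_window:
        raise RuntimeError(
            f"Query blocked: operation touches {items_scanned} items, "
            f"threshold is {threshold}")
    return f"query ok ({items_scanned} items)"
```

A view over a 60K-item list gets blocked outside the time window, while the same query passes inside it; raising the threshold just moves the pain to SQL Server, as the story below shows.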
To give an example of what I see customers do:
Notice where I highlighted. These two setting changes will one day provide you with the opportunity to bring your farm to its knees, and in most cases will also cause you to pick up the phone and open a case with Microsoft. It's a case that can easily be avoided. How? By reading the Software Boundaries and Limits for SharePoint 2013. We put this guidance out for a reason; the teams aren't just randomly selecting numbers out of the air and filling in the charts. We do extensive testing on our products with the many different hardware configurations we expect our customers to run. These boundary numbers mark the points where we see performance degradation occur. Also notice that some of these are soft boundaries and some are hard. Hard boundaries are supported stop points and should not be exceeded. Soft boundaries fall in line with the example above: they are throttling limits we set. You can adjust them if you like, but you will face issues down the line. Depending on your hardware you may not run into them for some time past the limit, but I can guarantee that eventually you will.
So let's go through an example of what this customer faced. The background here is that I was there for a week performing a Microsoft RAP (Risk Assessment Program) and this is one of the test cases that we run.
I run through the test cases and start my analysis of the farm's health and risks… I come across a finding that states "List View Threshold is set to a value greater than 5,000." At this point I dig into the details, and what I'm looking for is the number of list items shown. In this case one list had 60K items. Remember, we are not concerned about the number of list items… we're concerned with the view. I then have the customer open Central Admin, and in the Resource Throttling section for the web apps in question, lo and behold, I find this setting was changed. The customer stated that they made this change early in the deployment but didn't know whether anything would be affected. These types of findings let us amplify the importance of RAPs, where we have proactive side-by-side discussions with customers. Many times these discussions surface interesting findings that we can hopefully correct so the customer doesn't face issues in the future. During this conversation the customer mentioned they'd had an outage a few months ago and were about to open a Premier Support ticket, but things suddenly normalized. That was all I needed. This little snippet of info allowed me to show that the issue is real and had already been experienced (though the customer never found the root cause).
Changing these settings exposes you to an issue known as SQL lock escalation. Basically, what happens is this: you have a very large list view in place. You're a member of a team site and you open that large list. You freely view the items in the list and notice no issue. Other users, however, are not so lucky. You've opened that list in an exclusive mode, and others receive an error when they try to view it. But this is just one list. Who cares, right? It gets better… follow along. You finish viewing the list and close it. This is where the real problem occurs. Your SQL processors are now probably spiked, and they stay spiked. Performance has tanked. Why? SQL is trying to remove the lock, and depending on how large that list really is, it can take some time to heal itself. This is a SQL thing… not so much a SharePoint thing. SQL handles queries much better at the numbers we state in the boundaries list. In time the CPUs will normalize, but in many cases the customer panics and calls an outage (which this is). We can avoid this from happening… read on.
What to do about this?
Two methods are recommended
In MOSS, the first option was the preferred choice. However, in SharePoint 2010 and 2013, you still start off with better performance with option 1, but once a list hits around 100K items the two options level off to the same performance results. When I work with my customers, depending on how many lists are affected, I generally tell them to go with option 2 because it's pretty simple to implement. Option 1 will take quite a bit of work moving items into their desired folders, and there will need to be some planning with regard to naming conventions.
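Whichever option you pick, any code that has to walk a big list should page through it in batches below the threshold instead of pulling everything in one query. A hedged Python sketch of that batching pattern (generic logic, not a SharePoint API; in real code each range would become a paged query with a row limit):

```python
def fetch_in_batches(total_items, batch_size=2000):
    """Yield (start, end) index ranges that each stay well under the
    5,000-item list view threshold, covering the whole list."""
    start = 0
    while start < total_items:
        end = min(start + batch_size, total_items)
        yield (start, end)
        start = end
```

For the 60K-item list from the story above and a batch size of 2,000, this yields 30 ranges, each comfortably under the 5,000-item limit, so no single operation trips the throttle.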
Don't believe me? Have a look at the guidance here: http://technet.microsoft.com/en-us/library/cc262813.aspx#Throttling (for those running SharePoint 2013, this guidance is still relevant).
Hope this avoids issues for someone in the future.
So there I was. Sitting all alone, eating my emergency rations of Peanut Butter Crunch cereal, writing some work reports, and watching Netflix on my Xbox. I knew my hometown (Brigantine, NJ) and the surrounding shore areas were already getting hit hard by Sandy, but I live inland near Princeton, so I figured… eh, I'm safe, this won't affect me. Microsoft cancelled work due to the approaching storm (thanks for caring). I had all my devices: laptop, Windows Phone, Duracell charger, Verizon MiFi, and my newly acquired Microsoft Surface.
Lights were blazing… work was getting done… the movie was somewhat interesting… then it happened. Click… pop… the power went out. So I'm figuring… OK dude, this is just a quick thing. After about 15 minutes of sitting in the dark and silence I start to think… OK dude, you're a moron, go find the candles. So the candles got lit, I started up the fireplace, and I settled in for what turned out to be 27 hours of darkness. One by one, as the hours went on, my devices started to die off: first the laptop (which is due to be replaced soon, I hope), then the Duracell charger went bye-bye, then the phone started to bleed down (it did last quite a while, I'm happy to say). What I was left with were my new Surface and the MiFi, which lasted the entire time (and still had plenty of juice). The Surface functioned flawlessly for a number of scenarios.
All in all I am quite pleased with the device. I was worried about its size for screen viewing (bad eyes) and about the Touch keyboard (I have mastered this keyboard, and though I will still be the fat-finger king, I am quite comfortable typing on it).
Most important during this time though was the battery life. I could have easily gone another day or two on this.
Big thanks to my company (Microsoft) for putting out such a solid product. Very pleased with this device.
FYI… I purchased this on my own dime. This was not a company-purchased device :)
If you haven't gotten one yet go have a look at them. The advertisements are one thing but you need to pick one up and feel it to fully appreciate it.
In SharePoint 2010 we introduced FIM, which acts as a broker of sorts when bringing profiles from AD into SharePoint. One of the key reasons for this addition was to allow companies not only to pull AD info into SharePoint but also to push information from SharePoint back to AD. This was a key change from Microsoft Office SharePoint Server 2007, or MOSS 2007 as we generally called it, where the synchronization method was a very simple one-way pull from AD that didn't allow a lot of flexibility.
One of the pains felt with the User Profile Service Application (or UPSA, UPA, Dir Syncer, or whatever folks like to call it) was that it required more steps to configure than just going into the UI and clicking a couple of buttons, as we admin types are generally fond of doing. Early in the process my buddy Spence wrote the manifesto on how to properly create this. There was a lot of bad guidance floating around the webosphere on how to do it, and pretty much all of it was wrong. As much as it pains me to admit (I don't want to inflate an ego), one of the questions I would always ask my customers when they deployed SharePoint 2010 went a little like this:
Me – Oh great, it looks like you have every Service Application provisioned. (This, 99% of the time, tells me that they used the FCW (Farm Configuration Wizard) to deploy the farm. More to come on that another time, but it's a bad plan for production farms. Fine for dev or test, but prod… please don't do this.)
Customer – Yep it was simple but I can't seem to get the User Profile thing working
Me – did you follow any guidance out on the web for this?
Customer – No I just let SharePoint handle it
Me – rubbing my hand through my hair wondering what a proper response would be while continuing to be a Trusted Advisor……
Me – You know, we have a couple of excellent documents out there that detail what needs to be done to configure this: namely TechNet, or my buddy with the silly purple page. In fact, if you follow that guide step by step you will always have a successful experience deploying the UPA.
Getting back on track, the point of this post is to inform you that those guides are still fully relevant today in SharePoint 2013. There is, however, one exception.
Along with the previously mentioned method of profile synchronization, we have also brought back the earlier style of import… it's called Active Directory Direct import; think of it as the lightweight import method.
The way to enable this is not right in front of your face, though. When I first looked at it, I assumed the functionality would be added at the point where you create your profile sync, but it lives in a different area. I'll show both SharePoint 2010 and 2013 in the following screenshots:
SharePoint 2010 (notice the arrow pointing to Configure Sync Settings)
SharePoint 2013 (Notice that the screens look remarkably similar)
SharePoint 2010 (Configure Sync Settings)
SharePoint 2013 (Configure Sync Settings)
As I am illustrating here, this is where you differentiate between the two methods. By default this is set to use FIM, but if you choose to go with AD Direct import, this is the location where you set it.
One additional thing to note: in the previous version we had to grant Replicate Directory Changes via Delegate Control in Active Directory. This is still a necessary step for either method you choose here.
Additional Note: Spence informed me that he recently put out an updated article that can be found here
<A Microsoft PFE's Note from the Field>
So, in my day-to-day SharePoint support endeavors, one of the classic cases that comes in from time to time is customers trying to implement Kerberos in their farms. One of the issues I see pop up a lot is duplicate SPNs.
The classic method (via the command line) is to run the following command to create an SPN:
setspn -A http/<FQDN or NetBIOS name> DOMAIN\usercredential
For example: setspn -A http/connect.sharepoint.com sharepoint\content1
This creates an SPN that can be used to issue a valid Kerberos ticket for the http/connect.sharepoint.com site. Easy stuff, and you can verify it by going into ADSI Edit and opening the properties of the content1 service account.
But how do we end up with duplicate SPNs? Usually it comes down to poor planning before you ever turn on your computer. Say, as an example (and one I see often), you didn't think this through and plan your AuthN properly. Or say you just made a simple "fat finger" mistake or had a brain cramp… it happens. For illustration purposes, let's say you created the SPN above but for some reason ran the setspn command again using a different account. Voila! That's where we get in trouble. In my example I created two SPN entries pointing to the same site but used two different service accounts: SPContent and SPContent1. As you can see in the screenshot below, an Event ID 11 is thrown, and if you read the text it tells me exactly what I need to know to go fix the issue.
So is there a way to verify this before I run the command? Absolutely. If you enter setspn -X, you will get a list of any duplicates that exist in your environment.
Let's take this one step further and combine the -A and the -X into a single fat-finger-proof method of creating our SPNs.
Operation ABORTED!!! If we look at the above output, we're shown the same information as setspn -X, but combined with the ability to add an SPN. In both of these outputs you can clearly see that in my demonstration I tried to create an SPN for the same site, http/connect.sharepoint.com, but used separate accounts. Also note that I can use the same account for multiple hosts if I choose; in this case I'm using spcontent1 for both http/connect and http/my.
So, in a perfect world, when you are doing this work and you don't have any duplicates, you can either run setspn -X initially to see if any duplicates exist, or do what I always do: use setspn -S and cover everything in one sweep. In the example above, if there were no duplicates, the process would finish successfully and presto chango, your SPNs are created.
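For the curious, the duplicate check that setspn -X (and -S) performs boils down to spotting the same SPN string registered on more than one account. A Python sketch of that logic (my own illustration, not how setspn is implemented; the SPNs and accounts are the ones from the example above):

```python
from collections import defaultdict

def find_duplicate_spns(registrations):
    """registrations: iterable of (spn, account) pairs.
    Returns {spn: sorted accounts} for every SPN registered on more
    than one account -- the condition that breaks Kerberos tickets."""
    by_spn = defaultdict(set)
    for spn, account in registrations:
        # SPN matching is case-insensitive, so normalize before comparing
        by_spn[spn.lower()].add(account.lower())
    return {spn: sorted(accounts) for spn, accounts in by_spn.items()
            if len(accounts) > 1}
```

Feeding it my example flags http/connect.sharepoint.com (registered to both spcontent and spcontent1), while http/my.sharepoint.com, registered to a single account, passes clean.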
Hope this helps someone out there :) Cheers!
One question I get asked a lot by customers is about patching their SharePoint environments. In general, you have always been able to install the bits on all your servers in any order you like. The real question, or the issue I keep finding, is the completion step: running the Configuration Wizard. Some customers went so far as to run the installer to load the bits on their boxes but only ran the Configuration Wizard on one box… or, in a larger farm, they simply missed a box. This puts your environment at risk.
In SharePoint 2010 I still like to hit the main server running Central Administration first with the Configuration Wizard (or PSConfig, if you prefer the command line), but there is nothing stopping you from going to the other boxes and kicking off the Config Wizard while the first is running. What happens is that the other boxes go into a locked state for the upgrade. Once one box completes, the others, in the order they started the Config Wizard, run through the 10-step upgrade. As you can see in the screenshot below, this particular box is waiting for the lock to release before continuing.
The one real takeaway: in a multi-server environment, make sure you run this on every server.
Building a brand-new SharePoint 2010 environment? Want to cut down the time it takes to bring your environment up to date with regard to versioning? Slipstream your build.
In the following section I am going to detail how I build my slipstreams. There are other posts out there that are similar and may even include shortcuts, such as copying the bits directly to the Updates folder, but I like to be a little more hands-on, and in reality it does not take a lot of effort.
Download the SharePoint 2010 RTM Bits (http://technet.microsoft.com/en-us/evalcenter/ee388573)
I shorten the names of the downloaded files before I begin, to something like SP10.exe and SFSSP1; this will make the typing a bit easier.
Download the SP1 Bits.
Although it's technically not necessary, I still download and incorporate both the SharePoint and the SFS SP1 bits into my slipstream.
Same as with the RTM bits, I shorten the names of these files to help with typing: SFSP1.exe and OFFSP1.exe.
Download the CU's
Let's get busy
So far, that was the hardest part of the operation. What I do next is create four folders and extract the bits for each package into them.
SP1 – folder names: OFFSP1, SFSSP1
Command I run: sp1.exe /extract:c:\OFFSP1 (where sp1.exe is the renamed executable and OFFSP1 is the folder I created and am extracting to).
I will do this for the Cumulative Updates as well.
Once this is complete, the SharePoint 2010 installation folder has a folder called Updates. I simply copy and paste the contents of each of the folders I extracted directly into Updates.
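The copy-into-Updates step can be scripted. A hedged Python sketch (the folder layout mirrors my example; treat the paths as placeholders for your own):

```python
import shutil
from pathlib import Path

def slipstream(install_root, extracted_dirs):
    """Copy the contents of each extracted update folder into the
    Updates folder of the SharePoint installation source."""
    updates = Path(install_root) / "Updates"
    updates.mkdir(exist_ok=True)
    copied = []
    for src in extracted_dirs:
        for item in Path(src).iterdir():
            dest = updates / item.name
            if item.is_dir():
                shutil.copytree(item, dest, dirs_exist_ok=True)
            else:
                shutil.copy2(item, dest)
            copied.append(item.name)
    return copied
```

Called as, say, slipstream(r"c:\SP10", [r"c:\OFFSP1", r"c:\SFSSP1"]), it merges each extracted package into the installation source's Updates folder in one pass.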
This concludes the slipstream procedure. Once complete, you can go forth and install. Your environment will be up to date, and you won't have to go through the process of individually installing patches and then remembering whether you hit every server in the process… it happens (unfortunately, very often).
Additional reference information can be found here at my buddy Spence's site http://www.harbar.net/archive/2011/06/30/327.aspx
Came across this today. My buddy Ryan Campbell wrote up a nice piece that I felt could use some more exposure.
Virtualization, the SAN and why one big RAID 5 array is wrong
Have a read... it's a good 'un.
In my day-to-day work as a PFE, I engage quite often with customers, either in sessions on upgrading to SharePoint 2010 or performing SPRAPs (SharePoint Risk Assessment Program) on environments. When performing either of these, I tend to come across a lot of Configuration DB orphans. When running the command stsadm -o preupgradecheck on a MOSS 2007 (SP2) environment, you may see the following output:
Failed : Orphaned site collections
An orphaned site collection is a site collection that exists in the content database but is not in the configuration site map. Such site collections are not accessible and will not be upgraded properly. The following orphaned site collections were found:
This further goes on to tell you how to fix the issue in the next line you would see in the output report:
Try detaching and re-attaching the content databases to fix the orphaned site collections.
So you may ask… well, how did I get these orphans anyway, and how will they affect my upgrade? To answer the upgrade question: it shouldn't. Why? Because now you have more than enough information to correct this issue before you even attempt the upgrade :). To be perfectly honest, the only way I can see this really causing a problem is if you do an in-place upgrade, because with a vanilla in-place upgrade we are not detaching our databases and reattaching them to a new farm (a much preferred method, by the way).
To answer the question of how they got there… well, there could be a number of reasons. Duplicate URLs, or duplicate host headers not being in the site map. It could also be something as simple as a site deletion within a site collection that appeared to succeed when the site owner made the change but threw an error at the end… you're left with a site that is still registered in the SharePoint Config DB but not something you can see or open. We're left with something I call a ghost URL.
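Conceptually, an orphan is just a set difference: site collections present in the content database but missing from the configuration site map. A tiny Python sketch of that check (my own illustration; the URLs are made up):

```python
def find_orphaned_sites(content_db_sites, config_sitemap):
    """Return site collections that exist in the content DB but are
    not registered in the configuration site map -- the 'ghost URLs'."""
    return sorted(set(content_db_sites) - set(config_sitemap))
```

For example, if the content DB lists /sites/a, /sites/b, and /sites/ghost but the site map only knows about /sites/a and /sites/b, the check flags /sites/ghost as the orphan.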
The takeaway here is twofold. First, it's not difficult to clear this flag. Second, when you are planning an upgrade you really want the preupgradecheck report to come back clean. Take the time to analyze this report deeply and start actively correcting the issues found. Once you clear errors, re-run preupgradecheck to see if the changes are reflected. I have done upgrade talks where we spent half a day just going through this report and then came up with a plan for mitigation.
Some really decent videos were just published on TechNet showing how MSIT does SharePoint:
Managing a Parallel Upgrade to SharePoint 2010
This information has been posted by the Product Group regarding some issues occurring in SharePoint farms that have implemented the October Cumulative Updates.
Here is the Warning
Here is the workaround