Friday, December 20, 2013

December 2013 CU for SharePoint 2013 has been released

Microsoft has released the December 2013 CU for SharePoint 2013.

According to Stefan Goßner, there appear to be two files to download, due to size, as opposed to the single, self-extracting compressed file used in the past. There also appears to be a major issue with this CU and PerformancePoint Dashboard Designer.

Please read carefully before deploying

ATTENTION:

Previous releases of the SharePoint Server 2013 cumulative update included both the executable and the .CAB file in the same self-extracting executable download. Because of the file size, the SharePoint Server 2013 package has been divided into two separate downloads. One contains the executable file (identified as ubersrv2013kb2850024fullfilex64glb), while the other contains the .CAB file (identified as ubersrv_1). Both are necessary and must be extracted to the same folder to successfully install the update. Both are available by clicking the same Hotfix Download Available link in the KB article for the release.

See the KB article of the SharePoint Server CU for more details.

KNOWN ISSUE:

After installing the SharePoint Server 2013 December 2013 CU, PerformancePoint Dashboard Designer no longer loads. When you try to open Dashboard Designer in SharePoint 2013, you receive an error message. You should not install the SharePoint Server 2013 CU listed below if you need PerformancePoint Dashboard Designer.

The KB articles for December CU will be available at the following locations in a couple of days:
  • KB 2849961 - SharePoint Foundation 2013 December 2013 CU
  • KB 2850024 - SharePoint Server 2013 December 2013 CU
  • KB (delayed) - SharePoint Server 2013 with Project Server December 2013 CU
  • KB 2850013 - Office Web Apps Server 2013 December 2013 CU
The full server packages for the December 2013 CU are available through the Hotfix Download Available links in the KB articles listed above.
After installing the fixes, you need to run the SharePoint 2013 Products Configuration Wizard on each machine in the farm.
Be aware that the SharePoint Server 2013 CU contains the SharePoint Foundation CU, and that the SharePoint Server 2013 with Project Server CU contains the Project Server CU, the SharePoint Server CU, and the SharePoint Foundation CU.
That means only one package has to be installed for the SharePoint 2013 product family.
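
If you prefer to script that step rather than click through the wizard UI, the equivalent command is PSConfig.exe from the SharePoint 2013 (15) hive. A minimal sketch, assuming a default installation path; verify the switches against the KB guidance for your farm before running it in production:

    # Run on each server in the farm after the CU binaries have been installed.
    # Performs the same build-to-build upgrade as the Products Configuration Wizard.
    & "$env:CommonProgramFiles\microsoft shared\Web Server Extensions\15\BIN\PSConfig.exe" `
        -cmd upgrade -inplace b2b -wait -force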

See the KB article of the SharePoint Server CU for more details.

Tuesday, December 17, 2013

F5 Load Balancing Methods / Algorithms

Using the default load balancing method

The default load balancing method for the LTM system is Round Robin, which simply passes each new connection request to the next server in line. All other load balancing methods take server capacity and/or status into consideration.

If the equipment that you are load balancing is roughly equal in processing speed and memory, Round Robin mode works well in most configurations. If you want to use the Round Robin method, you can skip the remainder of this section, and begin configuring other pool settings that you want to add to the basic pool configuration.

Selecting a load balancing method

If you are working with servers that differ significantly in processing speed and memory, you may want to switch to one of the Ratio or dynamic methods.

Round Robin

This is the default load balancing method. Round Robin mode passes each new connection request to the next server in line, eventually distributing connections evenly across the array of machines being load balanced. Round Robin mode works well in most configurations, especially if the equipment that you are load balancing is roughly equal in processing speed and memory.
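
To make the behavior concrete, here is a tiny PowerShell sketch of the idea, using made-up member names (purely illustrative; this is not F5 code):

    # Hand each new connection to the next member in line, wrapping around at the end.
    $pool = 'web01', 'web02', 'web03'
    $next = 0
    1..6 | ForEach-Object {
        $pool[$next % $pool.Count]
        $next++
    }
    # Output: web01, web02, web03, web01, web02, web03
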
Ratio (member) and Ratio (node)

The LTM system distributes connections among machines according to ratio weights that you define, so that the number of connections each machine receives over time is proportional to its assigned weight. These are static load balancing methods, basing distribution on user-assigned ratio weights that are proportional to the capacity of the servers.

Regarding Ratio load balancing:

Load balancing calculations may be localized to each pool (member-based calculation) or they may apply to all pools of which a server is a member (node-based calculation). This distinction is especially important with the Ratio method; with the Ratio (member) method, the actual ratio weight is a member setting in the pool definition, whereas with the Ratio (node) method, the ratio weight is a setting of the node.

The default ratio setting for any node is 1. If you use the Ratio (node) load balancing method (as opposed to Ratio (member)), you must set a ratio other than 1 for at least one node in the configuration. If you do not change at least one ratio setting, the load balancing method has the same effect as the Round Robin load balancing method.

Warning: If you set the load balancing method to Ratio (node), as opposed to Ratio (member), you must define a ratio setting for each node.
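
As a rough illustration of member-based ratio weights, the following PowerShell sketch spreads twelve connections across three hypothetical members weighted 3:2:1 (names and weights are invented; LTM does this internally):

    # Each member appears in the schedule as many times as its ratio weight,
    # so over time it receives a proportional share of the connections.
    $members = @(
        [pscustomobject]@{ Name = 'web01'; Ratio = 3 },
        [pscustomobject]@{ Name = 'web02'; Ratio = 2 },
        [pscustomobject]@{ Name = 'web03'; Ratio = 1 }
    )
    $schedule = foreach ($m in $members) { 1..$m.Ratio | ForEach-Object { $m.Name } }
    $next = 0
    1..12 | ForEach-Object { $schedule[$next % $schedule.Count]; $next++ }
    # web01 receives 6 of the 12 connections, web02 receives 4, and web03 receives 2.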

Dynamic Ratio

The Dynamic Ratio method is like the Ratio method except that ratio weights are based on continuous monitoring of the servers and are therefore continually changing.
This is a dynamic load balancing method, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time.

The Dynamic Ratio method is used specifically for load balancing traffic to RealNetworks® RealSystem® Server platforms, Windows® platforms equipped with Windows Management Instrumentation (WMI), or any server equipped with an SNMP agent such as the UC Davis SNMP agent or Windows 2000 Server SNMP agent. To implement Dynamic Ratio load balancing, you must first install and configure the necessary server software for these systems, and then install the appropriate performance monitor. For more information, see Appendix A, Additional Monitor Considerations.
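
The key difference from the static Ratio methods is that the weight is recalculated from live monitor data. A simplified PowerShell sketch of that idea, using an invented CPU metric and an invented weighting rule (the real weights come from the performance monitors mentioned above):

    # Derive a weight from a live metric instead of a static, user-assigned value.
    $nodes = @(
        [pscustomobject]@{ Name = 'web01'; CpuPercent = 80 },
        [pscustomobject]@{ Name = 'web02'; CpuPercent = 20 }
    )
    $nodes | ForEach-Object {
        # Busier servers get a smaller weight; idle servers get a larger one.
        $_ | Add-Member -NotePropertyName DynamicRatio `
                        -NotePropertyValue ([math]::Max(1, 100 - $_.CpuPercent)) -PassThru
    }
    # web02 ends up with a weight of 80 versus 20 for web01, so it receives roughly four
    # times as many connections until the next monitoring sample changes the weights.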

Fastest (node) and Fastest (application)

The Fastest methods pass a new connection based on the fastest response of all currently active nodes. These methods may be particularly useful in environments where nodes are distributed across different logical networks. Load balancing calculations may be localized to each pool (member-based calculation) or they may apply to all pools of which a server is a member (node-based calculation).
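
In code terms, the selection amounts to picking the active node with the lowest measured response time. A small PowerShell sketch with made-up values:

    $nodes = @(
        [pscustomobject]@{ Name = 'web01'; ResponseTimeMs = 120 },
        [pscustomobject]@{ Name = 'web02'; ResponseTimeMs = 35 },
        [pscustomobject]@{ Name = 'web03'; ResponseTimeMs = 80 }
    )
    # The next connection goes to the fastest responder, web02 in this example.
    ($nodes | Sort-Object ResponseTimeMs | Select-Object -First 1).Name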

Least Connections (member) and Least Connections (node)

The Least Connections methods are relatively simple in that the LTM system passes a new connection to the node that has the least number of current connections. Least Connections methods work best in environments where the servers or other equipment you are load balancing have similar capabilities.

These are dynamic load balancing methods, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time.

Load balancing calculations may be localized to each pool (member-based calculation) or they may apply to all pools of which a server is a member (node-based calculation).
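
A minimal PowerShell sketch of the member-based selection, with invented connection counts:

    $members = @(
        [pscustomobject]@{ Name = 'web01'; CurrentConnections = 42 },
        [pscustomobject]@{ Name = 'web02'; CurrentConnections = 17 },
        [pscustomobject]@{ Name = 'web03'; CurrentConnections = 23 }
    )
    # The next connection goes to the member with the fewest current connections: web02.
    ($members | Sort-Object CurrentConnections | Select-Object -First 1).Name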

Observed (member) and Observed (node)

The Observed methods use a combination of the logic used in the Least Connections and Fastest modes. With the Observed methods, nodes are ranked based on a combination of the number of current connections and the response time. Nodes that have a better balance of fewest connections and fastest response time receive a greater proportion of the connections. The Observed modes also work well in any environment, but may be particularly useful in environments where node performance varies significantly.

These are dynamic load balancing methods, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time.

Load balancing calculations may be localized to each pool (member-based calculation) or they may apply to all pools of which a server is a member (node-based calculation).
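
F5 does not publish the exact ranking formula, but one plausible way to combine the two signals looks like the following PowerShell sketch (names, values, and the scoring are assumptions made purely for illustration):

    $members = @(
        [pscustomobject]@{ Name = 'web01'; CurrentConnections = 40; ResponseTimeMs = 50 },
        [pscustomobject]@{ Name = 'web02'; CurrentConnections = 10; ResponseTimeMs = 200 },
        [pscustomobject]@{ Name = 'web03'; CurrentConnections = 15; ResponseTimeMs = 60 }
    )
    # Lower is better for both inputs, so a simple additive score serves as the rank.
    $ranked = $members |
        Select-Object Name, @{ Name = 'Score'; Expression = { $_.CurrentConnections + $_.ResponseTimeMs } }
    # web03 has the best balance of few connections and fast responses, so it ranks first.
    ($ranked | Sort-Object Score | Select-Object -First 1).Name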

Predictive (member) and Predictive (node)

The Predictive methods also use the ranking methods used by the Observed methods, where nodes are rated according to a combination of the number of current connections and the response time. However, with the Predictive methods, the LTM system analyzes the trend of the ranking over time, determining whether a node's performance is currently improving or declining. The nodes with better performance rankings that are currently improving, rather than declining, receive a higher proportion of the connections. The Predictive methods work well in any environment.

The Predictive methods are dynamic load balancing methods, distributing connections based on various aspects of real-time server performance analysis, such as the current number of connections per node or the fastest node response time.

Load balancing calculations may be localized to each pool (member-based calculation) or they may apply to all pools of which a server is a member (node-based calculation).
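
Extending the Observed sketch above, the Predictive idea is to compare each member's current rank score with an earlier one and prefer members whose score is trending in the right direction (all values and the scoring are invented for illustration):

    $history = @(
        [pscustomobject]@{ Name = 'web01'; PreviousScore = 120; CurrentScore = 90 },  # improving
        [pscustomobject]@{ Name = 'web02'; PreviousScore = 70; CurrentScore = 95 }    # declining
    )
    # Sort by the trend (current minus previous, lower is better), then by the current score.
    $best = $history |
        Sort-Object @{ Expression = { $_.CurrentScore - $_.PreviousScore } }, CurrentScore |
        Select-Object -First 1
    $best.Name   # web01: its score is improving, so it receives a higher share of connections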

SharePoint 2013 Load Balancing Affinity / Persistence / Sticky Sessions

I remember hearing that SharePoint 2013 no longer requires sticky sessions (affinity/persistence) in the load balancing solution. This is due to the new Distributed Cache Service, which now hosts the login tokens. See more below.


Improvements in claims infrastructure

SharePoint 2013 also includes the following improvements in claims authentication infrastructure:
  • Easier migration from classic mode to Windows-based claims mode with the new Convert-SPWebApplication Windows PowerShell cmdlet
    Migration can be run against each content database and each web application, in contrast to SharePoint 2010 Products, in which the migration was run against each web application. For more information, see Migrate from classic-mode to claims-based authentication in SharePoint 2013; a short command sketch follows this list.
  • Login tokens are now cached in the new Distributed Cache Service
    SharePoint 2013 uses a new Distributed Cache Service to cache login tokens. In SharePoint 2010 Products, the login token is stored in the memory of each web front-end server, so each time a user accesses a specific web front-end server, the user needs to authenticate. If you use network load balancers in front of your web front-ends, users need to authenticate against each web front-end server that is accessed behind the load balancer, causing possible multiple re-authentications. To avoid re-authentication and its delay, it is recommended to enable and configure load balancer affinity (also known as sticky sessions). Because SharePoint 2013 stores the login tokens in the Distributed Cache Service, configuring affinity in your load balancing solution is no longer required. There are also scale-out benefits and lower memory utilization on the web front-ends because of a dedicated cache service.
  • More logging makes the troubleshooting of authentication issues easier
    SharePoint 2013 has much more logging to help you troubleshoot authentication issues. Examples of enhanced logging support are the following:
    • Separate, categorized claims-related logs for each authentication mode
    • Information about adding and removing FedAuth cookies from the Distributed Cache Service
    • Information about the reason why a FedAuth cookie could not be used, such as a cookie expiration or a failure to decrypt
    • Information about where authentication requests are redirected
    • Information about the failures of user migration in a specific site collection
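
For reference, here is a hedged PowerShell sketch covering two of the items above: converting a web application from classic mode to claims, and confirming that the Distributed Cache service instance is online before you remove affinity from your load balancer. The web application URL is a placeholder; run this from the SharePoint 2013 Management Shell and test against a non-production farm first.

    # Migrate a web application from classic-mode to claims-based authentication.
    # The URL below is a placeholder for your own web application.
    Convert-SPWebApplication -Identity "http://intranet.contoso.com" -To Claims -RetainPermissions

    # Confirm the Distributed Cache service instance is online on the expected servers
    # before relying on it instead of load balancer affinity.
    Get-SPServiceInstance |
        Where-Object { $_.TypeName -eq "Distributed Cache" } |
        Select-Object Server, Status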

Microsoft SharePoint 2013 Disaster Recovery Guide

Where does it all go wrong with disaster recovery? Why does a disaster recovery plan fail the business and cost IT staff their jobs or a promotion? This book is an easy-to-understand guide that explains how to get it right and why it so often goes wrong.

Given that Microsoft's SharePoint platform has become a mission-critical application that business operations simply cannot run without, disaster recovery is one of the most important topics when it comes to SharePoint. Yet support and an appropriate approach for this technology are still difficult to come by, and are often vulnerable to technical oversight and assumptions.

Microsoft SharePoint 2013 Disaster Recovery Guide looks at SharePoint disaster recovery and breaks down the mystery and confusion that surround what is a vital activity for any technical deployment. This book provides a holistic approach, with practical recipes that will help you take advantage of the new 2013 functionality and cloud technologies.

You will also learn how to plan, test, and deploy a disaster recovery environment using SharePoint, Windows Server, and SQL tools. We will also take a look at datasets and custom development. If you want to have an approach to disaster recovery that gives you peace of mind, then this is the book for you.

Microsoft SharePoint 2013 Disaster Recovery Guide - by Peter Ward, Pavlo Andrushkiw, Peter Abreu, Pat Esposito, Jeff Gellman, Joel Plaut.
  1. Design, implement, test, and execute solid disaster recovery plans for your SharePoint environment with this essential guide
  2. Learn out of the box backup and restore procedures
  3. Implement a solid disaster recovery strategy for custom development environments
  4. A quick hands on guide to get familiar with procedures to secure your data
  5. Learn why disaster recovery is a struggle to understand and implement
  6. Learn how to support optimized application recovery times with tiered service levels
  7. Inherit a mission critical environment that has no disaster recovery plans
  8. Get familiar with backup and restore procedures that are available to an administrator as well as the pros and cons of each
  9. Learn about Disaster recovery in regards to virtualization and the cloud
  10. Architect data in SharePoint with disaster recovery in mind
  11. Build confidence and refine disaster recovery plans with more frequent testing
http://bit.ly/1dkaMHh