Deploying SharePoint Farms on Windows Azure VMs
This first session was all about SharePoint in the cloud. I wanted to attend this session as I believe customers will be interested in what can be done with full-blown SharePoint in the cloud, as opposed to SharePoint online. Some key points:
SharePoint on Azure
- SharePoint can now run on Azure IaaS
- The benefits of the Cloud apply equally to SharePoint
- Agility – respond quickly to business demand
- Focus on the application, not the infrastructure
- Economics – lower cost
- 100% of the SharePoint API is available
- You need to roll your own HA/DR/Scaling solution though
Types of SharePoint Instances
- Internet sites (I’m assuming customers would still need the internet facing SP license for this kind of usage)
- Developer, Test and Staging Environments
- DR strategy
- Hybrid applications that span the data centre and the cloud (as per the image below)
Key things to consider before migrating:
- Are there dependencies on non-supported OSes or applications?
- Licensing restrictions? Do our software licenses transfer to the cloud?
- Hardware requirements? Network cards or other hardware dependencies?
Migration approaches:
- Forklift – bring the entire farm to Azure
- IaaS to PaaS – migrate apps to web/worker roles. A lot of rework required.
- Hybrid – bring a portion of the solution to the cloud, while some resources stay on-premises
Building a SharePoint Developer Environment
- Obtain a development VM (VHD) – the SharePoint Information Worker VHD (45 GB, expanding to 127 GB after upload; note the expanded space is essentially an ‘empty’ file, so you don’t get charged for it)
- Contains everything a developer needs to develop on SharePoint (Visual Studio, Active Directory, etc.). Note, this VM is the foundation for all the SharePoint training labs as well. (Part 10a is SharePoint, 10b includes Exchange and 10c is Lync for online presence, etc.)
- Download the VM (Paul used a Cloud server for this, i.e. download to this, without bringing it down to the Kainos network. Note, uses Azure bandwidth)
- Upload to Blob Storage via CSUpload Tool
- You need your Azure subscription ID, your cert thumbprint and your service management endpoint.
- Note the cert management functionality is on the ‘old’ Azure portal.
- CSUpload ‘Add-Disk’ command is used. (note, ‘images’ and ‘disks’ are the same thing. Only difference is that Azure assumes images are sys-prepped and are used to create multiple instances of a VM)
- CloudXplorer – tool to view Cloud storage
- Create a VM from this disk.
- Connect Via RDP
- Add endpoints (port 80 etc.)
- Good to go.
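The upload-and-provision flow above can be sketched roughly as follows. This is a hedged illustration, not the presenter’s actual script: it assumes the `csupload` tool from the Windows Azure SDK and the service-management Azure PowerShell cmdlets of that era, and all subscription IDs, storage accounts, paths, disk and service names are placeholders (exact parameter names may differ by SDK version).

```powershell
# 1. Point csupload at your subscription: subscription ID, management
#    certificate thumbprint and the service management endpoint.
csupload Set-Connection "SubscriptionId=<subscription-id>;CertificateThumbprint=<cert-thumbprint>;ServiceManagementEndpoint=https://management.core.windows.net"

# 2. Upload the VHD to blob storage and register it as a 'disk'
#    (not an 'image' -- this VHD is not sys-prepped).
csupload Add-Disk -Destination "https://<storageaccount>.blob.core.windows.net/vhds/sp-dev.vhd" `
                  -Label "SPDevDisk" -LiteralPath "C:\VHDs\sp-dev.vhd" -OS Windows

# 3. Create a VM from the registered disk, add an HTTP endpoint (port 80),
#    then boot it (assumes the cloud service 'sp-dev-service' already exists).
$vm = New-AzureVMConfig -Name "sp-dev" -InstanceSize "Large" -DiskName "SPDevDisk" |
      Add-AzureEndpoint -Name "HTTP" -Protocol tcp -LocalPort 80 -PublicPort 80
New-AzureVM -ServiceName "sp-dev-service" -VMs $vm
```

From there it’s RDP in and you’re good to go, as per the steps above.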
How can I run my entire Farm in the Cloud?
So this is the standard architecture for a SP farm on-premise:
This is the Cloud platform. As you can see, there’s a lot more configuration!
Main points that Paul mentioned:
- Everything is wrapped in a virtual network.
- Within a ‘Cloud Service’, all machines can talk to each other.
- Cloud services can talk to other cloud services in the same virtual network.
- You need to start the DNS server in the cloud first, so that all subsequent IPs can be resolved.
- Cisco and Juniper routers supported for VPN tunnel back to on-premise.
- Prep an image with SP pre-reqs etc. installed, but don’t run PS-Config. Capture this image and then use it for new machines.
- Azure images have a Virtual IP, i.e. mycloudapp.windowsazure.net – you need to create endpoints to access the ‘internal’ machines.
- Note, load balancer does not support sticky sessions at this time.
- Important to make sure you add your machines to the hosts file, so that lookups aren’t going through the Azure load balancer
- To check whether your load balancer is working, add an HTML comment to your v4.master page; then view the HTML source when you access the site.
- Through PowerShell you have far more control over configuring WFEs/load balancing. You can specify via PowerShell where the LB will do a health-check ‘probe’. The probe takes a path, which can be your own custom ASPX page where you define the rules for whether the LB will ‘hit’ the site.
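As a rough illustration of that last point, here is what a load-balanced endpoint with a custom health probe looked like with the service-management Azure PowerShell cmdlets of the time. The service, VM, LB set and probe page names are made-up placeholders, and this is a sketch rather than the presenter’s exact configuration.

```powershell
# Add WFE 'wfe1' to a load-balanced set with an HTTP health probe.
Get-AzureVM -ServiceName "sp-farm" -Name "wfe1" |
    Add-AzureEndpoint -Name "HTTP" -Protocol tcp -LocalPort 80 -PublicPort 80 `
                      -LBSetName "web" -ProbeProtocol http -ProbePort 80 `
                      -ProbePath "/probe/health.aspx" |
    Update-AzureVM
```

The load balancer only routes traffic to a WFE while a GET on the probe path returns success, so a custom health.aspx page can encode your own rules for whether that node should receive traffic.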
All in all, a pretty decent session. Could be useful for some of our customers who want full SharePoint, but not with the infrastructure overhead.
ASP.NET and the Realtime Web (SignalR)
Brady Gaster – Azure Technical Evangelist
SignalR is going to change the web as we know it. Essentially it’s a technology that allows persistent connections between a client and server using .NET. You can then synchronise actions across a number of connected browsers. It’s been in development for a year now and is currently in alpha, but it was said that it will be rolled into the next VS2012 release.
Brady started the session by demoing a Kinect doing skeletal checks on audience members and then uploading captured images to Windows Azure. Think of it as a clever intruder alarm in that it can work out if the movement detected belongs to an actual person or something else (pet cat, for example)
SignalR in a nutshell
- Open source. Code available.
- Everything is Async
- Abstraction on transport layers (websockets, long polling, ForeverFrame, etc)
- TechEd North America talk from Damien Edwards covers the WCAT tool (load testing SignalR)
- Currently in Alpha State
- Will be included in VS2012 RTM
- ‘Defies everything we know about http’
- 9 Demos!
- Scaling SignalR requires changes on the backend.
- Stat: 40,000 persistent (real-time) connections with SignalR, with CPU sitting at 40%. By the end of the year, the goal is 100k persistent connections.
- Client platforms:
- Native .NET
- iOS (not merged into the main SignalR repo yet)
- Android (via Mono)
- Self-hosted (i.e. console)
- Backplanes (for scaling SignalR), e.g. Service Bus
- The basic component of SignalR is a hub. Every message goes through this hub.
- Hub is an abstract class. There’s nothing to override, but it does expose properties:
- Caller – the client that made the initial connection
- Clients – everyone connected to the hub
- Context – the context of the call
- Groups – different groups of clients, i.e. admins, anonymous, etc.
- Lot of dynamic expressions in SignalR (resolved at runtime)
- Client Side
- Import jQuery
- Import the jQuery SignalR file (get it from NuGet)
- Declare a hub variable
- Use jQuery extend to extend the hub variable
- When the server ‘calls’ the client page, update the HTML on the client side.
- Cool demo. Really impressed. SignalR can even tell when a user has closed their browser and updates the user count on other open browsers.
- Also demo’d SignalR in an Azure Worker role.
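To make the hub and client-side steps above concrete, here is a minimal sketch in the style of the alpha-era SignalR API described in the session. The names (`ChatHub`, `Send`, `addMessage`, `#messages`) are illustrative, not from the session itself, and the namespaces/API changed in later SignalR releases.

```csharp
using SignalR.Hubs;

public class ChatHub : Hub
{
    // Called by clients; broadcasts to everyone connected to the hub.
    public void Send(string message)
    {
        // 'Clients' is dynamic: addMessage is resolved at runtime and
        // invokes the addMessage function registered on each client.
        Clients.addMessage(message);
    }
}
```

The matching client side follows the bullet list above: import jQuery and the SignalR jQuery plugin (from NuGet), declare the hub variable, extend it with jQuery, and update the HTML when the server ‘calls’ the page:

```javascript
// Assumes jQuery, jquery.signalR and the generated /signalr/hubs proxy
// are already referenced on the page.
$(function () {
    var chat = $.connection.chat;   // hub variable (hub name is camel-cased)

    // Extend the hub variable with the function the server 'calls'
    // via Clients.addMessage(...).
    $.extend(chat, {
        addMessage: function (message) {
            $('#messages').append($('<li/>').text(message));
        }
    });

    // Open the persistent connection, then invoke the server-side Send method.
    $.connection.hub.start().done(function () {
        chat.send('hello');
    });
});
```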
The actual SignalR video of the session is now live on channel 9. Definitely worth a watch.