PowerShell – Zip Files and Send Email with Attachment

Hi Guys…

Suppose you have the following requirements:

1. To create a zip file containing all the .txt and .dll files from a particular folder.

2. To send the newly created zip file to the intended recipient as an email attachment.

—–

I achieved this with the script below. In the script, I created two functions – CreateZip and SendEmail.

Script Body –

——————

<#
.Synopsis
   Create a zip file and then send it as an attachment in an email.
.DESCRIPTION
   This script creates a zip file and then sends it as an email attachment.
.EXAMPLE
#>

#Script Name – SendEmailWithZipAttachment.ps1
#Creator – Mohd Aslam Ansari
#Date – 30-Dec-15
#Updated – First Version
#References, if any

$LogSource = "C:\temp\"
$ZipFileName = "Log.zip"

Function CreateZip {

Param (
      [string] $LogSource,
      [string] $ZipFileName
      )

# $LogSource already ends with a backslash, so simple concatenation builds valid paths.
New-Item -Force -ItemType Directory -Path $LogSource\temp | Out-Null
Get-ChildItem $LogSource\* -Include *.txt,*.dll | ForEach-Object {Copy-Item $_ $LogSource\temp}

if (Test-Path -Path $LogSource$ZipFileName)
{
    Remove-Item -Force $LogSource$ZipFileName
}

Add-Type -AssemblyName "System.IO.Compression.FileSystem"
[System.IO.Compression.ZipFile]::CreateFromDirectory($LogSource + 'temp', $LogSource + $ZipFileName)

Remove-Item -Force -Recurse $LogSource\temp
}

Function SendEmail {

$Email = New-Object -ComObject "CDO.Message"
$Email.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2
$Email.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = 'xyz.smtp.com'
$Email.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25
$Email.Configuration.Fields.Update()

$Email.From = "xyz@xyz.com"
$Email.To = "abc@xyz.com"
$Email.Subject = "Test email"
$Email.AddAttachment($LogSource + $ZipFileName)
$Email.TextBody = "PFA…"

$Email.Send()
}

# Functions must be defined before they are called, so the calls come last.
CreateZip $LogSource $ZipFileName
SendEmail
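
As an aside, on PowerShell 5.0 or later the same result can be achieved with the built-in Compress-Archive and Send-MailMessage cmdlets. The following is only a minimal sketch; the SMTP server, addresses, and paths are the same placeholders used above and need to be replaced with real values.

# Sketch only: requires PowerShell 5.0+; server and addresses are placeholders.
$LogSource   = "C:\temp\"
$ZipFileName = "Log.zip"
$ZipPath     = Join-Path $LogSource $ZipFileName

# Collect only the .txt and .dll files and (re)create the archive.
$files = Get-ChildItem -Path (Join-Path $LogSource '*') -Include *.txt, *.dll -File
Compress-Archive -Path $files.FullName -DestinationPath $ZipPath -Force

# Send the archive as an attachment.
Send-MailMessage -SmtpServer 'xyz.smtp.com' -Port 25 `
    -From 'xyz@xyz.com' -To 'abc@xyz.com' `
    -Subject 'Test email' -Body 'PFA' `
    -Attachments $ZipPath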

—End of Article—

Mapping Drive using PowerShell

Suppose we have a requirement to map a drive to a UNC path entered by the user. How do we do it?

Here is my script that can do it for you…

The script below asks for user input and then maps the "K" drive to that path.

[Screenshot: input box asking for the UNC path]

On clicking "OK", it maps the drive. If the drive is already mapped, it shows the following error message:

[Screenshot: error message box]

Script:

—————–

<#
.Synopsis
   Function to map the drive
.DESCRIPTION
   This script maps a drive to the UNC path entered by the user.
.EXAMPLE
   ./Drivemapping.ps1
#>

#Script Name – Drivemapping.ps1
#Creator – Mohd Aslam Ansari
#Date – 23-Dec-15
#Updated – First Version
#References, if any – I have used New-PSDrive function to map the drive.

Function DriveMapping
{

Param (
      [string] $drive,    
      [string] $UNCLoc
      )

$driveInfo = New-Object System.IO.DriveInfo($drive)
    if (-Not $driveInfo.IsReady)
    {
        New-PSDrive -Name $drive -Root $UNCLoc -Persist -PSProvider FileSystem -Scope Global -Verbose
        [Microsoft.VisualBasic.Interaction]::MsgBox("UNC Path '" + $UNCLoc + "' mapped to " + $drive + " drive successfully.","OKOnly,SystemModal,Information", "Success")
    }
    else
    {
        Write-Output "Drive $drive is already in use and not available for mapping"
        [Microsoft.VisualBasic.Interaction]::MsgBox("Drive " + $drive + " is already in use and not available for mapping.","OKOnly,SystemModal,Critical", "Error")
    }
}

[void] [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.VisualBasic")
[string]$drive = "K"
[string]$UNCLoc = [Microsoft.VisualBasic.Interaction]::InputBox("Enter a UNC path to be Mapped–>", "Path to be Mapped to " + $drive + " drive","")

DriveMapping -drive $drive -UNCLoc $UNCLoc

—————-

I used the New-PSDrive cmdlet to create the mapping. Please refer to the PowerShell help to read more about it.
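
If you later want to check whether the letter is already taken, or to remove the mapping again, Get-PSDrive and Remove-PSDrive can be used. A minimal sketch, assuming the same "K" drive as above:

# Sketch: remove the K: mapping if it exists.
if (Get-PSDrive -Name 'K' -ErrorAction SilentlyContinue) {
    Remove-PSDrive -Name 'K' -Force
}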

–End of Article–

Automated Team Project Creation and Assigning User Group to Readers Group

Suppose there is a requirement to create team projects through the command line and, once created, to add a particular user group to one of the built-in TFS groups (say, the Readers group).

I wrote the PowerShell script below to implement it.

Requirements –

1) TFS power tools should be installed on the system.

2) Visual Studio should be installed.

Script Code –

<#
.Synopsis
   Script to create team project and to add user group to the Readers group
.DESCRIPTION
   This script will create a Team Project and then add the user's group to the team project's Readers group. When running the script, enter the information as prompted.
.EXAMPLE
   ./CreateTPAndAddGrptoReaders.ps1
#>

#Script Name – CreateTPAndAddGrptoReaders.ps1
#Creator – Mohd Aslam Ansari
#Date – 22-Dec-15
#Updated – First Version
#References, if any

$TeamCollectionURL = Read-Host "Enter Team Collection URL (Ex. https://TFSServer/tfs/POCCollection/):"
$TeamProjName = Read-Host "Enter the Team Project Name:"
$ProcessTemplate = Read-Host "Enter the process template (Agile/CMMI/Scrum):"
$UserGroup = Read-Host "Enter the User group that needs read access with domain (Ex domain\group):"

tfpt createteamproject `
/Collection:$TeamCollectionURL `
/teamproject:$TeamProjName `
/processtemplate:$ProcessTemplate `
/sourcecontrol:New `
/Validate `
/Verbose

tfpt createteamproject `
/Collection:$TeamCollectionURL `
/teamproject:$TeamProjName `
/processtemplate:$ProcessTemplate `
/sourcecontrol:New `
/Verbose

cd "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE"
.\TFSSecurity.exe /g+ "[$TeamProjName]\Readers" n:$UserGroup /collection:$TeamCollectionURL

[void] [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.VisualBasic")
[Microsoft.VisualBasic.Interaction]::MsgBox("Team Project created with the Name – " + $TeamProjName + ". " + $UserGroup + " added to the Readers group.","OKOnly,SystemModal,Information", "Success")

——

For more information on the tfpt command, see its built-in help by typing tfpt /?
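
If you want the script to stop when the validation pass fails, you can check $LASTEXITCODE after the first tfpt call. A minimal sketch, assuming tfpt returns a non-zero exit code on failure:

# Sketch: run the /Validate pass first and abort if it fails (assumes a non-zero exit code on failure).
tfpt createteamproject /Collection:$TeamCollectionURL /teamproject:$TeamProjName `
    /processtemplate:$ProcessTemplate /sourcecontrol:New /Validate /Verbose

if ($LASTEXITCODE -ne 0)
{
    Write-Error "Team project validation failed - skipping creation."
    return
}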

—-End of Article—

PowerShell DSC – How to install an MSI

PowerShell DSC – DSC is a new management platform in Windows PowerShell that enables deploying and managing configuration data for software services and managing the environment in which these services run.

Requirement – How do you install an MSI using PowerShell DSC?

Solution –

Suppose we have xyz.Client.msi that needs to be installed on a target machine. For this example, I have considered installing it on localhost. To install it on a remote target server, PS remoting needs to be enabled and the WinRM port configured for HTTP communication using the following PowerShell commands:

a) Enable-PSRemoting -Force

b) winrm quickconfig -transport:http

I wrote below script to make it happen:

Configuration ApplicationInstall
{

Package AppInstall
{
    Ensure = "Present"
    Path = "E:\Software\xyz.Client.msi"
    Name = "TestPkg"
    ProductId = "{23FB7DAE-F831-4F6F-854B-2B28A7DD3C98}"
    Arguments = "AllUsers=1"
}
}

ApplicationInstall -OutputPath C:\ApplicationInstall

Start-DscConfiguration -Path C:\ApplicationInstall -ComputerName localhost -Force -Verbose -wait

Explanation –

First, we need to create a configuration using the keyword "Configuration".

The next step is to create the "Package" resource. Its properties are listed below:

Package [string] #ResourceName
{
Name = [string] #This is Mandatory Property
Path = [string] #This is Mandatory Property
ProductId = [string] #This is Mandatory Property
[ Arguments = [string] ]
[ Credential = [PSCredential] ]
[ DependsOn = [string[]] ]
[ Ensure = [string] { Absent | Present } ]
[ LogPath = [string] ]
[ ReturnCode = [UInt32[]] ]
}

Apart from the mandatory properties, I have used:

1) Ensure property –

        "Present" – means the package should be installed; if it is not, DSC installs it.

        "Absent" – means the package should not be installed; if it is, DSC uninstalls it.

2) Arguments – any additional arguments you want to pass to the MSI installer.

Once the Package block is written, we execute the configuration with the following command. This creates the MOF (Managed Object Format) file and places it in the C:\ApplicationInstall folder:

ApplicationInstall -OutputPath C:\ApplicationInstall

[Screenshot: output of executing the configuration – the MOF file created in C:\ApplicationInstall]

Content of MOF file –

[Screenshot: contents of the generated MOF file]

MOF files are a convenient way to change WMI settings and to transfer WMI objects between computers. The contents of a MOF file only take effect when the file is processed.

Now, to apply the generated MOF file, I ran the command below:

Start-DscConfiguration -Path C:\ApplicationInstall -ComputerName localhost -Force -Verbose -Wait

Output –

[Screenshot: verbose output of Start-DscConfiguration]

If you re-run the above command, it will report that the package is already installed.

If you want to uninstall the application, just change the "Ensure" property of the package to "Absent", then execute the configuration again to re-create the MOF file and apply it. The application will then be uninstalled.
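
A minimal sketch of such an uninstall configuration, reusing the same path, name, and ProductId as above:

Configuration ApplicationUninstall
{
    Package AppInstall
    {
        Ensure    = "Absent"        # Absent = uninstall the package if it is installed
        Path      = "E:\Software\xyz.Client.msi"
        Name      = "TestPkg"
        ProductId = "{23FB7DAE-F831-4F6F-854B-2B28A7DD3C98}"
    }
}

ApplicationUninstall -OutputPath C:\ApplicationUninstall
Start-DscConfiguration -Path C:\ApplicationUninstall -ComputerName localhost -Force -Verbose -Wait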

—End of Article—

RM – Agent based or vNext (Agent less)

Microsoft Release Management is the DevOps solution for delivering software easily and more frequently to any given environment, while controlling the process through approvals at each step.

It can be used in two ways.

1) Agent-based release – For this, you install a deployment agent on each machine in your environment to which applications are deployed.

2) vNext (agentless) release – For this, you use Windows PowerShell, Windows PowerShell Desired State Configuration (DSC), or Chef to deploy your application to machines without installing a deployment agent.

I am assuming you have chosen an on-premises release. With that said, which of the above options should you select: agent-based release or vNext release?

I have tried to summarize it in the table below to help you make the right decision:

| S.No. | Agent based Release | vNext (Agentless) Release |
|-------|---------------------|---------------------------|
| 1 | An agent needs to be installed on every target machine. | PowerShell Remoting needs to be enabled on all the target machines. |
| 2 | Pull based | Push based |
| 3 | On every RM release/update from Microsoft, agents may need to be re-installed and re-configured. Microsoft says agents auto-upgrade when the RM server is upgraded, but there is no guarantee. | Nothing needs to change on a new RM release/update. |
| 4 | Can use the built-in deployment tools. | Everything needs to be written as PowerShell scripts, based on Windows Remote Management and PowerShell. |
| 5 | No direct PowerShell DSC support; a PowerShell DSC script can be run from an agent-based release template using a command-line task. | PowerShell DSC is fully supported. |

Let me explain each of the above points in the same order. Each has its own advantages and disadvantages.

1) As said above, for an agent-based release the deployment agent needs to be installed on all the target machines, so there is a maintenance cost. This downside can be mitigated by using SCCM packaging to deploy the agent automatically to all the servers.

In the case of vNext, there is no agent involved; instead, PowerShell remoting needs to be enabled, which can be a security issue.

PS remoting can be enabled and the WinRM port configured for HTTP communication using the following PowerShell commands:

a) Enable-PSRemoting -Force

b) winrm quickconfig -transport:http

This opens WinRM and could be abused by attackers to run scripts from a remote machine, so extra security (such as SSL) needs to be in place.
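
One way to harden this is to put the WinRM listener on HTTPS. A minimal sketch using a self-signed certificate (for a lab only; in production a CA-issued certificate should be used, and the cmdlets below assume Windows Server 2012 or later):

# Sketch: create an HTTPS WinRM listener bound to a self-signed certificate (lab use only).
$cert = New-SelfSignedCertificate -DnsName $env:COMPUTERNAME -CertStoreLocation Cert:\LocalMachine\My

New-Item -Path WSMan:\LocalHost\Listener -Transport HTTPS -Address * `
    -CertificateThumbPrint $cert.Thumbprint -Force

# Open TCP 5986 (WinRM over HTTPS) in the firewall.
New-NetFirewallRule -DisplayName "WinRM HTTPS" -Direction Inbound -Protocol TCP -LocalPort 5986 -Action Allow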

2) Pull based vs. push based – In the agent-based case, the agent pulls the PowerShell script (created by RM as a tool or written by the user) and uses that script to perform the particular task.

In the case of vNext, RM pushes the PS script to the target machine using PowerShell remoting.

Both approaches are ok to use.

3) With every Microsoft Release Management release/update, we may have to uninstall, reinstall, and reconfigure the deployment agent on all the target machines. This is a big overhead, though it can be mitigated by redeploying the agent using SCCM.

According to Microsoft, deployment agents should auto-upgrade when the Release Management server is upgraded, but there is no guarantee. Recently, with RM 2015 Update 1, we had to upgrade the agents manually.

This overhead does not exist at all for vNext, as there is no agent in place. This is a big plus for the vNext approach.

4) In the Release Management software, Microsoft has written many tools that can be used to achieve particular tasks, such as file copying, the MSI Deployer, etc. The list is shown below:

[Screenshot: list of built-in Release Management tools/actions]

You can also create your own tool and add it to the list. Internally, all these tools are implemented in PowerShell.

In the agent-based case, you can use any of the tools already available from Microsoft in your automated release to achieve a particular task.

In vNext, by contrast, everything needs to be written as PowerShell scripts. This means that for any activity you will need to write a PowerShell script from scratch. This can be hard in the beginning and counts as a negative point, but as you keep working with it you build up your own PowerShell library that you can reuse across other releases.

5) PowerShell DSC – DSC is a new management platform in Windows PowerShell that enables deploying and managing configuration data for software services and managing the environment in which these services run.

There is no direct support for PowerShell DSC in an agent-based release. However, you can run a PowerShell DSC script from an agent-based release template using a command-line task and by enabling remoting (WinRM).

vNext-based releases fully support PowerShell DSC, as they are built on the same concepts.

Conclusion –

Agent based:

  • Getting started is easier – there is a rich set of built-in tools/actions.
  • When you decide to move to the web-based model, you need to translate the fine-grained actions into equivalent PowerShell scripts.

vNext based:

  • Getting started is harder; you need to be hands-on with PowerShell scripting.

Note – If you need to get started quickly, choose agent based. If you're comfortable with PS scripting, choose vNext.

–End of Article–

TFS/RM PowerShell policies

Let's discuss the PowerShell execution policy scenario when using Microsoft TFS and/or RM (Release Management).

Before running any PowerShell script, the first step is to define an execution policy. How do we define it? See below:

First, check the existing policy by running the Get-ExecutionPolicy cmdlet. By default, it is "Restricted", as shown below:

[Screenshot: Get-ExecutionPolicy returning Restricted]

Windows PowerShell has several execution policies; the four most commonly used are:

  • Restricted – No scripts can be run. Windows PowerShell can be used only in interactive mode.

  • AllSigned – Only scripts signed by a trusted publisher can be run.

  • RemoteSigned – Downloaded scripts must be signed by a trusted publisher before they can be run; locally created scripts can run unsigned.

  • Unrestricted – No restrictions; all Windows PowerShell scripts can be run.

Now we have to select the one best suited to our organization. Of the options that still allow scripts to run, the most secure is "AllSigned". How do we set it? See below:

Run Set-ExecutionPolicy with the "AllSigned" argument as shown below; it will set the policy.

[Screenshot: Set-ExecutionPolicy AllSigned]

With this policy (AllSigned) in place, we need to sign all PowerShell scripts before running them on the target machine.
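
Signing can be done with Set-AuthenticodeSignature. A minimal sketch, assuming a code-signing certificate is already installed in the current user's personal store and using one of the script names from this blog as an example:

# Sketch: sign a script with the first code-signing certificate found in the current user's store.
$cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1
Set-AuthenticodeSignature -FilePath .\Drivemapping.ps1 -Certificate $cert

# Verify the signature afterwards.
Get-AuthenticodeSignature -FilePath .\Drivemapping.ps1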

Now two questions arise:

  1. What happens to the PS scripts that TFS/RM generates and runs internally to implement a particular task? We cannot sign them, as we cannot see them. This is important to understand because TFS and RM carry out all of their activities internally through PS scripts.
  2. We also often write our own PS scripts to perform specific tasks from TFS and RM; do we need to sign those?

To answer the first question: Microsoft says that the execution of the internally generated PS scripts is unaffected by the policy you set at the machine or user level. They are executed under the "Bypass" policy, which does not take your machine's policy into account. This is possible because the scripts run in the context of an administrator of that server, and Microsoft's position is that, since they wrote the code themselves, it is secure enough to bypass the configured policy. Note that the policy is bypassed only for the session in which that script executes; it does not change the existing policy in any way, and the existing policy still applies to any other script execution at the same time.

 

To answer the second question: the PS script that a user writes and passes to the Remote PowerShell task is also executed under the Bypass policy. Microsoft assumes that you, as the author of the script, will write secure code and execute it with proper safeguards. If you want your script to run under a particular policy, such as "AllSigned" in our case, you can set it inside your script (say, on its first line) for the current process (for example, Set-ExecutionPolicy AllSigned -Scope Process); that policy will then be used for all subsequent script invocations made from the main script you passed to the task, instead of the Bypass policy.
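
For example, the very first line of the user script could be (a sketch; -Force suppresses the confirmation prompt):

# Sketch: apply AllSigned to this process only; child invocations from this script inherit it.
Set-ExecutionPolicy -ExecutionPolicy AllSigned -Scope Process -Force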

—End of Article—

Agent based Release – How to make it secure

Here is the scenario….

This is regarding MS Release Management 2015. Suppose we set up a release (QA –> Staging –> Prod) and, in each environment, some sensitive values need to be substituted into the config files. Our understanding is that we have to create variables on the components, and the values will be substituted in the appropriate environment, but the values need to be entered in the workflow at the time the release is created. The release will be created by the QA team, and we do not want them to see the sensitive production values. How do we achieve this? How can we avoid entering the sensitive values up front? Can we put security around it?

Here is the solution to this scenario:

Follow the steps below for agent-based release templates.

  1. For every environment like Staging/Production, create a group – say, a group for SQA as shown below. It can be added from Administration –> Manage Users.
  2. From the Security tab, add the stage and the type of permission, such as "Edit Value…". Suppose we added the "QA" stage and granted "Edit Values and Target Servers" rights; all members of the SQA group can then edit the values of the variables in that environment. [Screenshot: group security settings]
  3. Configure the variables for the different stages as the "Encrypted" type. To do so, go to Configure Apps –> Components –> Configuration Variables. [Screenshot: encrypted configuration variables]
  4. Finally, the values of the encrypted variables can be set in each stage by the respective teams. Please note that the values need to be set before initiating the release; once the release has started, they cannot be changed. This is also controlled by the security set in step 2 above. [Screenshot: setting stage variable values]

–End of Article–