Code signing using PowerShell Scripting

Scenario – You have a requirement to sign the output of your application. The output can be in the form of .dll, .exe, .ocx, etc. You need to sign all of these files before distributing them to others.

Requirements – To implement code signing, you will need two things –

  1. Security certificate
  2. Timestamp server URL

You need to install the certificate on the server from which you want to sign your code. Ensure that the certificate is non-exportable; otherwise someone could export and misuse it.

A timestamp is needed to ensure the validity of the signed code; it lets the signature remain valid even after the signing certificate expires.

Implementation – With the above requirements in place, we can use the below PowerShell cmdlets to sign the code –

  1. Set-AuthenticodeSignature – This cmdlet adds an Authenticode signature to a file.
  2. Get-AuthenticodeSignature – This cmdlet gets information about the Authenticode signature of a file. With it, you can find out whether the file has a valid signature.
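As a quick illustration (the file path here is only a placeholder, not part of the script below), checking a file's signature status with Get-AuthenticodeSignature looks like this:

```powershell
# Hypothetical example – .\MyApp.dll is a placeholder path
$sig = Get-AuthenticodeSignature -FilePath .\MyApp.dll
$sig.Status                      # "Valid" for a correctly signed file, "NotSigned" otherwise
$sig.SignerCertificate.Subject   # the signer, when a signature is present
```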

I have written the below script to sign any .dll, .exe, or .ocx files present under the location where the script is placed, if they don’t already have a valid signature.

It performs the below activities –

  1. It reads the code-signing certificates from the store on the server.
  2. It finds the certificate we want to use (by thumbprint) from the output of the above command and stores it in a variable.
  3. It gets the full path of all the files (.dll, .ocx, .exe) from all the folders and subfolders under the location where the script is kept.
  4. It checks whether each file (with full path) found in step 3 has a valid signature. If it does, the file is skipped and the script moves to the next file. If the file is not signed, the function “CodeSigning” is called to sign it.
  5. All the activities are captured in a log file at the same location and also displayed on the console.

Here is the PowerShell Script –

<#
.Synopsis
   Script is to sign all internal built (.dll, .exe, .ocx) file outputs
#>

function CodeSigning
{
    Param (
            [Parameter(Mandatory=$True)]
            [ValidateNotNull()]
            $FileNameWithPath,
            [Parameter(Mandatory=$True)]
            [ValidateNotNull()]
            $CertInfo
          )

    Write-Host "———————————————"
    Write-Host "FileName – " $FileNameWithPath
    Write-Host "Code Signing Started –"
    Set-AuthenticodeSignature -FilePath $FileNameWithPath -Certificate $CertInfo -TimestampServer "http://TimeStampURL"
    Write-Host "Code Signing Finished Successfully"
    Write-Host "———————————————"
}

Start-Transcript -Path ".\codesigningTrans.log"
$cert = (Get-ChildItem Cert:\LocalMachine\My -CodeSigningCert)
write-host "————All Certificate Information from the server———-"
write-host $cert
write-host "———————————————"
foreach ($_cert in $cert)
{
    if($_cert.Thumbprint -eq "ReplaceWithACTUAL")
    {
        $CertInfo = $_cert
        Write-Host "————Certificate in Use start————–"
        Write-Host $CertInfo
        Write-Host "————Certificate in Use End————–"
        $FileData = Get-ChildItem -rec | Where {$_.Extension -in ".dll",".exe",".ocx"}  | ForEach-Object -Process {$_.FullName}

        foreach ($_FileData in $FileData)
        {
            $FileNameWithPath = $_FileData           
            $IsValid = Get-AuthenticodeSignature $FileNameWithPath | Where-Object {$_.Status -eq "Valid"}
            if (-not $IsValid)
            {
                CodeSigning $FileNameWithPath $CertInfo
            }
            else
            {
                Write-Host $FileNameWithPath " already has valid signature"
            }
        }  
    }
}
Stop-Transcript
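After the script has run, one quick way to verify the result (a sketch, using the same file extensions as the script) is to summarize the signature status of all target files:

```powershell
# Summarize signature status of all .dll/.exe/.ocx files under the current folder
Get-ChildItem -Recurse | Where-Object {$_.Extension -in ".dll",".exe",".ocx"} |
    Get-AuthenticodeSignature | Group-Object Status | Select-Object Name, Count
```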

————————-End of Article———————-


Connecting TFS GIT in LINUX

Problem –

Developers were getting certificate errors when trying to perform Git operations (like clone) against TFS from Linux (CentOS).

Resolution –

The solution is to generate an SSH key in Linux and register it in TFS. With it, Git on Linux is able to handshake with TFS.

Here are the steps that need to be performed in Linux –

1. Generate SSH key. Run the below command - 

      ssh-keygen -t rsa -C "emailID"

Note – “emailID” is your email ID.

2. Run the below command and copy the output key as shown below –

      cat ~/.ssh/id_rsa.pub

image

Go to the TFS web portal and follow the below screens –

3. Click on person icon and select Security option.

image

4. The below screen will appear. Select the “SSH Public Keys” option and click Add. It will ask for the key; enter the key you copied in step 2 above.

image

Now go back to the Linux environment and try a git clone operation. It should work.

                      ——–End of Article——

TFS–Automated Builds–Agent Workspace–Application folder

In the current scenario, when we build any application through a build definition in TFS, it downloads all the source code/components into the agent’s workspace in a folder with a “digit” as its name instead of the application name. For example, in the case of the xxxBillSplit application, it downloaded the source code into folder “7” under the agent workspace.

For many applications like xxxBillSplit, this is perfectly OK, but for applications like XYZ, where we have cross-team project references, it does not work and the build fails.

To address this issue for complex applications like XYZ, the following changes can be made –

  1. Under the agent’s workspace, a folder named “SourceRootMapping” gets created as soon as the agent builds its first application. This is a single folder for all the applications the agent is building. Under this folder, there are folders for all the build definitions, with the collection ID (GUID) as the name, as shown below – image
  2. Under the GUID folder, there is a folder named after the build definition ID. Both the GUID and the build ID can be found in the build definition, as shown below – image
  3. Under the build ID folder, there is a JSON file called “SourceFolder.json” which contains information about the builds, as shown below. Please note the “7” referenced in many places, shown in the highlighted box – image
  4. Replace the build ID (7 here), as highlighted above, with the application name as shown below. image
  5. Once done, rebuild the application. A folder with the application name will be created. You can now delete the folder with the ID (7 here), as shown below – image

This activity needs to be done for all the team projects, and it is a one-time activity. Once done, a backup of this folder (SourceRootMapping) can be taken, and in case of a new server/new agent/new workspace, the folder can be restored to reapply the changes.
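The edit described in steps 3 and 4 above can also be scripted. This is only a sketch; the workspace path, GUID placeholder, and names below are illustrative, not actual values, and the pattern should be adjusted to match the occurrences you actually see in your SourceFolder.json:

```powershell
# Sketch only – path, GUID and names are placeholders
$json = "C:\Agent\_work\SourceRootMapping\<collection-GUID>\7\SourceFolder.json"
(Get-Content $json) -replace '"7"', '"xxxBillSplit"' | Set-Content $json
```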

 

                 ————End of the Article————-

App-V Package Publishing in XenApp 7.8

Purpose –

The purpose of this blog post is to describe the steps required to publish a virtualized package, created using Microsoft App-V, in Citrix XenApp 7.8.

App-V Package Publish Steps for XenApp 7.8 –

Follow the below steps –

1. Launch “Citrix Studio”.

2. Go to the Configuration -> App-V Publishing node. Right-click “App-V Publishing” and click Add Package.

image

3. It will open a window to browse for the App-V package (.appv) file. Select the file with the “.appv” extension and click Open.

image

The below screen will appear…

image

On successful addition of the package, the below screen will appear:

image

4. Now go to the “Applications” node, right-click it, and select the “Add Applications” option as shown below –

image

5. The below screen will appear. Click Next.

image

6. Select the “Delivery Group” to which the virtualized application will be delivered.

image

7. Click the Add button and select the “App-V..” option as shown below –

 image

The below screen will be displayed. It shows the list of App-V packages we added before. Select the respective one and click OK. This screen will close and control will return to the previous screen. Click Next to reach the summary screen, then click Finish to add the application.

image

8. Once the application is added, right-click it and select Properties to add the users for its access control. The below screen will appear.

image

Add the required users as shown below –

image

The application is now published in XenApp 7.8 and is ready to use.

DevOps – Release Perspective

Hi Folks…

We hear the “DevOps” buzzword often these days, so I thought of writing down my understanding of it from a release perspective.

What is it? What impact can it have on us?

Let’s think of life without it. Let’s sit in a time machine and go back to the past. How did IT work during that time? A company has a business and needs IT to automate its activities. The business gives the requirements to IT. IT has many groups, like the development group, the QC/QA group, and the operations group that deals with production/infrastructure support.

Information flows from one group to another in a sequential manner. Once IT receives requirements from the business, it does some feasibility analysis and then assigns them to the development group. This assignment can be manual (via emails/Excel) or through some tool.

The development team starts working on the requirements. Their only goal is to implement what the business has asked for, and they have no idea about the actual production environment. The team uses tools for versioning their code, and may or may not use any tool to automate testing or builds.

In the absence of any tooling, they had to build the code and run all the tests manually. With long deadlines and an occasional release, this worked fine. The development team writes code, and at the end of coding does a full build and system testing. Finding what broke an existing feature is a costly thing. But all is well as long as they deliver the software to the business. The business finds the issues, contacts the development team again, and the same process goes on until acceptable software is delivered to operations.

Until this point there is hardly any talk between the development team and operations. Many times operations, with their understanding of production, have important recommendations, but since they come into the picture at the end, the development team does not accept their recommendations, and the release has to carry on. Operations might find it difficult, or even impossible, to put the software into production as is, because it was not designed in a way that allows it to be hosted in that particular environment. A good example is XenApp: since many users access the application from the same XenApp server, if the application writes some information to a common file, the last user will overwrite the information of the previous user. Such things need to be addressed in the development phase, but since the development team is not aware, the changes have to be made at the end, which calls for additional testing and hence additional cost.

This is just the tip of the iceberg. There can be lots of problems simply because of poor coordination between the development team and operations.

In the current dynamic business scenario, the business is changing rapidly and needs its changes reflected in production as soon as possible. If changes take time, you are out of business. The competition is huge. With the old traditional methods, we cannot continue. Many companies like Amazon and Google have demonstrated that they can release to production multiple times a day. This has raised management’s expectations many fold. How have these companies made it possible? If they can deliver, why can’t we? It has many perspectives, and we should all understand what they have done differently to make it happen. Do they have some magic? Not at all. Let’s discuss it.

The first step is to change the way we think: if they can do it, we can too.

The second step is to think about how to automate all, or most, of the manual activities. Many manual activities that we do often can be easily automated, like builds, testing, and deployments. We can have continuous integration and continuous delivery to keep things moving fast. How does it work? Let’s discuss it.

A developer has just finished the task he was working on and now wants to check in. He has two options. The first is to build manually on the development machine, do some initial unit testing, and then check in. The second is that, on check-in, the already written unit test cases run automatically and the build is done on an independent machine. This second option is called continuous integration; with it, developers get results in a few minutes and their build is also pristine. With the first option, developers have to run the unit tests themselves and may miss a few test cases, which can be costly at the time of the actual release.

In continuous delivery, the build output is continuously deployed into the target environment on every change, with a proper approval workflow in place.

The essence of continuous integration and delivery is maximum automation. To management, it looks like clicking a button. Enabling continuous integration and continuous delivery is itself time consuming, as it sometimes needs scripting/programming. Writing such scripts at the time of release takes extra effort and time, which might defeat the actual purpose of the automation. Then how do we create them fast?

Here comes the main point behind DevOps: DevOps = Dev + Ops. The development team and operations have to have excellent coordination and communication from the beginning of the software life cycle. This way, automation can be implemented while the development team is coding. As soon as the automation framework is created, the development team can start using it in their day-to-day activities, which refines it to the maximum; at the time of the actual release the framework is already in place, and a click can make the release a piece of cake. It will always deliver with great accuracy and predictability.

Implementing UNIX Grep functionality in PowerShell

The grep UNIX command allows you to find lines in files that contain key words or phrases. With it, you can quickly search a file or directory without having to open each file in a text editor or with the UNIX more command.

To achieve the same functionality in PowerShell, we can use the Select-String cmdlet as shown below:

dir -Recurse *.* | Select-String -Pattern "Clear" | Select -Unique path

This will list (with path) all the files that contain the string “Clear”.

The below command line will look for all the patterns listed in Pattern.txt and list the matching files with their paths.

dir -Recurse *.* | Select-String -Pattern (Get-Content .\Pattern.txt) | Select -Unique path
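If you also want grep-style per-line output (file, line number, and matching line), Select-String already carries that information on each match object:

```powershell
# Print matches in grep -n style: path:lineNumber:line
Get-ChildItem -Recurse -File | Select-String -Pattern "Clear" |
    ForEach-Object { "$($_.Path):$($_.LineNumber):$($_.Line)" }
```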

–End of Article–

PS Script for replacing strings in the File

If you have a requirement to replace a particular string in a file with some other string, it can be done using the below PowerShell statements.

(Get-Content .\test.txt | ForEach-Object {$_ -replace "CurrentUser","CurrentUser2"}) | Set-Content .\test.txt

Explanation –

1. The first step is to read the content of the file.

2. Pipe the output and read it line by line, replacing the required value. Note – “$_” refers to the current object; it is roughly equivalent to “this” in .NET.

3. Please note the “(” and “)” at the start of the first command and the end of the second command respectively. They are needed because we want the full output to be produced (piped) before the last statement sets the content.

4. Finally, set the updated content back to the same file.
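One caveat worth noting (the file name and paths here are only illustrative): the -replace operator treats its first argument as a regular expression, so literal text containing special characters should be escaped:

```powershell
# -replace is regex-based; escape literal text that contains special characters
(Get-Content .\config.txt) -replace [regex]::Escape('C:\Old\Path'), 'C:\New\Path' |
    Set-Content .\config.txt
```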