
Creating an email form with ASP.NET Core Razor Pages


In the comments of my last post, I was asked to write about how to create an email form using ASP.NET Core Razor Pages. The reader also asked for a tutorial about authentication and authorization. I'll write about that in one of the next posts. This post is just about creating a form and sending an email with the form values.

Creating a new project

To try this out, you need to have the latest preview of Visual Studio 2017 installed (I use 15.3.0 preview 3), and you need the .NET Core 2.0 preview installed (2.0.0-preview2-006497 in my case).

In Visual Studio 2017, use "File... New Project" to create a new project. Navigate to ".NET Core", choose the "ASP.NET Core Web Application (.NET Core)" project and pick a name and a location for that new project.

In the next dialog, you probably need to switch to ASP.NET Core 2.0 to see all the new available project types. (I will write about the other ones in the next posts.) Select "Web Application (Razor Pages)" and press "OK".

That's it. The new ASP.NET Core Razor Pages project is created.

Creating the form

It makes sense to use the Contact.cshtml page to add the new contact form. The Contact.cshtml.cs is the PageModel to work with. Inside this file, I added a small class called ContactFormModel. This class will contain the form values after the POST request is sent.

public class ContactFormModel
{
  [Required]
  public string Name { get; set; }
  [Required]
  public string LastName { get; set; }
  [Required]
  public string Email { get; set; }
  [Required]
  public string Message { get; set; }
}

To use this class, we need to add a property of this type to the ContactModel:

[BindProperty]
public ContactFormModel Contact { get; set; }

This attribute does some magic. It automatically binds the ContactFormModel to the view and contains the data after the POST request is sent back to the server. It is actually the MVC model binding, just provided in a different way. If we have the regular model binding, we should also have a ModelState. And we actually do:

public async Task<IActionResult> OnPostAsync()
{
  if (!ModelState.IsValid)
  {
    return Page();
  }

  // create and send the mail here

  return RedirectToPage("Index");
}

This is an async OnPost method, which looks pretty much the same as a controller action. It returns a Task of IActionResult, checks the ModelState and so on.

Let's create the HTML form for this code in the contact.cshtml. I use bootstrap (just because it's available) to format the form, so the HTML code contains some overhead:

<div class="row"><div class="col-md-12"><h3>Contact us</h3></div></div><form class="form form-horizontal" method="post"><div asp-validation-summary="All"></div><div class="row"><div class="col-md-12"><div class="form-group"><label asp-for="Contact.Name" class="col-md-3 right">Name:</label><div class="col-md-9"><input asp-for="Contact.Name" class="form-control" /><span asp-validation-for="Contact.Name"></span></div></div></div></div><div class="row"><div class="col-md-12"><div class="form-group"><label asp-for="Contact.LastName" class="col-md-3 right">Last name:</label><div class="col-md-9"><input asp-for="Contact.LastName" class="form-control" /><span asp-validation-for="Contact.LastName"></span></div></div></div></div><div class="row"><div class="col-md-12"><div class="form-group"><label asp-for="Contact.Email" class="col-md-3 right">Email:</label><div class="col-md-9"><input asp-for="Contact.Email" class="form-control" /><span asp-validation-for="Contact.Email"></span></div></div></div></div><div class="row"><div class="col-md-12"><div class="form-group"><label asp-for="Contact.Message" class="col-md-3 right">Your Message:</label><div class="col-md-9"><textarea asp-for="Contact.Message" rows="6" class="form-control"></textarea><span asp-validation-for="Contact.Message"></span></div></div></div></div><div class="row"><div class="col-md-12"><button type="submit">Send</button></div></div></form>

This also looks pretty much the same as in common ASP.NET Core MVC views. There's no difference.

BTW: I'm still impressed by the tag helpers. They even make writing and formatting such code snippets a lot easier.

Accessing the form data

As I wrote a few lines above, the model binding works for you. It fills the property Contact with data and makes it available in the OnPostAsync() method, if the BindProperty attribute is set.

[BindProperty]
public ContactFormModel Contact { get; set; }

Actually, I expected the model to be passed as an argument to OnPost, as I saw it the first time. But you are able to use the property directly, without anything else to do:

var mailbody = $@"Hello website owner,

This is a new contact request from your website:

Name: {Contact.Name}
LastName: {Contact.LastName}
Email: {Contact.Email}
Message: ""{Contact.Message}""


Cheers,
The website's contact form";

SendMail(mailbody);

That's nice, isn't it?

Sending the emails

Thanks to the pretty awesome .NET Standard 2.0 and the new APIs available in .NET Core 2.0, it gets even nicer:

// irony on

Finally, in .NET Core 2.0 it is now possible to send emails directly to an SMTP server using the famous and pretty well known System.Net.Mail.SmtpClient:

private void SendMail(string mailbody)
{
  using (var message = new MailMessage(Contact.Email, "me@mydomain.com"))
  {
    message.To.Add(new MailAddress("me@mydomain.com"));
    message.From = new MailAddress(Contact.Email);
    message.Subject = "New E-Mail from my website";
    message.Body = mailbody;

    using (var smtpClient = new SmtpClient("mail.mydomain.com"))
    {
      smtpClient.Send(message);
    }
  }
}

Isn't that cool?

// irony off

It definitely works, and that is actually a good thing.
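The original sample connects without credentials. In a real-world setup the SMTP server usually requires authentication, a specific port and TLS. A minimal sketch of such a configuration - host, port, user and password are placeholders, not values from this post:

// requires: using System.Net; using System.Net.Mail;
private void SendMail(string mailbody)
{
  using (var message = new MailMessage(Contact.Email, "me@mydomain.com"))
  {
    message.Subject = "New E-Mail from my website";
    message.Body = mailbody;

    // placeholder host, port and credentials
    using (var smtpClient = new SmtpClient("mail.mydomain.com", 587))
    {
      smtpClient.EnableSsl = true;
      smtpClient.Credentials = new NetworkCredential("smtpuser", "secret");
      smtpClient.Send(message);
    }
  }
}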

In previous .NET Core versions it was recommended to use an external mail delivery service like SendGrid. These kinds of services usually provide a REST based API, which can be used to communicate with that specific service. Some of them also provide various client libraries for the different platforms and languages that wrap those APIs and make them easier to use.

Anyway, I'm a huge fan of such services, because they are easier to use and I don't need to handle message details like encoding. I don't need to care about SMTP hosts and ports, because it is all HTTPS. I don't really need to care as much about spam handling, because this is done by the service. Using such services I just need to configure the sender mail address, maybe a domain, but the DNS settings are done by them.

SendGrid can be bought via the Azure Marketplace and includes a huge number of free emails to send. I would propose to use such services whenever possible. The SmtpClient is good in enterprise environments where you don't need to go through the internet to send mails. But maybe the Exchange API is another or even better option in enterprise environments.
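Just to illustrate the difference, a rough sketch using the SendGrid C# client library; the API key and the addresses are placeholders, and you should check the SendGrid documentation for the current API:

// requires the SendGrid NuGet package:
// using SendGrid; using SendGrid.Helpers.Mail;
var client = new SendGridClient("YOUR_SENDGRID_API_KEY"); // placeholder key
var message = MailHelper.CreateSingleEmail(
    new EmailAddress("noreply@mydomain.com"),    // from
    new EmailAddress("me@mydomain.com"),         // to
    "New E-Mail from my website",
    mailbody,                                    // plain text content
    null);                                       // no HTML content
var response = await client.SendEmailAsync(message);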

Conclusion

The email form is working, and there is actually not much code written by myself. That's awesome. For such scenarios Razor Pages are pretty cool and easy to use. There's no controller to set up, the views and the PageModels are pretty close together, and the code to generate one page is not distributed over three different folders as in MVC. To create bigger applications, MVC is for sure the best choice, but I really like the possibility to keep small apps as simple as possible.


Querying AD in SQL Server via LDAP provider


This is kinda off-topic, because it's not about ASP.NET Core, but I really like to share it. I recently needed to import some additional user data via a nightly run into a SQL Server database. The base user data came from a SAP database via a CSV bulk import, but not all of the data. E.g. the telephone numbers are mostly maintained by the users themselves in the AD. After the SAP import, we need to update the telephone numbers with the data from the AD.

The bulk import was done with a stored procedure and executed nightly by a SQL Server job. So it made sense to do the AD import with a stored procedure too. I wasn't really sure whether this works via the SQL Server.

My favorite programming languages are C# and JavaScript, and I'm not really a friend of T-SQL, but I tried it. I googled around a little bit and found a quick solution in T-SQL.

The trick is to map the AD via an LDAP provider as a linked server to the SQL Server. This can even be done via a dialog, but I never got it running like this, so I chose to use T-SQL instead:

USE [master]
GO 
EXEC master.dbo.sp_addlinkedserver @server = N'ADSI', @srvproduct=N'Active Directory Service Interfaces', @provider=N'ADSDSOObject', @datasrc=N'adsdatasource'
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'ADSI',@useself=N'False',@locallogin=NULL,@rmtuser=N'<DOMAIN>\<Username>',@rmtpassword='*******'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'collation compatible',  @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'data access', @optvalue=N'true'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'dist', @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'pub', @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'rpc', @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'rpc out', @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'sub', @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'connect timeout', @optvalue=N'0'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'collation name', @optvalue=null
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'lazy schema validation',  @optvalue=N'false'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'query timeout', @optvalue=N'0'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'use remote collation',  @optvalue=N'true'
GO 
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'remote proc transaction promotion', @optvalue=N'true'
GO

You can use this script to set up a new linked server to AD. Just set the right user and password in the second T-SQL statement. This user should have read access to the AD. A specific service account would make sense here. Don't save the script with the user credentials in it. Once the linked server is set up, you don't need this script anymore.

This setup was easy. The most painful part was to set up a working query.

SELECT * FROM OpenQuery ( 
  ADSI,  'SELECT cn, samaccountname, mail, mobile, telephonenumber, sn, givenname, co, company
  FROM ''LDAP://DC=company,DC=domain,DC=controller''
  WHERE objectClass = ''User'' and co = ''Switzerland''') AS tblADSI
  WHERE mail IS NOT NULL AND (telephonenumber IS NOT NULL OR mobile IS NOT NULL)
ORDER BY cn

Any error in the query to execute resulted in a generic error message, which told me that there was a problem building this query. Not really helpful.

It took me two hours to find the right LDAP connection string and some more hours to find the right properties for the query.

The other painful thing are the conditions, because the WHERE clause outside the OpenQuery couldn't be run inside the OpenQuery. Don't ask me why. My idea was to limit the result set completely with the query inside the OpenQuery, but I was only able to limit it to the objectClass "User" and to the country. Also, the AD needs to be maintained in a proper way: e.g. the field company didn't return the company (which should be the same in the entire company) but the company units.

BTW: the column order in the result set is completely the other way round from what is defined in the query.

Later, I could limit the result set to existing emails (to find out whether this is a real user) and existing telephone numbers.

The rest is easy: Wrap that query in a stored procedure, iterate through all of the users, find the related ones in the database (previously imported from SAP) and update the telephone numbers.

Current Activities


Wow! Some weeks without a new blog post. My plan was to write at least one post per week. But this doesn't always work out. Usually I write on the train, on my way to work in Basel (CH). Sometimes I write two or three posts in advance, to publish them later on. But I had some vacation, worked two weeks completely at home, and I had some other things to prepare and to manage. This is a short overview of the current activities:

INETA Germany

The last year was kinda busy, even from the developer community perspective. The leadership of INETA Germany was one part of it, but a pretty small one. We are currently planning some huge changes at INETA Germany. One of the changes is to open INETA Germany to Austrian and Swiss user groups and speakers. We already have some Austrian and Swiss speakers registered in the Speakers Bureau. Contact us via hallo@ineta-deutschland.de, if you are an Austrian or Swiss user group and want to become an INETA member.

Authoring articles for a magazine

The last twelve months were also busy, because almost every month I wrote an article for one of the most popular German-speaking developer magazines. All these articles were about ASP.NET Core and .NET Core. Here is a list of them:

And there will be some more in the next months.

Technical Review of a book

For the first time I'm doing a technical review of a book, about "ASP.NET Core 2.0 and Angular 4". I'm a little bit proud of being asked to do it. The author is one of the famous European authors, and this book is awesome and great for Angular and ASP.NET Core beginners. Unfortunately I cannot yet mention the author's name and link to the book title, but I definitely will when it is finished.

Talks

What else? Yes, I was talking, and I will talk about various topics.

In August I did a talk at the Zurich Azure meetup about how we (yooapps.com) created the digital presentation (web site, mobile apps) for one of the most successful soccer clubs in Switzerland on Microsoft Azure. It is also about how we maintain it and how we deploy it to Azure. I'll do the talk at the Basel .NET User Group today and in October at the Azure meetup Karlsruhe. I'd like to do this talk at a lot more user groups, because we are kinda proud of that project. Contact me, if you'd like to have this talk in your user group. (BTW: I'm in the Seattle (WA) area at the beginning of March 2018, so I could do this talk in Seattle or Portland too.)

Title: Soccer in the cloud – FC Basel on Azure

Abstract: Three years ago we released a completely new version of the digital presentation (website, mobile apps, live ticker, image database, videos, shop and extranet) of one of the most successful soccer clubs in Switzerland. All of the services of that digital presentation are designed to run on Microsoft Azure. Since then we have been continuously working on it, adding new features, integrating new services and improving performance, security and so on. This talk is about how the digital presentation was built, how it works and how we continuously deploy new features and improvements to Azure.

At the ADC 2017 in Cologne I talked about customizing ASP.NET Core (sources on GitHub) and I did a one-day workshop about developing a web application using ASP.NET Core (sources on GitHub).

Also in October, I'll do an introductory talk about .NET Core, .NET Standard and ASP.NET Core at the "Azure & .NET meetup Freiburg" (Germany).

Open Source

One project that should have been released a while ago is LightCore. Unfortunately I didn't find any time in the last weeks to work on it. Also Peter Bucher (who initially created that project and is currently working on it) was busy too. We currently have a critical bug, which needs to be solved first, before we are able to release. The ASP.NET Core integration also needs to be finished first.

Doing a lot of sports

For almost 18 months now, I have been doing a lot of running and biking. This also takes some time, but is a lot of addictive fun. I cannot live two days without being outside running or biking. I wouldn't have believed that, if you had told me about it two years before. It changed my life a lot. I attend official runs and bike races and, last but not least, I lost around 20 kilos in the last one and a half years.

Unit Testing an ASP.NET Core Application


ASP.NET Core 2.0 is out and it is great. Testing worked well in the previous versions, but in 2.0 it is much easier.

xUnit, Moq and FluentAssertions are working great with the new version of .NET Core and ASP.NET Core. Using these tools, unit testing is really fun. ASP.NET Core provides even more fun with testing. Testing controllers was never easier than in this version.

If you remember the old Web API and MVC versions, based on System.Web, you'll probably also remember what it took to write unit tests for the controllers.

In this post I'm going to show you how to unit test your controllers and how to write integration tests for your controllers.

Preparing the project to test:

To show you how this works, I created a new "ASP.NET Core Web Application":

Now I needed to select the Web API project. Be sure to select ".NET Core" and "ASP.NET Core 2.0":

To keep this post simple, I didn't select an authentication type.

There is nothing special in this project, except the new PersonsController, which uses a PersonService:

[Route("api/[controller]")]
public class PersonsController : Controller
{
    private IPersonService _personService;

    public PersonsController(IPersonService personService)
    {
        _personService = personService;
    }
    // GET api/values
    [HttpGet]
    public async Task<IActionResult> Get()
    {
        var models = _personService.GetAll();

        return Ok(models);
    }

    // GET api/values/5
    [HttpGet("{id}")]
    public async Task<IActionResult> Get(int id)
    {
        var model = _personService.Get(id);

        return Ok(model);
    }

    // POST api/values
    [HttpPost]
    public async Task<IActionResult> Post([FromBody]Person model)
    {
        if (!ModelState.IsValid)
        {
            return BadRequest(ModelState);
        }

        var person = _personService.Add(model);

        return CreatedAtAction("Get", new { id = person.Id }, person);
    }

    // PUT api/values/5
    [HttpPut("{id}")]
    public async Task<IActionResult> Put(int id, [FromBody]Person model)
    {
        if (!ModelState.IsValid)
        {
            return BadRequest(ModelState);
        }

        _personService.Update(id, model);

        return NoContent();
    }

    // DELETE api/values/5
    [HttpDelete("{id}")]
    public async Task<IActionResult> Delete(int id)
    {
        _personService.Delete(id);
        return NoContent();
    }
}

The Person class is created in a new folder "Models" and is a simple POCO:

public class Person
{
  public int Id { get; set; }
  [Required]
  public string FirstName { get; set; }
  [Required]
  public string LastName { get; set; }
  public string Title { get; set; }
  public int Age { get; set; }
  public string Address { get; set; }
  public string City { get; set; }
  [Required]
  [Phone]
  public string Phone { get; set; }
  [Required]
  [EmailAddress]
  public string Email { get; set; }
}

The PersonService uses GenFu to auto generate a list of Persons:

public class PersonService : IPersonService
{
    private List<Person> Persons { get; set; }

    public PersonService()
    {
        var i = 0;
        Persons = A.ListOf<Person>(50);
        Persons.ForEach(person =>
        {
            i++;
            person.Id = i;
        });
    }

    public IEnumerable<Person> GetAll()
    {
        return Persons;
    }

    public Person Get(int id)
    {
        return Persons.First(_ => _.Id == id);
    }

    public Person Add(Person person)
    {
        var newid = Persons.OrderBy(_ => _.Id).Last().Id + 1;
        person.Id = newid;

        Persons.Add(person);

        return person;
    }

    public void Update(int id, Person person)
    {
        var existing = Persons.First(_ => _.Id == id);
        existing.FirstName = person.FirstName;
        existing.LastName = person.LastName;
        existing.Address = person.Address;
        existing.Age = person.Age;
        existing.City = person.City;
        existing.Email = person.Email;
        existing.Phone = person.Phone;
        existing.Title = person.Title;
    }

    public void Delete(int id)
    {
        var existing = Persons.First(_ => _.Id == id);
        Persons.Remove(existing);
    }
}

public interface IPersonService
{
  IEnumerable<Person> GetAll();
  Person Get(int id);
  Person Add(Person person);
  void Update(int id, Person person);
  void Delete(int id);
}

This Service needs to be registered in the Startup.cs:

services.AddScoped<IPersonService, PersonService>();
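
If you are unsure where this line belongs: it goes into the ConfigureServices method. A minimal sketch of that method, assuming the default ASP.NET Core 2.0 Web API template:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // register the service, so it gets injected into the PersonsController
    services.AddScoped<IPersonService, PersonService>();
}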

If this is done, we can create the test project.

The unit test project

I always choose xUnit (or NUnit) over MSTest, but feel free to use MSTest. The testing framework you use doesn't really matter.

Inside that project, I created two test classes: PersonsControllerIntegrationTests and PersonsControllerUnitTests

We need to add some NuGet packages. Right-click the project in VS2017 to edit the project file and add the references manually, or use the NuGet Package Manager to add these packages:

<PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
<PackageReference Include="Microsoft.AspNetCore.TestHost" Version="2.0.0" />
<PackageReference Include="FluentAssertions" Version="4.19.2" />
<PackageReference Include="Moq" Version="4.7.63" />

The first package contains the dependencies for ASP.NET Core. I use the same package as in the project to test. The second package is used for the integration tests, to build a test host for the project to test. FluentAssertions provides a more elegant way to do assertions. And Moq is used to create fake objects.

Let's start with the unit tests:

Unit Testing the Controller

We start with a simple example, by testing the GET methods only:

public class PersonsControllerUnitTests
{
  [Fact]
  public async Task Values_Get_All()
  {
    // Arrange
    var controller = new PersonsController(new PersonService());

    // Act
    var result = await controller.Get();

    // Assert
    var okResult = result.Should().BeOfType<OkObjectResult>().Subject;
    var persons = okResult.Value.Should().BeAssignableTo<IEnumerable<Person>>().Subject;

    persons.Count().Should().Be(50);
  }

  [Fact]
  public async Task Values_Get_Specific()
  {
    // Arrange
    var controller = new PersonsController(new PersonService());

    // Act
    var result = await controller.Get(16);

    // Assert
    var okResult = result.Should().BeOfType<OkObjectResult>().Subject;
    var person = okResult.Value.Should().BeAssignableTo<Person>().Subject;
    person.Id.Should().Be(16);
  }
  [Fact]
  public async Task Persons_Add()
  {
    // Arrange
    var controller = new PersonsController(new PersonService());
    var newPerson = new Person
    {
      FirstName = "John",
      LastName = "Doe",
      Age = 50,
      Title = "FooBar",
      Email = "john.doe@foo.bar"
    };

    // Act
    var result = await controller.Post(newPerson);

    // Assert
    var okResult = result.Should().BeOfType<CreatedAtActionResult>().Subject;
    var person = okResult.Value.Should().BeAssignableTo<Person>().Subject;
    person.Id.Should().Be(51);
  }

  [Fact]
  public async Task Persons_Change()
  {
    // Arrange
    var service = new PersonService();
    var controller = new PersonsController(service);
    var newPerson = new Person
    {
      FirstName = "John",
      LastName = "Doe",
      Age = 50,
      Title = "FooBar",
      Email = "john.doe@foo.bar"
    };

    // Act
    var result = await controller.Put(20, newPerson);

    // Assert
    var okResult = result.Should().BeOfType<NoContentResult>().Subject;

    var person = service.Get(20);
    person.Id.Should().Be(20);
    person.FirstName.Should().Be("John");
    person.LastName.Should().Be("Doe");
    person.Age.Should().Be(50);
    person.Title.Should().Be("FooBar");
    person.Email.Should().Be("john.doe@foo.bar");
  }

  [Fact]
  public async Task Persons_Delete()
  {
    // Arrange
    var service = new PersonService();
    var controller = new PersonsController(service);

    // Act
    var result = await controller.Delete(20);

    // Assert
    var okResult = result.Should().BeOfType<NoContentResult>().Subject;

    // should throw an exception,
    // because the person with id == 20 doesn't exist anymore
    AssertionExtensions.ShouldThrow<InvalidOperationException>(
      () => service.Get(20));
  }
}

These snippets also show the benefits of FluentAssertions. I really like the readability of this fluent API.

BTW: Enabling live unit testing in Visual Studio 2017 is impressive:

With this unit test approach the invalid ModelState isn't tested. To get this tested, we need to test in a more integrated way. The ModelBinder also needs to be executed, to validate the input and to set the ModelState.
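
That said, if you just want to verify how an action behaves with an invalid ModelState, you can force an error by hand in a unit test. A small sketch; the error message is made up:

[Fact]
public async Task Persons_Add_Invalid_ModelState()
{
  // Arrange
  var controller = new PersonsController(new PersonService());
  // model binding doesn't run in a unit test, so we fake the validation error
  controller.ModelState.AddModelError("Email", "The Email field is required");

  // Act
  var result = await controller.Post(new Person { FirstName = "John" });

  // Assert
  result.Should().BeOfType<BadRequestObjectResult>();
}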

One last thing about unit tests: I mentioned Moq before, because I propose to isolate the code to test. This means I wouldn't use a real service when I test a controller. I also wouldn't use a real repository when I test a service. And so on... That's why you should work with fake services instead of real ones. Moq is a tool to create such fake objects and to set them up:

[Fact]
public async Task Persons_Get_From_Moq()
{
  // Arrange
  var serviceMock = new Mock<IPersonService>();
  serviceMock.Setup(x => x.GetAll()).Returns(() => new List<Person>
  {
    new Person{Id=1, FirstName="Foo", LastName="Bar"},
    new Person{Id=2, FirstName="John", LastName="Doe"},
    new Person{Id=3, FirstName="Juergen", LastName="Gutsch"},
  });
  var controller = new PersonsController(serviceMock.Object);

  // Act
  var result = await controller.Get();

  // Assert
  var okResult = result.Should().BeOfType<OkObjectResult>().Subject;
  var persons = okResult.Value.Should().BeAssignableTo<IEnumerable<Person>>().Subject;

  persons.Count().Should().Be(3);
}

To learn more about Moq, have a look into the repository: https://github.com/moq/moq4

Integration testing the Controller

Let's start with the simple cases, by testing the GET methods first. To test the integration of a web API or any other web based API, you need to have a web server running. Even in this case a web server is needed, but fortunately there is a TestServer which can be used. With this host you don't need to set up a separate web server on the test machine:

public class PersonsControllerIntegrationTests
{
  private readonly TestServer _server;
  private readonly HttpClient _client;

  public PersonsControllerIntegrationTests()
  {
    // Arrange
    _server = new TestServer(new WebHostBuilder()
                             .UseStartup<Startup>());
    _client = _server.CreateClient();
  }
  // ... 
}

First, the TestServer is set up. This guy gets the WebHostBuilder - known from every ASP.NET Core 2.0 application - and uses the Startup of our project to test. Second, we are able to create an HttpClient out of that server. We'll use this HttpClient in the tests to send requests to the server and to receive the responses.

[Fact]
public async Task Persons_Get_All()
{
  // Act
  var response = await _client.GetAsync("/api/Persons");
  response.EnsureSuccessStatusCode();
  var responseString = await response.Content.ReadAsStringAsync();

  // Assert
  var persons = JsonConvert.DeserializeObject<IEnumerable<Person>>(responseString);
  persons.Count().Should().Be(50);
}

[Fact]
public async Task Persons_Get_Specific()
{
  // Act
  var response = await _client.GetAsync("/api/Persons/16");
  response.EnsureSuccessStatusCode();
  var responseString = await response.Content.ReadAsStringAsync();

  // Assert
  var person = JsonConvert.DeserializeObject<Person>(responseString);
  person.Id.Should().Be(16);
}

Now let's test the POST request:

[Fact]
public async Task Persons_Post_Specific()
{
  // Arrange
  var personToAdd = new Person
  {
    FirstName = "John",
    LastName = "Doe",
    Age = 50,
    Title = "FooBar",
    Phone = "001 123 1234567",
    Email = "john.doe@foo.bar"
  };
  var content = JsonConvert.SerializeObject(personToAdd);
  var stringContent = new StringContent(content, Encoding.UTF8, "application/json");

  // Act
  var response = await _client.PostAsync("/api/Persons", stringContent);

  // Assert
  response.EnsureSuccessStatusCode();
  var responseString = await response.Content.ReadAsStringAsync();
  var person = JsonConvert.DeserializeObject<Person>(responseString);
  person.Id.Should().Be(51);
}

We need to prepare a little bit more here. We create a StringContent object, which derives from HttpContent. It will contain the Person we want to add as a JSON string. This gets sent via POST to the TestServer.

To test the invalid ModelState, just remove a required field or pass a wrongly formatted email address or telephone number and test against it. In this sample, I test against three missing required fields:

[Fact]
public async Task Persons_Post_Specific_Invalid()
{
  // Arrange
  var personToAdd = new Person { FirstName = "John" };
  var content = JsonConvert.SerializeObject(personToAdd);
  var stringContent = new StringContent(content, Encoding.UTF8, "application/json");

  // Act
  var response = await _client.PostAsync("/api/Persons", stringContent);

  // Assert
  response.StatusCode.Should().Be(System.Net.HttpStatusCode.BadRequest);
  var responseString = await response.Content.ReadAsStringAsync();
  responseString.Should().Contain("The Email field is required")
    .And.Contain("The LastName field is required")
    .And.Contain("The Phone field is required");
}

This is almost the same pattern for the PUT and the DELETE requests:

[Fact]
public async Task Persons_Put_Specific()
{
  // Arrange
  var personToChange = new Person
  {
    Id = 16,
    FirstName = "John",
    LastName = "Doe",
    Age = 50,
    Title = "FooBar",
    Phone = "001 123 1234567",
    Email = "john.doe@foo.bar"
  };
  var content = JsonConvert.SerializeObject(personToChange);
  var stringContent = new StringContent(content, Encoding.UTF8, "application/json");

  // Act
  var response = await _client.PutAsync("/api/Persons/16", stringContent);

  // Assert
  response.EnsureSuccessStatusCode();
  var responseString = await response.Content.ReadAsStringAsync();
  responseString.Should().Be(String.Empty);
}

[Fact]
public async Task Persons_Put_Specific_Invalid()
{
  // Arrange
  var personToChange = new Person { FirstName = "John" };
  var content = JsonConvert.SerializeObject(personToChange);
  var stringContent = new StringContent(content, Encoding.UTF8, "application/json");

  // Act
  var response = await _client.PutAsync("/api/Persons/16", stringContent);

  // Assert
  response.StatusCode.Should().Be(System.Net.HttpStatusCode.BadRequest);
  var responseString = await response.Content.ReadAsStringAsync();
  responseString.Should().Contain("The Email field is required")
    .And.Contain("The LastName field is required")
    .And.Contain("The Phone field is required");
}

[Fact]
public async Task Persons_Delete_Specific()
{
  // Arrange

  // Act
  var response = await _client.DeleteAsync("/api/Persons/16");

  // Assert
  response.EnsureSuccessStatusCode();
  var responseString = await response.Content.ReadAsStringAsync();
  responseString.Should().Be(String.Empty);
}

Conclusion

That's it.

Thanks to the TestServer! With it, it is really easy to write integration tests for the controllers. Sure, it is still a little more effort than the much simpler unit tests, but you don't have external dependencies anymore. No external web server to manage. No external web server that is out of control.

Try it out and tell me about your opinion :)

.NET Core 2.0 and ASP.NET 2.0 Core are here and ready to use


Recently I did an overview talk about .NET Core, .NET Standard and ASP.NET Core at the Azure Meetup Freiburg. I told them about .NET Core 2.0, showed the dotnet CLI and the integration in Visual Studio, explained the sense of .NET Standard and why developers should care about it. I also showed them ASP.NET Core, how it works, how to host it, and explained the main differences to the ASP.NET 4.x versions.

BTW: This Meetup was really great. Well organized at a pretty nice and modern location. It was really fun to talk there. Thanks to Christian, Patrick and Nadine for organizing this event :-)

After that talk they asked me some pretty interesting and important questions:

Question 1: "Should we start using ASP.NET Core and .NET Core?"

My answer is a pretty clear YES.

  • Use .NET Standard for your libraries, if you don't have dependencies on platform specific APIs (e.g. registry, drivers, etc.), even if you don't need to be x-plat. Why? Because it just works and you'll keep a door open to share your library with other platforms later on. Since .NET Standard 2.0 you are not really limited; you are able to do almost everything with C# that you can do with the full .NET Framework.
  • Use ASP.NET Core for new web projects, if you don't need to do Web Forms. Because it is fast, lightweight and x-plat. Thanks to .NET standard you are able to reuse your older .NET Framework libraries, if you need to.
  • Use ASP.NET Core to use the new modern MVC framework with the tag helpers or the new lightweight razor pages
  • Use ASP.NET Core to host your application on various cloud providers. Not only on Azure, but also on Amazon and Google:
    • http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/dotnet-core-tutorial.html
    • https://aws.amazon.com/blogs/developer/running-serverless-asp-net-core-web-apis-with-amazon-lambda/
    • https://codelabs.developers.google.com/codelabs/cloud-app-engine-aspnetcore/#0
    • https://codelabs.developers.google.com/codelabs/cloud-aspnetcore-cloudshell/#0
  • Use ASP.NET Core to write lightweight and fast Web API services running either self hosted, in Docker or on Linux, Mac or Windows
  • Use ASP.NET Core to create lightweight back-ends for Angular or React based SPA applications.
  • Use .NET Core to write tools for different platforms

As a library developer, there is almost no reason not to use the .NET Standard. Since .NET Standard 2.0 the full API of the .NET Framework is available and can be used to write libraries for .NET Core, Xamarin, UWP and the full .NET Framework. It also supports referencing full .NET Framework assemblies.

The .NET Standard is an API specification that needs to be implemented by the platform specific frameworks. The .NET Framework 4.6.2, .NET Core 2.0 and Xamarin are implementing the .NET Standard 2.0, which means they all use the same API (namespace names, class names, method names). Libraries written against the .NET Standard 2.0 API will run on the .NET Framework 4.6.2 and .NET Core 2.0, as well as on Xamarin and on every other platform specific framework that supports that API.
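
Targeting the .NET Standard from a library project is a one-liner in the project file. A minimal sketch of such a csproj, assuming the SDK-style project format:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>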

Question 2: Do we need to migrate our existing web applications to ASP.NET Core?

My answer is: NO. You don't need to, and I would propose not to do it if there's no good reason.

There are a lot of blog posts out there about migrating web applications to ASP.NET Core, but you don't need to migrate if you don't face any problems with your existing application. There are just a few reasons to migrate:

  • You want to go x-plat to host on Linux
  • You want to host on small devices
  • You want to host in linux based Docker containers
  • You want to use a faster framework
    • A faster framework is useless, if your code or your dependencies are slow ;-)
  • You want to use a modern framework
    • Note: ASP.NET 4.x is not outdated, still supported and still gets new features
  • You want to run your web on a Microsoft Nano Server

Depending on the level of customization you did in your existing application, the migration could be a lot of effort. Someone needs to pay for that effort. That's why I would propose not to migrate to ASP.NET Core, if you don't have any problems or a real need to do it.

Conclusion

I would use ASP.NET Core for every new web project and .NET Standard for every library I need to write, because both are almost mature and really usable since the 2.0 versions. You can do almost all the stuff you can do with the full .NET Framework.

BTW: Rick Strahl also just wrote an article about that. Please read it. It's great, as almost all of his posts are: https://weblog.west-wind.com/posts/2017/Oct/22/NET-Core-20-and-ASPNET-20-Core-are-finally-here

BTW: The slides of that talk are on SlideShare. If you want me to do that talk at your meetup or user group, just ping me on Twitter or drop me an email.

GraphiQL for ASP.NET Core


One nice thing about blogging is the feedback from the readers. I got some nice kudos, but also great new ideas. One idea was born out of a question about a "graphi" UI for the GraphQL middleware I wrote some months ago. I had never heard about "graphi", which actually is "GraphiQL", a generic HTML UI over a GraphQL endpoint. It seemed to be something like Swagger UI, but just for GraphQL. That sounded nice, so I did some research on it.

What is GraphiQL?

Actually, it is absolutely not the same as Swagger and not as detailed as Swagger, but it provides a simple and easy to use UI to play around with your GraphQL end-point. So you cannot really compare them.

GraphiQL is a React component provided by the GraphQL creators, that can be used in your project. It basically provides an input area to write some GraphQL queries and a button to send that query to your GraphQL end-point. You'll then see the result or the error on the right side of the UI.

Additionally it provides some more nice features:

  • A history of sent queries, which appears on the left side, if you press the history button, to reuse previously used queries.
  • It rewrites the URL to support linking to a specific query. It stores the query and the variables in the URL, to send it to someone else or to bookmark the query to test.
  • It actually creates documentation out of the GraphQL end-point. By clicking the "Docs" link it opens documentation about the types used in this API. This is really magic, because it shows the documentation of a type I never requested:

Implementing GraphiQL

The first idea was to write something like this on my own. But it should be the same as the existing GraphiQL UI. So why not use the existing implementation? Thanks to Steve Sanderson, we have the NodeServices for ASP.NET Core. Why not run the existing GraphiQL implementation in a middleware using the NodeServices?

I tried it with the "apollo-server-module-graphiql" package. I wrote this small JavaScript to render the GraphiQL UI and return it back to C# via the NodeServices:

var graphiql = require('apollo-server-module-graphiql');

module.exports = function (callback, options) {
    var data = {
        endpointURL: options.graphQlEndpoint
    };

    var result = graphiql.renderGraphiQL(data);
    callback(null, result);
};

The usage of that script inside the Middleware looks like this:

var file = _env.WebRootFileProvider.GetFileInfo("graphiql.js");
var result = await _nodeServices.InvokeAsync<string>(file.PhysicalPath, _options);
// write the rendered UI to the response
await httpContext.Response.WriteAsync(result);

That works great, but has one problem: it wraps the GraphQL query in a JSON object that is posted to the GraphQL end-point. I would need to change the GraphQlMiddleware implementation because of that. The current implementation expects the plain GraphQL query in the POST body.
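
For context, the wrapped body GraphiQL sends follows the common GraphQL-over-HTTP convention and looks roughly like this (a sketch, not captured from this project):

{
  "query": "query { books { title } }",
  "variables": null,
  "operationName": null
}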

What is the most useful approach? Wrapping the GraphQL query in a JSON object or sending the plain query? Any Ideas? What do you use? Please tell me by dropping a comment.

With this approach I'm pretty much dependent on the Apollo developers and would need to change my implementation whenever they change theirs.

This is why I decided to use the same concept of generating the UI as the "apollo-server-module-graphiql" package but implemented in C#. This unfortunately doesn't need the NodeServices anymore.

I use exactly the same generated code as this Node module, but changed the way the query is sent to the server. Now the plain query will be sent to the server.

I started playing around with this and added it to the existing project, mentioned here: GraphQL end-point Middleware for ASP.NET Core.

Using the GraphiqlMiddleware

The result is as easy to use as the GraphQlMiddleware. Let's see how it looks to add the Middlewares:

if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
	// adding the GraphiQL UI
    app.UseGraphiql(options =>
    {
        options.GraphiqlPath = "/graphiql"; // default
        options.GraphQlEndpoint = "/graph"; // default
    });
}
// adding the GraphQL end point
app.UseGraphQl(options =>
{
    options.GraphApiUrl = "/graph"; // default
    options.RootGraphType = new BooksQuery(bookRepository);
    options.FormatOutput = true; // default: false
});

As you can see, the second middleware is bound to the first one by using the same path "/graph". I didn't create any hidden dependency between the two middlewares, to make it easy to use them in various combinations. Maybe you want to use the GraphiQL UI only in the Development or Staging environment, as shown in this example.

Now start the web using Visual Studio (press [F5]). The web starts with the default view or API. Add "graphiql" to the URL in the browser's address bar and see what happens. You should see a generated UI for your GraphQL endpoint, where you can now start playing around with your API, testing and debugging it with your current data. (See the screenshots on top.)

I'll create a separate NuGet package for the GraphiqlMiddleware. This will not have the GraphQlMiddleware as a dependency and could be used completely separately.

Conclusion

This was a lot easier to implement than expected. Currently there is still some refactoring needed:

  • I don't like having the HTML and JavaScript code in the C#. I'd like to load that from an embedded resource file, which actually is an HTML file (see the sketch after this list).
  • I should add some more configuration options, e.g. to change the theme, equal to the original Node implementation, to preload queries and results, etc.
  • Find a way to use it offline as well. Currently a connection to the internet is needed to load the CSS and JavaScripts from the CDNs.
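
The first item of that list could be solved with an embedded resource. A small sketch of that idea; the resource name is an assumption, and the HTML file would need an EmbeddedResource entry in the csproj:

// requires: using System.IO; using System.Reflection;
var assembly = typeof(GraphiqlMiddleware).Assembly;
// hypothetical resource name
using (var stream = assembly.GetManifestResourceStream("GraphiQl.graphiql.html"))
using (var reader = new StreamReader(stream))
{
    var html = await reader.ReadToEndAsync();
}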

You wanna try it? Download, clone or fork the sources on GitHub.

What do you think about that? Could this be useful to you? Please leave a comment and tell me about your opinion.

Update [10/26/2017 21:03]

GraphiQL is much more powerful than expected. I was wondering how GraphiQL creates the IntelliSense support in the editor and how it creates the documentation. I had a deeper look into the traffic and found two more cool things about it:

First: GraphiQL sends a special query to the GraphQL end-point to request the GraphQL specific documentation. In this case it looks like this:

  query IntrospectionQuery {
    __schema {
      queryType { name }
      mutationType { name }
      subscriptionType { name }
      types {
        ...FullType
      }
      directives {
        name
        description
        locations
        args {
          ...InputValue
        }
      }
    }
  }

  fragment FullType on __Type {
    kind
    name
    description
    fields(includeDeprecated: true) {
      name
      description
      args {
        ...InputValue
      }
      type {
        ...TypeRef
      }
      isDeprecated
      deprecationReason
    }
    inputFields {
      ...InputValue
    }
    interfaces {
      ...TypeRef
    }
    enumValues(includeDeprecated: true) {
      name
      description
      isDeprecated
      deprecationReason
    }
    possibleTypes {
      ...TypeRef
    }
  }

  fragment InputValue on __InputValue {
    name
    description
    type { ...TypeRef }
    defaultValue
  }

  fragment TypeRef on __Type {
    kind
    name
    ofType {
      kind
      name
      ofType {
        kind
        name
        ofType {
          kind
          name
          ofType {
            kind
            name
            ofType {
              kind
              name
              ofType {
                kind
                name
                ofType {
                  kind
                  name
                }
              }
            }
          }
        }
      }
    }
  }

Try this query and send it to your GraphQL API using Postman or a similar tool and see what happens :)
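
If you prefer code over Postman, here is a quick sketch in C#; the endpoint URL is an assumption and, as described above, my middleware expects the plain query in the POST body:

// requires: using System; using System.Net.Http;
// inside an async method; introspectionQuery holds the query shown above
var introspectionQuery = "query IntrospectionQuery { __schema { queryType { name } } }"; // shortened
using (var client = new HttpClient())
{
    var response = await client.PostAsync(
        "http://localhost:5000/graph",           // hypothetical endpoint
        new StringContent(introspectionQuery));  // the plain query in the body
    Console.WriteLine(await response.Content.ReadAsStringAsync());
}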

Second: GraphQL for .NET knows how to answer that query and sends the full documentation about my data structure to the client, like this:

{"data": {"__schema": {"queryType": {"name": "BooksQuery"
            },"mutationType": null,"subscriptionType": null,"types": [
                {"kind": "SCALAR","name": "String","description": null,"fields": null,"inputFields": null,"interfaces": null,"enumValues": null,"possibleTypes": null
                },
                {"kind": "SCALAR","name": "Boolean","description": null,"fields": null,"inputFields": null,"interfaces": null,"enumValues": null,"possibleTypes": null
                },
                {"kind": "SCALAR","name": "Float","description": null,"fields": null,"inputFields": null,"interfaces": null,"enumValues": null,"possibleTypes": null
                },
                {"kind": "SCALAR","name": "Int","description": null,"fields": null,"inputFields": null,"interfaces": null,"enumValues": null,"possibleTypes": null
                },
                {"kind": "SCALAR","name": "ID","description": null,"fields": null,"inputFields": null,"interfaces": null,"enumValues": null,"possibleTypes": null
                },
                {"kind": "SCALAR","name": "Date","description": "The `Date` scalar type represents a timestamp provided in UTC. `Date` expects timestamps to be formatted in accordance with the [ISO-8601](https://en.wikipedia.org/wiki/ISO_8601) standard.","fields": null,"inputFields": null,"interfaces": null,"enumValues": null,"possibleTypes": null
                },
                {"kind": "SCALAR","name": "Decimal","description": null,"fields": null,"inputFields": null,"interfaces": null,"enumValues": null,"possibleTypes": null
                },
              	[ . . . ]
                // many more documentation from the server
        }
    }
}

This is really awesome. With GraphiQL I got a lot more stuff than expected. And it didn't take more than 5 hours to implement this middleware.

NuGet, Cache and some more problems


Recently I had some problems using NuGet. Two of them were huge and took me a while to solve. But all of them are easy to fix, if you know how to do it.

NuGet Cache

The first and most critical problem was related to the NuGet cache in .NET Core projects. It seems the underlying problem was a broken package in the cache. I didn't find out the real reason. Anyway, every time I tried to restore or add packages, I got an error message that told me about an error at the first character in the project.assets.json. Yes, there is still a kind of project.json even in .NET Core 2.0 projects. This file is in the "obj" folder of a .NET Core project and stores all information about the NuGet packages.

This error looked like a typical encoding error. This often happens if you try to read an ANSI encoded file as a UTF-8 encoded file, or vice versa. But the project.assets.json was absolutely fine. It seemed to be a problem with one of the packages. It worked with the predefined .NET Core or ASP.NET Core packages, but it didn't with any others. I was no longer able to work on any project that targets .NET Core, but it worked with projects targeting the full .NET Framework.

I couldn't solve the real problem and I didn't really want to go through all of the packages to find the broken one. But the .NET CLI provides a nice tool to manage the NuGet cache. It provides a more detailed CLI for NuGet.

dotnet nuget --help

This shows you three different commands to work with NuGet. delete and push work against the remote server, to delete a package from a server or to push a new package to the server using the NuGet API. The third one is a command to work with local resources:

dotnet nuget locals --help

This command shows you the help about the locals command. Try the next one to get a list of local NuGet resources:

dotnet nuget locals all --list

You can now use the clear option to clear all caches:

dotnet nuget locals all --clear

Or a specific one by naming it:

dotnet nuget locals http-cache --clear

This is much easier than searching for all the different cache locations and deleting them manually.

This solved my problem. The broken package was gone from all the caches and I was able to load the new, clean and healthy ones from NuGet.

Versions numbers in packages folders

The second huge problem is not related to .NET Core, but to classic .NET Framework projects using NuGet. If you also use Git-Flow to manage your source code, you'll have at least two different main branches: Master and Develop. Both branches contain different versions. Master contains the current version code and Develop contains the next version code. It is also possible that both versions use different versions of dependent NuGet packages. And here is the problem:

Master uses e.g. AwesomePackage 1.2.0 and Develop uses AwesomePackage 1.3.0-beta-build54321.

Both versions of the code reference the AwesomeLib.dll, but in different locations:

  • Master: /packages/awesomepackage 1.2.0/lib/net4.6/AwesomeLib.dll
  • Develop: /packages/awesomepackage 1.3.0-beta-build54321/lib/net4.6/AwesomeLib.dll

If you now release Develop to Master, you'll definitely forget to go through all the projects to change the reference paths, won't you? The build of Master will fail, because this specific beta folder won't exist on the server. Or even worse: the build will not fail, because the folder of the old package still exists on the build server, because you didn't clear the build workspace. This will result in runtime errors. This problem is even more likely to happen if you provide your own packages using your own NuGet server.

I solved this by using a different NuGet client than NuGet itself. I use Paket, because it doesn't store the binaries in version specific folders, and the reference path stays the same as long as the package name doesn't change. Using Paket I don't need to take care of reference paths, and every branch loads the dependencies from the same location.
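
A minimal paket.dependencies file for the example above could look like this; AwesomePackage is the made-up package from this post:

source https://api.nuget.org/v3/index.json

nuget AwesomePackage ~> 1.2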

Paket officially supports the NuGet APIs and is mentioned on NuGet.org, in the package details.

To learn more about Paket visit the official documentation: https://fsprojects.github.io/Paket/

Conclusion

Being an agile developer doesn't only mean to follow an iterative process. It also means to use the best tools you can buy. But you don't always need to buy the best tools. Many of them are open source and free to use. Just help them by donating some bucks, spread the word, file some issues or contribute in a way to improve the tool. Paket is one of those tools: lightweight, fast, easy to use and it solves many problems. It is also well supported in CAKE, which is the build DSL I use to build, test and deploy applications.

Trying BitBucket Pipelines with ASP.NET Core


BitBucket provides a continuous integration tool called Pipelines. This is based on Docker containers which are running on a Linux based Docker machine. In this post I wanna try to use BitBucket Pipelines with an ASP.NET Core application.

In the past I preferred BitBucket over GitHub, because I used Mercurial more than Git. But that changed five years ago. Since then I use GitHub for almost every new personal project that doesn't need to be a private project. But at YooApps we use the entire Atlassian ALM stack, including Jira, Confluence and BitBucket. (We don't use Bamboo yet, because we also use Azure a lot and we didn't get Bamboo running on Azure.) BitBucket is a good choice if you use the other Atlassian tools anyway, because the integration with Jira and Confluence is awesome.

For a while now, Atlassian has provided Pipelines as a simple continuous integration tool directly on BitBucket. You don't need to set up Bamboo to build and test just a simple application. At YooApps we actually use Pipelines in various projects which are not using .NET. For .NET projects we are currently using CAKE or FAKE on Jenkins, hosted on an Azure VM.

Pipelines can also be used to build and test branches and pull requests, which is awesome. So why shouldn't we use Pipelines for .NET Core based projects? BitBucket actually provides an already prepared Pipelines configuration for .NET Core related projects, using the microsoft/dotnet Docker image. So let's try Pipelines.

The project to build

As usual, I just set up a simple ASP.NET Core project and added an xUnit test project to it. In this case I use the same project as shown in the Unit testing ASP.NET Core post. I imported that project from GitHub to BitBucket. If you also wanna try Pipelines, feel free to use the same approach, or just download my solution and commit it into your repository on BitBucket. Once the sources are in the repository, you can start to set up Pipelines.

Setup Pipelines

Setting up Pipelines actually is pretty easy. In your repository on BitBucket.com there is a menu item called Pipelines. After pressing it you'll see the setup page, where you are able to select a technology specific configuration. .NET Core is not the first choice for BitBucket, because the .NET Core configuration is placed under "More". It is available anyway, which is really nice. After selecting the configuration type, you'll see the configuration in an editor inside the browser. It is actually a YAML configuration, called bitbucket-pipelines.yml, which is pretty easy to read. This configuration is prepared to use the microsoft/dotnet:onbuild Docker image and it already has the most common .NET CLI commands prepared that will be used with ASP.NET Core projects. You just need to configure the project names for the build and test commands.

The completed configuration for my current project looks like this:

# This is a sample build configuration for .NET Core.
# Check our guides at https://confluence.atlassian.com/x/5Q4SMw for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: microsoft/dotnet:onbuild

pipelines:
  default:
    - step:
        caches:
          - dotnetcore
        script: # Modify the commands below to build your repository.
          - export PROJECT_NAME=WebApiDemo
          - export TEST_NAME=WebApiDemo.Tests
          - dotnet restore
          - dotnet build $PROJECT_NAME
          - dotnet test $TEST_NAME

If you don't have tests yet, comment the last line out by adding a #-sign in front of that line.

After pressing "Commit file", this configuration file gets stored in the root of your repository, which makes it available for all the developers of that project.

Let's try it

After that config was saved, the build started immediately... and failed!

Why? Because that Docker image was pretty much outdated. It contains an older version with an SDK that still uses the project.json for .NET Core projects.

Changing the name of the Docker image from microsoft/dotnet:onbuild to microsoft/dotnet:sdk helps. You now need to change the bitbucket-pipelines.yml in your local Git workspace or using the editor on BitBucket directly. After committing the changes, the build again starts immediately, and is green now.

Even the tests passed. As expected, I got a pretty detailed output about every step configured in the "script" node of the bitbucket-pipelines.yml.

You don't need to know how to configure Docker using the pipelines. This is awesome.

Let's try the PR build

To create a PR, I need to create a feature branch first. I created it locally using the name "feature/build-test" and pushed that branch to the origin. You now can see that this branch got built by Pipelines:

Now let's create the PR using the BitBucket web UI. It automatically assigns my latest feature branch and the main branch, which is develop in my case:

Here we see that both branches are successfully built and tested previously. After pressing save we see the build state in the PRs overview:

This is actually not a specific build for that PR, but the build of the feature branch. So in this case, it doesn't really build the PR. (Maybe it does, if the PR comes from a fork and the branch wasn't tested previously. I didn't test that yet.)

After merging that PR back to develop (in this case), we see that the merge commit was successfully built too:

We have four builds done here: The failing one, the one 11 hours ago and two builds 52 minutes ago in two different branches.

The Continuous Deployment pipeline

With this, it would be safe to trigger a direct deployment on every successful build of the main branches. As you may know, it is super simple to deploy a web application to an Azure web app by connecting it directly to any Git repository. Usually this is pretty dangerous if you don't build and test before you deploy the code. But in this case, we are sure the PRs and the branches build and test successfully.

We just need to ensure that the deployment is only triggered if the build succeeds. Does this work with Pipelines? I'm pretty curious. Let's try it.

To do that, I created a new Web App on Azure and connected it to the Git repository on BitBucket. I'll now add a failing test and commit it to the Git repository. What should happen now is that the build starts before the code gets pushed to Azure, and the failing build should prevent the push to Azure.

I'm skeptical whether this is working or not. We will see.

The Azure Web App is created and running on http://build-with-bitbucket-pipelines.azurewebsites.net/. The deployment is configured to listen on the develop branch. That means, every time we push changes to that branch, the deployment to Azure will start.

I'll now create a new feature branch called "feature/failing-test" and push it to BitBucket. To keep the test simple, I don't follow the same steps as described in the previous section about the PRs. I merge the feature branch directly, without a PR, into develop and push all the changes to BitBucket. Yes, I'm a rebel... ;-)

The build starts immediately and fails as expected:

But what about the deployment? Let's have a look at the deployments on Azure. We should only see the initial successful deployment. Unfortunately there is another successful deployment with the same commit message as the failing build on BitBucket:

This is bad. We now have an unstable application running on Azure. Unfortunately, there is no option on BitBucket to trigger the webhook only on a successful build. We are able to trigger the hook on a build state change, but it is not possible to define which state should trigger the hook.

Too bad, this doesn't seem to be the right way to configure a continuous deployment pipeline as easily as the continuous integration process. Sure, there are other, but more complex, ways to do that.

Update 12/8/2017

There is a simple option to set up a deployment after a successful build anyway. This can be done by triggering the Azure webhook inside the Pipelines. A sample bash script to do that can be found here: https://bitbucket.org/mojall/bitbucket-pipelines-deploy-to-azure/ Without the comments, it looks like this:

curl -X POST "https://\$$SITE_NAME:$FTP_PASSWORD@$SITE_NAME.scm.azurewebsites.net/deploy" \
  --header "Content-Type: application/json" \
  --header "Accept: application/json" \
  --header "X-SITE-DEPLOYMENT-ID: $SITE_NAME" \
  --header "Transfer-encoding: chunked" \
  --data "{\"format\":\"basic\", \"url\":\"https://$BITBUCKET_USERNAME:$BITBUCKET_PASSWORD@bitbucket.org/$BITBUCKET_USERNAME/$REPOSITORY_NAME.git\"}"

echo Finished uploading files to site $SITE_NAME.

I now need to set the environment variables in the Pipelines configuration:

Be sure to check the "Secured" checkbox for every password variable, to hide the password in this UI and in the log output of Pipelines.

And we need to add two script commands to the bitbucket-pipelines.yml:

- chmod +x ./deploy-to-azure.bash
- ./deploy-to-azure.bash
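For reference, the completed bitbucket-pipelines.yml would then look roughly like this (assuming the deploy-to-azure.bash script is stored in the root of the repository):

image: microsoft/dotnet:sdk

pipelines:
  default:
    - step:
        caches:
          - dotnetcore
        script:
          - export PROJECT_NAME=WebApiDemo
          - export TEST_NAME=WebApiDemo.Tests
          - dotnet restore
          - dotnet build $PROJECT_NAME
          - dotnet test $TEST_NAME
          # the deployment only runs if all the commands above succeeded
          - chmod +x ./deploy-to-azure.bash
          - ./deploy-to-azure.bash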

The last step is to remove the Azure webhook from the webhook configuration in BitBucket and to remove the failing test. After pushing the changes to BitBucket, the build and the first successful deployment start immediately.

I then added the failing test again to test the failing deployment, and it worked as expected: the test fails and the subsequent commands don't get executed. The webhook is never triggered and the unstable app doesn't get deployed.

Now there is a failing build on Pipelines:

(See the commit messages)

And that failing commit is not deployed to Azure:

The continuous deployment is successfully set up.

Conclusion

Isn't it super easy to set up continuous integration? ~~Unfortunately we are not able to complete the deployment using this.~~ But anyway, we now have a build on every branch and on every pull request. That helps a lot.

Pros:

  • (+++) super easy to setup
  • (++) almost fully integrated
  • (+++) flexibility based on Docker

Cons:

  • (--) runs only on Linux. I would love to see Windows containers working
  • (---) not fully integrated with webhooks. A "trigger on successful build state" is missing for the hooks

I would like to have something like this on GitHub too. The usage is similar to AppVeyor, but much simpler to configure, less complex, and it just works. The reason is Docker, I think. For sure, AppVeyor can do a lot more and can't really be compared to Pipelines. Anyway, I will compare it to AppVeyor in one of the next posts.

Currently there is a big downside with BitBucket Pipelines: it only works with Docker images that run on Linux. It is not yet possible to use it for full .NET Framework projects. This is the reason why we never used it at YooApps for .NET projects. I'm sure we need to think about doing more projects using .NET Core ;-)


Book Review: ASP.​NET Core 2 and Angular 5


Last fall, I did my first technical review of a book written by Valerio De Sanctis, called ASP.NET Core 2 and Angular 5. This book is about using Visual Studio 2017 to create a Single Page Application with ASP.NET Core and Angular.

About this book

The full title is "ASP.NET Core 2 and Angular 5: Full-Stack Web Development with .NET Core and Angular". It was published by PacktPub and is also available on Amazon, as a printed version and in various e-book formats.

This book doesn't cover both technologies in depth, but gives you a good introduction to how the two technologies work together. It leads you step by step from the initial setup to the finished application. Don't expect a book for expert developers. But this book is great for ASP.NET developers who want to start with ASP.NET Core and Angular. It is a step-by-step tutorial that creates all parts of an application that manages tests, their questions, answers and results. It describes the database as well as the Web APIs, the Angular parts and the HTML, the authentication and finally the deployment to a web server.

Valerio uses the Angular-based SPA project, which is available in Visual Studio 2017 and the .NET Core 2.0 SDK. This project template is not the best solution for bigger projects, but it fits well for small projects as described in this book.

About the technical review

It was my first technical review of an entire book. It was kinda fun to do. I'm pretty sure it was a pretty hard job for Valerio, because the technologies changed while he was working on the chapters. ASP.NET Core 2.0 was released after he finished four or five chapters, and he needed to rewrite them. He changed the whole Angular integration into the ASP.NET project because of the new Angular SPA project template. Angular 5 also came out during writing. Fortunately, there weren't that many relevant changes between version 4 and version 5. I know these issues of writing good content while the technology changes. I did an article series for a developer magazine about ASP.NET Core and Angular 2, and both ASP.NET Core and Angular changed many times, and changed again right after I finished the articles. I rewrote that stuff a lot and worked almost six months on only three articles. Even my Angular posts in this blog are pretty much outdated and don't work anymore with the latest versions.

Kudos to Valerio, he really did a great job.

I got one chapter after another to review. My job wasn't just to read the chapters, but also to find logical errors, mistakes that would possibly confuse the readers, and non-working code parts. I followed the chapters as written by Valerio to build the sample application. I followed all instructions and samples to find errors. I reported a lot of errors, I think, and I'm sure that all of them were removed. After I finished the review of the last chapter, I also finished the coding and had a running application deployed on a web server.

Readers reviews on Amazon and PacktPub

I just had a look into the reader reviews on Amazon and PacktPub. There aren't many reviews yet, but unfortunately 4 out of the (currently) 9 reviews talk about errors in the code samples, mostly errors in the client-side Angular code. This is a lot, IMHO. It makes me sad, and I really apologize for that. I was pretty sure I had found almost all mistakes, at least those errors that prevent a running application, because I got it running in the end. Additionally, I wasn't the only technical reviewer. There was also Ramchandra Vellanki, who did a great job, for sure.

So what happened that some readers found errors? Two reasons came to mind first:

  1. The readers didn't follow the instructions carefully enough. Especially experienced developers think they know how it works, or how it should work from their perspective. They don't read exactly, because they think they know where the way goes. I did so as well during the first three or four chapters and needed to start again from the beginning.
  2. Dependencies changed since the book was published, especially if the package versions inside the package.json were not fixed to a specific version. npm install then loads the latest version, which may contain breaking changes. The package.json in the book has fixed versions, but the sources on GitHub don't (see the snippet after this list).
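To illustrate the difference (the package names and version numbers here are just examples, not the book's actual dependencies):

"dependencies": {
  "@angular/core": "5.2.0",
  "moment": "^2.20.0"
}

The first entry always installs exactly 5.2.0; the second installs the latest 2.x release, which may contain breaking changes.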

I'm pretty sure there are some errors left in the code, but in the end the application should run.

There are also conceptual differences. While writing about Angular and ASP.NET Core and while working with both, I learned a lot, and from my current point of view I would not host an Angular app inside an ASP.NET Core application anymore. (Maybe I'll think about doing that in a really small application.) Anyway, there is that ASP.NET Core Angular SPA project, and it is really easy to set up a SPA using it. So, why not use this project template to describe the concepts and interaction of Angular and ASP.NET Core? This keeps the book simple and short for beginners.

Conclusion

I would definitely do a technical review again if needed. As I said, it is fun and an honor to help an author write a book like this.

Too bad that some readers stumbled over errors anyway and couldn't get the code running. But writing a book is hard work. And we developers all know that no application is really bug-free, so even a book about quickly changing technologies cannot be free of errors.

Trying React the first time


The last two years I worked a lot with Angular. I learned a lot and I also wrote some blog posts about it. While I worked with Angular, I always had React in mind and wanted to learn about it. But I never had the time or a real reason to look at it. I still have no reason to try it, but a little bit of time left. So why not? :-)

This post is just a small overview of what I learned during the setup and in the very first tries.

The Goal

It is not only about developing using React; later I will also see how React works with ASP.NET and ASP.NET Core and how it behaves in Visual Studio. I also want to try the different benefits (compared to Angular) I heard and read about:

  • It is not a huge framework like Angular, but just a library
  • Because it's a library, it should be easy to extend existing web apps.
  • You are freer to use different libraries, since not all the stuff is built in.

Setup

My first idea was to follow the tutorials on https://reactjs.org/. Using this tutorial, some other tools came along and some hidden configuration happened. The worst thing, from my perspective, is that I needed a package manager to install another package manager to load the packages: Yarn was installed using NPM and then used. Webpack was installed and used in some way, but there was no configuration and no hint about it. This tutorial uses the create-react-app starter kit, which hides a lot of stuff.

Project setup

What I like while working with Angular is a really transparent way of using it and working with it. Because of this, I searched for a pretty simple tutorial to set up React in a simple, clean and lightweight way. I found this great tutorial by Robin Wieruch: https://www.robinwieruch.de/minimal-react-webpack-babel-setup/

This setup uses NPM to get the packages. It uses Webpack to bundle the needed JavaScript, and Babel is integrated into Webpack to transpile the JavaScript from ES6 to more browser-compatible JavaScript.

I also use the webpack-dev-server to run the React app during development, and react-hot-loader is used to speed up development a little bit. The main difference to Angular development is the usage of ES6-based JavaScript and Babel instead of TypeScript. It should also work with TypeScript, but it doesn't really seem to matter, because they are pretty similar. I'll try using ES6 to see how it works. The only thing I will possibly miss is the type checking.
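Babel itself gets its configuration from a .babelrc file in the root of the project. I don't show the exact file here, but a minimal sketch, assuming the babel-preset-env and babel-preset-react packages are installed, could look like this:

{
  "presets": ["env", "react"]
}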

As you can see, there is not really a difference to TypeScript yet; only the JSX thing takes getting used to:

// index.js
import React from 'react';
import ReactDOM from 'react-dom';

import Layout from './components/Layout';

const app = document.getElementById('app');

ReactDOM.render(<Layout/>, app);

module.hot.accept();

I can also use classes in JavaScript:

// Layout.js
import React from 'react';
import Header from './Header';
import Footer from './Footer';

export default class Layout extends React.Component {
    render() {
        return (
            <div><Header/><Footer/></div>
        );
    }
}

With this setup, I believe I can easily continue to play around with React.

Visual Studio Code

To support ES6, React and JSX in VS Code, I installed some extensions:

  • Babel JavaScript by Michael McDermott
    • Syntax highlighting for modern JavaScript
  • ESLint by Dirk Baeumer
    • To lint modern JavaScript
  • JavaScript (ES6) code snippets by Charalampos Karypidis
  • Reactjs code snippets by Charalampos Karypidis

Webpack

Webpack is configured to build a bundle.js into the ./dist folder. This folder is also the root folder for the webpack-dev-server, so it serves all the files from within this folder.

To start building and running the app, there is a start script added to the package.json:

"start": "Webpack-dev-server --progress --colors --config ./Webpack.config.js",

With this, I can easily call npm start from a console or from the terminal inside VS Code. The webpack-dev-server rebuilds the code and reloads the app in the browser whenever a code file changes. The webpack.config.js looks like this:

const webpack = require('webpack');

module.exports = {
    // the react-hot-loader patch needs to run before the app's entry point
    entry: [
        'react-hot-loader/patch',
        './src/index.js'
    ],
    module: {
        rules: [{
            // transpile all .js and .jsx files with Babel
            test: /\.(js|jsx)$/,
            exclude: /node_modules/,
            use: ['babel-loader']
        }]
    },
    resolve: {
        extensions: ['*', '.js', '.jsx']
    },
    output: {
        // bundle everything into ./dist/bundle.js
        path: __dirname + '/dist',
        publicPath: '/',
        filename: 'bundle.js'
    },
    plugins: [
      new webpack.HotModuleReplacementPlugin()
    ],
    devServer: {
      // serve the ./dist folder and enable hot module replacement
      contentBase: './dist',
      hot: true
    }
};

React Developer Tools

For Chrome and Firefox, there are add-ins available to inspect and debug React apps in the browser. For Chrome, I installed the React Developer Tools, which are really useful to see the component hierarchy:

Hosting the app

The React app is hosted in an index.html, which is stored inside the ./dist folder. It references the bundle.js. The React process starts in the index.js. React puts the app inside a div with the id app (as you can see in the first code snippet of this post.)

<!DOCTYPE html>
<html>
  <head>
      <title>The Minimal React Webpack Babel Setup</title>
  </head>
  <body>
    <div id="app"></div>
    <script src="bundle.js"></script>
  </body>
</html>

The index.js imports the Layout.js. Here a basic layout is defined by adding a Header and a Footer component, which are imported from other component files.

// Header.js
import React from 'react';
import ReactDOM from 'react-dom';

export default class Header extends React.Component {
    constructor(props) {
        super(props);
        this.title = 'Header';
    }
    render() {
        return (<header><h1>{this.title}</h1></header>
        );
    }
}
// Footer.js
import React from 'react';
import ReactDOM from 'react-dom';

export default class Footer extends React.Component {
    constructor(props) {
        super(props);
        this.title = 'Footer';
    }
    render() {
        return (<footer><h1>{this.title}</h1></footer>
        );
    }
}

The resulting HTML looks like this:

<!DOCTYPE html>
<html>
  <head>
    <title>The Minimal React Webpack Babel Setup</title>
  </head>
  <body>
    <div id="app">
      <div>
        <header><h1>Header</h1></header>
        <footer><h1>Footer</h1></footer>
      </div>
    </div>
    <script src="bundle.js"></script>
  </body>
</html>

Conclusion

My current impression is that React starts up much faster than Angular. This is just a kind of Hello World app, but even for such an app, Angular needs some time to start a few lines of code. Maybe that changes as the app gets bigger. But I'm sure it stays fast, because there is less overhead in the framework.

The setup was easy and worked on the first try. The experience with Angular helped a lot here; I already knew the tools. Anyway, Robin's tutorial is pretty clear, simple and easy to read: https://www.robinwieruch.de/minimal-react-webpack-babel-setup/

To get started with React, there's also a nice video series on YouTube, which tells you about the real basics and how to get started creating components and adding the dynamic stuff to them: https://www.youtube.com/watch?v=MhkGQAoc7bc&list=PLoYCgNOIyGABj2GQSlDRjgvXtqfDxKm5b

The ASP.​NET Core React Project


In the last post, I had a first look into a plain, clean and lightweight React setup. I'm still impressed how easy the setup is and how fast the loading of a React app really is. Before trying to push this setup into an ASP.NET Core application, it makes sense to have a look into the ASP.NET Core React project.

Create the React project

You can either use the "File New Project ..." dialog in Visual Studio 2017 or the .NET CLI to create a new ASP.NET Core React project:

dotnet new react -n MyPrettyAwesomeReactApp

This creates a ready to go React project.

The first impression

At first glance I saw the webpack.config.js, which is cool. I really love Webpack: I love how it works, how it bundles the relevant files recursively and how much time it saves. There is also a tsconfig.json in the project. This means the React code will be written in TypeScript. Webpack compiles the TypeScript into JavaScript and bundles it into an output file called main.js.

Remember: in the last post, the JavaScript code was written in ES6 and transpiled using Babel.

The TypeScript files are in the ClientApp folder, and the transpiled and bundled Webpack output gets moved to the wwwroot/dist/ folder. This is nice. The build in VS2017 runs Webpack; this is hidden in MSBuild tasks inside the project file. To see more, you need to have a look into the project file by right-clicking the project and selecting "Edit projectname.csproj".

You'll then find an ItemGroup that removes the ClientApp folder from publishing:

<ItemGroup>
  <!-- Files not to publish (note that the 'dist' subfolders are re-added below) -->
  <Content Remove="ClientApp\**" />
</ItemGroup>

And there are two targets, which define the Webpack calls for the debug build and for publishing:

<Target Name="DebugRunWebpack" BeforeTargets="Build" Condition=" '$(Configuration)' == 'Debug' And !Exists('wwwroot\dist') ">
  <!-- Ensure Node.js is installed -->
  <Exec Command="node --version" ContinueOnError="true">
    <Output TaskParameter="ExitCode" PropertyName="ErrorCode" />
  </Exec>
  <Error Condition="'$(ErrorCode)' != '0'" Text="Node.js is required to build and run this project. To continue, please install Node.js from https://nodejs.org/, and then restart your command prompt or IDE." />

  <!-- In development, the dist files won't exist on the first run or when cloning to
       a different machine, so rebuild them if not already present. -->
  <Message Importance="high" Text="Performing first-run Webpack build..." />
  <Exec Command="node node_modules/webpack/bin/webpack.js --config webpack.config.vendor.js" />
  <Exec Command="node node_modules/webpack/bin/webpack.js" />
</Target>

<Target Name="PublishRunWebpack" AfterTargets="ComputeFilesToPublish">
  <!-- As part of publishing, ensure the JS resources are freshly built in production mode -->
  <Exec Command="npm install" />
  <Exec Command="node node_modules/webpack/bin/webpack.js --config webpack.config.vendor.js --env.prod" />
  <Exec Command="node node_modules/webpack/bin/webpack.js --env.prod" />

  <!-- Include the newly-built files in the publish output -->
  <ItemGroup>
    <DistFiles Include="wwwroot\dist\**; ClientApp\dist\**" />
    <ResolvedFileToPublish Include="@(DistFiles->'%(FullPath)')" Exclude="@(ResolvedFileToPublish)">
      <RelativePath>%(DistFiles.Identity)</RelativePath>
      <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
    </ResolvedFileToPublish>
  </ItemGroup>
</Target>

As you can see, it runs Webpack twice: once for the vendor dependencies like Bootstrap, jQuery, etc., and once for the React app in the ClientApp folder.

Take a look at the ClientApp

The first thing you'll see when you look into the ClientApp folder are *.tsx files instead of *.ts files. These are TypeScript files which support JSX, the weird XML/HTML syntax inside JavaScript code. VS 2017 already knows about the JSX syntax and doesn't show any errors. That's awesome.

This client app is bootstrapped in the boot.tsx (we had the index.js in the other blog post). This app supports routing via the react-router-dom component. The boot.tsx defines an AppContainer that primarily hosts the route definitions stored in the routes.tsx. The routes then call the different components depending on the path in the browser's address bar. This routing concept is a little more intuitive than the Angular one: the routing is defined in the component that hosts the routed contents. In this case, the Layout component contains the dynamic contents:

// routes.tsx
export const routes = <Layout>
    <Route exact path='/' component={ Home } />
    <Route path='/counter' component={ Counter } />
    <Route path='/fetchdata' component={ FetchData } />
</Layout>;

Inside the Layout.tsx you see that the routed components are rendered in a specific div tag, which renders the children defined in the routes.tsx:

// Layout.tsx
export class Layout extends React.Component<LayoutProps, {}> {
  public render() {
    return <div className='container-fluid'>
      <div className='row'>
        <div className='col-sm-3'>
          <NavMenu />
        </div>
        <div className='col-sm-9'>
          { this.props.children }
        </div>
      </div>
    </div>;
  }
}

Using this approach, it should be possible to add sub-routes for specific small areas of the app, some kind of "nested routes", as in the sketch below.
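A hedged sketch of what such a nested route could look like, with made-up Admin components (they are not part of the template):

// Admin.tsx (hypothetical component, not part of the template)
import * as React from 'react';
import { Route } from 'react-router-dom';
import { AdminHome } from './AdminHome';
import { AdminUsers } from './AdminUsers';

export class Admin extends React.Component<{}, {}> {
    public render() {
        // the routed component defines its own sub routes
        return <div>
            <Route exact path='/admin' component={ AdminHome } />
            <Route path='/admin/users' component={ AdminUsers } />
        </div>;
    }
}

The routes.tsx would then only need one additional entry like <Route path='/admin' component={ Admin } /> to wire the whole area up.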

There's also an example available on how to fetch data from a Web API. This sample uses isomorphic-fetch to fetch the data from the Web API:

constructor() {
    super();
    this.state = { forecasts: [], loading: true };

    fetch('api/SampleData/WeatherForecasts')
        .then(response => response.json() as Promise<WeatherForecast[]>)
        .then(data => {
            this.setState({ forecasts: data, loading: false });
        });
}

Since React doesn't provide a library to load data via HTTP requests, you are free to use any library you want. Other libraries commonly used with React are axios, fetch or Superagent.
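Just to illustrate the choice, the same call from above could look like this with axios (a sketch; axios is an assumption here and would need to be installed via NPM first, and WeatherForecast is the interface from the sample):

// a sketch of the same request using axios instead of isomorphic-fetch
import axios from 'axios';

axios.get('api/SampleData/WeatherForecasts')
    .then(response => {
        // response.data already contains the parsed JSON
        const forecasts = response.data as WeatherForecast[];
        // update the component state here, e.g.:
        // this.setState({ forecasts: forecasts, loading: false });
    });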

A short look into the ASP.NET Core parts

The Startup.cs is a little special. Not much, but you'll find some differences in the Configure method. There is the use of the WebpackDevMiddleware, which helps while debugging. It calls Webpack on every change in the used TypeScript files and reloads the scripts in the browser while debugging. Using this middleware, you don't need to recompile the whole application or restart debugging:

if (env.IsDevelopment())
{
  app.UseDeveloperExceptionPage();
  app.UseWebpackDevMiddleware(new WebpackDevMiddlewareOptions
  {
    HotModuleReplacement = true,
    ReactHotModuleReplacement = true
  });
}
else
{
  app.UseExceptionHandler("/Home/Error");
}

And the route configuration contains a fallback route that gets used if the requested path doesn't match any MVC route:

app.UseMvc(routes =>
{
  routes.MapRoute(
    name: "default",
    template: "{controller=Home}/{action=Index}/{id?}");

  routes.MapSpaFallbackRoute(
    name: "spa-fallback",
    defaults: new { controller = "Home", action = "Index" });
});

The integration in the views is interesting as well. In the _Layout.cshtml:

  • There is a base href set to the current base URL.
  • The vendor.css and a site.css is referenced in the head of the document.
  • The vendor.js is referenced at the bottom.
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>@ViewData["Title"] - ReactWebApp</title>
    <base href="~/" />
    <link rel="stylesheet" href="~/dist/vendor.css" asp-append-version="true" />
    <environment exclude="Development">
        <link rel="stylesheet" href="~/dist/site.css" asp-append-version="true" />
    </environment>
</head>
<body>
    @RenderBody()
    <script src="~/dist/vendor.js" asp-append-version="true"></script>
    @RenderSection("scripts", required: false)
</body>
</html>

The actual React app isn't referenced here, but in the Index.cshtml:

@{
    ViewData["Title"] = "Home Page";
}

<div id="react-app">Loading...</div>

@section scripts {
    <script src="~/dist/main.js" asp-append-version="true"></script>
}

This makes absolute sense. This way, you are able to create one React app per view. Routing probably doesn't work this way, because there is only one SpaFallbackRoute, but if you just want to make single views more dynamic, it would make sense to create multiple views, each hosting a specific React app.

This is exactly what I expect from React. E.g., I have many old ASP.NET applications, and I want to get rid of the old client script and modernize those applications step by step. In many cases, a rewrite costs too much, and it would be easy to replace the old code with clean React apps.

The other changes in the project are not really related to React in general. They are just implementation details of this React demo application:

  • There is a simple API controller to serve the weather forecasts
  • The HomeController only contains the Index and the Error actions

Some concluding words

I didn't really expect such a clearly and transparently configured project template. If I tried to put the setup of the last post into an ASP.NET Core project, I would do it almost the same way: using Webpack to transpile and bundle the files and save them somewhere in the wwwroot folder.

From my perspective, I would use this project template as a starter for small to medium-sized projects (whatever that means). For medium to bigger-sized projects, I would - again - propose to divide the client app and the server part into two different projects, to host and develop them independently. Hosting independently also means scaling independently. Developing independently means both scaling the teams independently and focusing only on the technology and tools which are used for that part of the application.

To learn more about React and how it works with ASP.NET Core in Visual Studio 2017, I will create a chat app. I will also write a small series about it:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I also set-up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo

Another GraphQL library for ASP.​NET Core


I recently read an interesting tweet by Glenn Block about a GraphQL app running on the Linux Subsystem for Windows:

It is impressive to run a .NET Core app on Linux on Windows, which is not a virtual machine on Windows. I never had the chance to try that; I've just played a little bit with the Linux Subsystem for Windows. The second thing that came to mind was: "Wow, did he use my GraphQL middleware library or something else?"

He uses different libraries, as you can see in his repository on GitHub: https://github.com/glennblock/orders-graphql

  • GraphQL.Server.Transports.AspNetCore
  • GraphQL.Server.Transports.WebSockets

These libraries are built by the makers of graphql-dotnet. The project is hosted in the graphql-dotnet organization on GitHub: https://github.com/graphql-dotnet/server. They also provide a middleware that can be used in ASP.NET Core projects. The cool thing about that project is a WebSocket endpoint for GraphQL.

What about the GraphQL middleware I wrote?

Because my GraphQL middleware is also based on graphql-dotnet, I'm not yet sure whether to continue maintaining it or to retire the project. But I'll try the other implementation to find out more.

I'm pretty sure the contributors of the graphql-dotnet project know a lot more about GraphQL and their library than I do. Both projects should work the same way and return the same result - hopefully. The only difference is the API and the configuration. The only reason to continue working on my project would be to learn more about GraphQL or to maybe provide a better API ;-)

If I retire my project, I will try to contribute to the graphql-dotnet projects.

What do you think? Drop me a comment and tell me.

Creating a chat application using React and ASP.​NET Core - Part 1


In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I also set-up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about that topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors, by dropping a comment or by creating an Issue on GitHub. Thanks.

Requirements

I want to create a small chat application that uses React, SignalR and ASP.NET Core 2.0. The frontend will be created using React. The backend serves a WebSocket endpoint using SignalR and some basic Web API endpoints to fetch some initial data and lookup data and to do the authentication (I'll use IdentityServer4 for the authentication). The project setup is based on the Visual Studio React project I introduced in one of the last posts.

The UI should be clean and easy to use. It should be possible to use the chat without a mouse. So the focus is also on usability and a basic accessibility. We will have a large chat area to display the messages, with an input field for the messages below. The return key should be the primary method to send the message. There's one additional button to select emojis, using the mouse. But basic emojis should also be available using text symbols.

On the left side, I'll create a list of online users. Every newly logged-on user should be mentioned in the chat area. The user list should update automatically after a user logs on. We will use SignalR here too.

  • User list using SignalR
    • small area on the left hand side
    • Initially fetching the logged on users using Web API
  • Chat area using SignalR
    • large area on the right hand side
    • Initially fetching the last 50 messages using Web API
  • Message field below the chat area
    • Enter key should send the message
    • Emojis using text symbols
  • Storing the chat history in a database (using Azure Table Storage)
  • Authentication using IdentityServer4

Project setup

The initial project setup is easy and already described in one of the last posts. I'll just do a quick introduction here.

You can either use Visual Studio 2017 to create a new project

or the .NET CLI

dotnet new react -n react-chat-app

It takes some time to fetch the dependent packages, especially the many NPM packages: the node_modules folder contains around 10k files and requires 85 MB on disk.

I also added the "@aspnet/signalr-client": "1.0.0-alpha2-final" to the package.json

Don't be confused by the documentation. In the GitHub repository they wrote that the NPM name signalr-client should no longer be used and the new name is just signalr. But when I wrote these lines, the package with the new name was not yet available on NPM. So I'm still using the signalr-client package.

After adding that package, an optional dependency wasn't found, and the NPM dependency node in Visual Studio displays a yellow exclamation mark. This is annoying and seems to be a critical error, but it works anyway:

The NuGet packages are fine. To use SignalR, I added the Microsoft.AspNetCore.SignalR package with version 1.0.0-alpha2-final.

The project compiles without errors. And after pressing F5, the app starts as expected.

A while ago, I configured Edge as the start-up browser to run ASP.NET Core projects, because Chrome got very slow. Once IIS Express or Kestrel is running, you can easily use Chrome or any other browser to call and debug the web app. This makes sense, since the React developer tools are not yet available for Edge and IE.

This is all there is to set up and configure. All the preconfigured TypeScript and Webpack stuff is fine and runs as expected. If there's no critical issue, you don't really need to know about it; it just works. I would anyway recommend learning about the TypeScript configuration and Webpack, to be safe.

Closing words

Now the requirements are clear and the project is set up. In this series, I will not set up an automated build using CAKE. I'll also not write about unit tests. The focus is React, SignalR and ASP.NET Core only.

In the next chapter, I'm going to build the React UI components and implement the basic client logic to get the UI working.

Creating a chat application using React and ASP.​NET Core - Part 2


In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I also set-up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about that topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors, by dropping a comment or by creating an Issue on GitHub. Thanks.

Basic Layout

First, let's have a quick look at the hierarchy of the React components in the ClientApp folder.

The app gets bootstrapped within the boot.tsx file. This is the first sort of component, where the AppContainer gets created and the router is placed. This file also contains the call to render the React app in the relevant HTML element, which is a div with the id react-app in this case, found in the Views/Home/Index.cshtml.
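I don't paste the whole file here, but condensed, the bootstrapping in boot.tsx looks roughly like this (a sketch; the actual template file contains a bit more, e.g. the hot module replacement wiring):

// boot.tsx (condensed sketch, not the full template file)
import * as React from 'react';
import * as ReactDOM from 'react-dom';
import { AppContainer } from 'react-hot-loader';
import { BrowserRouter } from 'react-router-dom';
import { routes } from './routes';

ReactDOM.render(
    <AppContainer>
        <BrowserRouter children={ routes } />
    </AppContainer>,
    document.getElementById('react-app'));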

This component also renders the content of the routes.tsx. This file contains the route definitions wrapped inside a Layout element. The Layout element is defined in the Layout.tsx inside the components folder. The routes.tsx also references three more components out of the components folder: Home, Counter and FetchData. So the router renders the specific component, depending on the requested path, inside the Layout element:

// routes.tsx
import * as React from 'react';
import { Route } from 'react-router-dom';
import { Layout } from './components/Layout';
import { Home } from './components/Home';
import { FetchData } from './components/FetchData';
import { Counter } from './components/Counter';

export const routes = <Layout>
    <Route exact path='/' component={ Home } />
    <Route path='/counter' component={ Counter } />
    <Route path='/fetchdata' component={ FetchData } />
</Layout>;

As expected, the Layout component then defines the basic layout and renders the contents into a Bootstrap grid column element. I changed that a little bit: the contents are now rendered directly into the fluid container, and the menu is outside the fluid container. This component now contains less code than before:

import * as React from 'react';
import { NavMenu } from './NavMenu';

export interface LayoutProps {
    children?: React.ReactNode;
}

export class Layout extends React.Component<LayoutProps, {}> {
    public render() {
        return <div>
            <NavMenu />
            <div className='container-fluid'>
                {this.props.children}
            </div>
        </div>;
    }
}

I also changed the NavMenu component to place the menu on top of the page using the typical Bootstrap styles. (Visit the repository for more details.)

My chat goes into the Home component, because this is the most important feature of my app ;-) So I removed all the contents of the Home component and placed the layout for the actual chat there.

import * as React from 'react';
import { RouteComponentProps } from 'react-router';

import { Chat } from './home/Chat';
import { Users } from './home/Users';

export class Home extends React.Component<RouteComponentProps<{}>, {}> {
    public render() {
        return <div className='row'>
            <div className='col-sm-3'><Users /></div>
            <div className='col-sm-9'><Chat /></div>
        </div>;
    }
}

This component uses two new components: Users, to display the online users, and Chat, to add the main chat functionality. It seems to be a common way in React to store sub-components inside a subfolder with the same name as the parent component. So I created a Home folder inside the components folder and placed the Users component and the Chat component inside that new folder.

The Users Component

Let's have a look into the simpler Users component first. This component doesn't have any interaction yet; it only fetches and displays the users online. To keep the first snippet simple, I removed the methods inside. The file imports everything from the module 'react' as the React object. Using this, we are able to access the Component type we need to derive from:

// components/Home/Users.tsx
import * as React from 'react';

interface UsersState {
    users: User[];
}
interface User {
    id: number;
    name: string;
}

export class Users extends React.Component<{}, UsersState> {
    //
}

This base class also defines a state property. The type of the state is defined in the second generic argument of the React.Component base class. (The first generic argument is not needed here.) The state is a kind of container type that holds the data you want to store inside the component. In this case, I just need a UsersState with a list of users inside. To display a user in the list, we only need an identifier and a name. A unique key or id is also required by React to create a list of items in the DOM.

I don't fetch the data from the server side yet. This post is only about the UI components, so I'm going to mock the data in the constructor:

constructor() {
    super();
    this.state = {
        users: [
            { id: 1, name: 'juergen' },
            { id: 3, name: 'marion' },
            { id: 2, name: 'peter' },
            { id: 4, name: 'mo' }]
    };
}

Now the list of users is available in the current state and I'm able to use this list to render the users:

public render() {
    return <div className='panel panel-default'>
        <div className='panel-body'>
            <h3>Users online:</h3>
            <ul className='chat-users'>
                {this.state.users.map(user =>
                    <li key={user.id}>{user.name}</li>
                )}
            </ul>
        </div>
    </div>;
}

JSX is a weird thing: HTML-like XML syntax, completely mixed with JavaScript (or TypeScript in this case), but it works. It reminds me a little bit of Razor. this.state.users.map iterates through the users and renders a list item per user.

The Chat Component

The Chat component is similar, but contains more details and some logic to interact with the user. Initially we have almost the same structure:

// components/Home/chat.tsx
import * as React from 'react';
import * as moment from 'moment';

interface ChatState {
    messages: ChatMessage[];
    currentMessage: string;
}
interface ChatMessage {
    id: number;
    date: Date;
    message: string;
    sender: string;
}

export class Chat extends React.Component<{}, ChatState> {
    //
}

I also imported the module moment, which is moment.js, installed using NPM:

npm install moment --save

moment.js is a pretty useful library to easily work with dates and times in JavaScript. It has a ton of features, like formatting dates, displaying times, creating relative time expressions, and it also provides proper localization of dates.
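A few illustrative calls:

// some illustrative moment.js calls
moment(new Date()).format('HH:mm:ss'); // formats a date, e.g. "14:03:27"
moment('2018-01-01').fromNow();        // a relative expression, e.g. "a month ago"
moment.locale('de');                   // switches the localization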

Now it makes sense to have a look into the render method first:

// components/Home/chat.tsx
public render() {
    return <div className='panel panel-default'>
        <div className='panel-body panel-chat' ref={this.handlePanelRef}>
            <ul>
                {this.state.messages.map(message =>
                    <li key={message.id}>
                        <strong>{message.sender} </strong>
                        ({moment(message.date).format('HH:mm:ss')})<br />
                        {message.message}
                    </li>
                )}
            </ul>
        </div>
        <div className='panel-footer'>
            <form className='form-inline' onSubmit={this.onSubmit}>
                <label className='sr-only' htmlFor='msg'>Message</label>
                <div className='input-group col-md-12'>
                    <button className='chat-button input-group-addon'>:-)</button>
                    <input type='text' value={this.state.currentMessage}
                        onChange={this.handleMessageChange}
                        className='form-control'
                        id='msg'
                        placeholder='Your message'
                        ref={this.handleMessageRef} />
                    <button className='chat-button input-group-addon'>Send</button>
                </div>
            </form>
        </div>
    </div>;
}

I defined a Bootstrap panel that has the chat area in the panel-body and the input fields in the panel-footer. In the chat area, we also have an unordered list and the code to iterate through the messages. This is almost similar to the user list; we only display some more data here. You can also see the usage of moment.js to easily format the message date.

The panel-footer contains the form to compose the message. I used an input group to add a button in front of the input field and another one after it. The first button is used to select an emoji. The second one also sends the message (for people who cannot use the enter key to submit it).

The ref attributes are used for a cool feature: using them, you are able to get an instance of the element in the backing code. This is nice for working with element instances directly. We will see the usage later on. The ref attributes point to methods that get an instance of that element passed in:

msg: HTMLInputElement;
panel: HTMLDivElement;

// ...

handlePanelRef(div: HTMLDivElement) {
    this.panel = div;
}
handleMessageRef(input: HTMLInputElement) {
    this.msg = input;
}

I save the instances globally in the class. One thing I didn't expect is a weird behavior of this. This behavior is typical JavaScript behavior, but I expected it to be solved in TypeScript. I also didn't see this in Angular. The keyword this is not set; it is nothing. If you want to access this in methods used by the DOM, you need to kinda 'inject' or 'bind' an instance of the current object to get this set. This is typical for JavaScript and makes absolute sense. It needs to be done in the constructor:

constructor() {
    super();
    this.state = { messages: [], currentMessage: '' };

    this.handlePanelRef = this.handlePanelRef.bind(this);
    this.handleMessageRef = this.handleMessageRef.bind(this);
    // ...
}

This is the current constructor, including the initialization of the state. As you can see, we bind the current instance to those methods. We need to do this for all methods that need to use the current instance.
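As a side note: a common alternative in TypeScript and modern JavaScript is to define such handlers as arrow function class properties, which capture this lexically, so no binding in the constructor is needed. A small sketch:

// sketch: an arrow function class property captures 'this' lexically
handleMessageRef = (input: HTMLInputElement) => {
    // 'this' is the component instance here, no .bind(this) required
    this.msg = input;
}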

To get the message text from the text field, we need to bind an onChange method. This method collects the value from the event target:

handleMessageChange(event: any) {
    this.setState({ currentMessage: event.target.value });
}

Don't forget to bind the current instance in the constructor:

this.handleMessageChange = this.handleMessageChange.bind(this);

With this code, we get the current message into the state to use it later on. The current state is also bound to the value of that text field, just to clear the field after submitting the form.

The next important event is onSubmit in the form. This event gets triggered by pressing the send button or by pressing enter inside the text field:

onSubmit(event: any) {
    event.preventDefault();
    this.addMessage();
}

This method stops the default behavior of HTML forms to avoid a reload of the entire page, and calls the method addMessage, which creates and adds the message to the current state's messages list:

addMessage() {
    let currentMessage = this.state.currentMessage;
    if (currentMessage.length === 0) {
        return;
    }
    let id = this.state.messages.length;
    let date = new Date();

    let messages = this.state.messages;
    messages.push({
        id: id,
        date: date,
        message: currentMessage,
        sender: 'juergen'
    })
    this.setState({
        messages: messages,
        currentMessage: ''
    });
    this.msg.focus();
    this.panel.scrollTop = this.panel.scrollHeight - this.panel.clientHeight;
}

Currently, the id and the sender of the message are faked. Later on, in the next posts, we'll send the message to the server using WebSockets and get a message including a valid id back. We'll also have an authenticated user later on. As mentioned, the current post is just about getting the UI running.

We get the currentMessage and the messages list out of the current state. Then we add the new message to the current list and assign a new state with the updated list and an empty currentMessage. Setting the state triggers an event to update the UI; if I just updated the fields inside the state, the UI wouldn't get notified. It is also possible to only update a single property of the state.
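For example, to only clear the message field, a call like this would be enough:

// updating only a single property of the state
this.setState({ currentMessage: '' });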

When the state is updated, I need to focus the text field and scroll the panel down to the latest message. This is the only reason why I need the instances of the elements and why I use the ref methods.

That's it :-)

After pressing F5, I see the working chat UI in the browser.

Closing words

With this post, the basic UI is working. It was easier than expected. I just got stuck a little bit when accessing the HTML elements to focus the text field and scroll the chat area, and when I tried to access the current instance using this. React is heavily used and the React community is huge, so it is easy to get help pretty fast.

In the next post, I'm going to integrate SignalR to get the WebSockets running. I'll also add two Web APIs to fetch the initial data: the currently logged-on users and the latest 50 chat messages don't need to be pushed through the WebSocket. For this, I need to get into the first functional parts in React and inject them into the UI components of this post.

Creating a chat application using React and ASP.​NET Core - Part 3


In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I also set-up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about that topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors, by dropping a comment or by creating an Issue on GitHub. Thanks.

About SignalR

SignalR for ASP.NET Core is a framework that enables WebSocket communication in ASP.NET Core applications. Modern browsers already support WebSocket, which is part of the HTML5 standard. For older browsers, SignalR provides a fallback based on standard HTTP1.1. SignalR is basically a server-side implementation based on ASP.NET Core and Kestrel. It uses the same dependency injection mechanism and can be added via a NuGet package to the application. Additionally, SignalR provides various client libraries to consume WebSockets in client applications. In this chat application, I use @aspnet/signalr-client, loaded via NPM. That package also contains the TypeScript definitions, which makes it easy to use in a TypeScript application like this one.

I added the SignalR NuGet package in the first part of this blog series. To enable SignalR, I need to add it to the ServiceCollection:

services.AddSignalR();

The server part

In C#, I created a ChatService that will later be used to connect to the data storage. For now, it uses a dictionary to store the messages. I don't show this service here, because the implementation is not relevant and will change later on. But I use this service in the code shown here. It is mainly used in the ChatController, the Web API controller that loads some initial data, and in the ChatHub, which is the WebSocket endpoint for this chat. The service gets injected via dependency injection, which is configured in the Startup.cs:

services.AddSingleton<IChatService, ChatService>();

Web API

The ChatController is simple, it just contains GET methods. Do you remember the last posts? The initial data of the logged on users and the first chat messages were defined in the React components. I moved this to the ChatController on the server side:

[Route("api/[controller]")]
public class ChatController : Controller
{
    private readonly IChatService _chatService;

    public ChatController(IChatService chatService)
    {
        _chatService = chatService;
    }
    // GET: api/<controller>
    [HttpGet("[action]")]
    public IEnumerable<UserDetails> LoggedOnUsers()
    {
        return new[]{
            new UserDetails { Id = 1, Name = "Joe" },
            new UserDetails { Id = 3, Name = "Mary" },
            new UserDetails { Id = 2, Name = "Pete" },
            new UserDetails { Id = 4, Name = "Mo" } };
    }

    [HttpGet("[action]")]
    public IEnumerable<ChatMessage> InitialMessages()
    {
        return _chatService.GetAllInitially();
    }
}

The method LoggedOnUsers simply creates the users list. I will change that once the authentication is done. The method InitialMessages loads the first 50 messages from the faked data storage.

SignalR

The WebSocket endpoints are defined in so-called Hubs. One Hub defines one single WebSocket endpoint. I created a ChatHub, which is the endpoint for this application. The methods in the ChatHub are handler methods that handle incoming messages through a specific channel.

The ChatHub needs to be added to the SignalR middleware:

app.UseSignalR(routes =>
{
    routes.MapHub<ChatHub>("chat");
});

SignalR methods in the Hub are the channel definitions and the handlers at the same time, while in NodeJS, socket.io defines channels and binds a handler to each channel.

The currently used data are still fake data, and authentication is not yet implemented. This is why the user's name is still hard coded:

using Microsoft.AspNetCore.SignalR;
using ReactChatDemo.Services;

namespace ReactChatDemo.Hubs
{
    public class ChatHub : Hub
    {
        private readonly IChatService _chatService;

        public ChatHub(IChatService chatService)
        {
            _chatService = chatService;
        }

        public void AddMessage(string message)
        {
            var chatMessage = _chatService.CreateNewMessage("Juergen", message);
            // Call the MessageAdded method to update clients.
            Clients.All.InvokeAsync("MessageAdded", chatMessage);
        }
    }
}

This Hub only contains a method AddMessage, which gets the actual message as a string. Later on, we will replace the hard-coded user name with the name of the logged-on user. Then a new message gets created and added to the data store via the ChatService. The new message is an object that contains a unique id, the name of the authenticated user, a creation date and the actual message text.

Then the message gets sent to the clients through the WebSocket channel "MessageAdded".

The client part

On the client side, I want to use the socket in two different components, but I want to avoid creating two different WebSocket clients. The idea is to create a WebsocketService class that is used in both components. Usually I would create two instances of this WebsocketService, but this would create two different clients too. So I needed to think about dependency injection in React and a singleton instance of that service.

SignalR Client

While googling for dependency injection in React, I read a lot about the fact that DI is not needed in React. I was kinda confused. DI is everywhere in Angular, but it is not necessarily needed in React? There are packages to support DI, but I tried to find another way. And actually, there is another way: in ES6 and in TypeScript, it is possible to immediately create an instance of an object and to import this instance everywhere you need it.

import { HubConnection, TransportType, ConsoleLogger, LogLevel } from '@aspnet/signalr-client';

import { ChatMessage } from './Models/ChatMessage';

class ChatWebsocketService {
    private _connection: HubConnection;

    constructor() {
        var transport = TransportType.WebSockets;
        let logger = new ConsoleLogger(LogLevel.Information);

        // create Connection
        this._connection = new HubConnection(`http://${document.location.host}/chat`,
            { transport: transport, logging: logger });
        
        // start connection
        this._connection.start().catch(err => console.error(err, 'red'));
    }

    // more methods here ...
   
}

const WebsocketService = new ChatWebsocketService();

export default WebsocketService;

Inside this class, the WebSocket (HubConnection) client gets created and configured. The transport type needs to be WebSockets. Also, a ConsoleLogger gets added to the client, to send log information to the browser's console. In the last line of the constructor, I start the connection and add an error handler that writes to the console. The instance of the connection is stored in a private variable inside the class. Right after the class, I create an instance and export it. This way, the instance can be imported in any class:

import WebsocketService from './WebsocketService'

To keep the Chat component and the Users component clean, I created additional service classes for each of the components. These service classes encapsulate the calls to the Web API endpoints and the usage of the WebsocketService. Please have a look into the GitHub repository to see the complete services.
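To give an idea of their shape, here is a condensed sketch of how the ChatService wires the pieces together (a sketch only; the real implementation is in the repository):

// ChatService.ts (condensed sketch; see the repository for the real code)
import WebsocketService from './WebsocketService';
import { ChatMessage } from './Models/ChatMessage';

export class ChatService {
    constructor(messageAdded: (msg: ChatMessage) => void) {
        // forward the handler to the singleton WebsocketService
        WebsocketService.registerMessageAdded(messageAdded);
    }

    public sendMessage(message: string) {
        WebsocketService.sendMessage(message);
    }

    // fetchInitialMessages(...) calls the Web API, shown further below
}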

The WebsocketService contains three methods. One is to handle incoming messages when a user logs on to the chat:

registerUserLoggedOn(userLoggedOn: (id: number, name: string) => void) {
    // get new user from the server
    this._connection.on('UserLoggedOn', (id: number, name: string) => {
        userLoggedOn(id, name);
    });
}

This is not yet used. I need to add the authentication first.

The other two methods are to send a chat message to the server and to handle incoming chat messages:

registerMessageAdded(messageAdded: (msg: ChatMessage) => void) {
    // get new chat message from the server
    this._connection.on('MessageAdded', (message: ChatMessage) => {
        messageAdded(message);
    });
}
sendMessage(message: string) {
    // send the chat message to the server
    this._connection.invoke('AddMessage', message);
}

In the Chat component I pass a handler method to the ChatService and the service passes the handler on to the WebsocketService. The handler then gets called every time a message comes in:

//Chat.tsx
let that = this;
this._chatService = new ChatService((msg: ChatMessage) => {
    this.handleOnSocket(that, msg);
});

In this case the passed-in handler is only an anonymous method, a lambda expression, that calls the actual handler method defined in the component. I need to pass a local variable with the current instance of the chat component to the handleOnSocket method, because this is not available when the handler is called. It is called outside of the context where it is defined.

The handler then loads the existing messages from the component's state, adds the new message and updates the state:

//Chat.tsx
handleOnSocket(that: Chat, message: ChatMessage) {
    let messages = that.state.messages;
    messages.push(message);
    that.setState({
        messages: messages,
        currentMessage: ''
    });
    that.scrollDown(that);
    that.focusField(that);
}

At the end, I need to scroll to the latest message and to focus the text field again.

Web API client

The UsersService.ts and the ChatService.ts both contain a method to fetch the data from the Web API. As preconfigured in the ASP.NET Core React project, I am using isomorphic-fetch to call the Web API:

//ChatService.ts
public fetchInitialMessages(fetchInitialMessagesCallback: (msg: ChatMessage[]) => void) {
    fetch('api/Chat/InitialMessages')
        .then(response => response.json() as Promise<ChatMessage[]>)
        .then(data => {
            fetchInitialMessagesCallback(data);
        });
}

The method fetchLogedOnUsers in the UsersService looks almost the same. The method gets a callback method from the Chat component that gets the ChatMessages passed in. Inside the Chat component this method gets called like this:

this._chatService.fetchInitialMessages(this.handleOnInitialMessagesFetched);

The handler then updates the state with the new list of ChatMessages and scrolls the chat area down to the latest message:

handleOnInitialMessagesFetched(messages: ChatMessage[]) {
    this.setState({
        messages: messages
    });

    this.scrollDown(this);
}

Let's try it

Now it is time to try it out. F5 starts the application and opens the configured browser:

This is almost the same view as in the last post about the UI. To be sure React is working, I had a look into the network tab in the browser developer tools:

Here you can see the message history of the WebSocket endpoint. The second line displays the message sent to the server and the third line is the answer from the server containing the ChatMessage object.

Closing words

This post was less easy than the posts before. Not because of the technical part, but because I refactored the client part a little bit to keep the React component as simple as possible. For the functional components, I used regular TypeScript files and not TSX files. This worked great.

I'm still impressed by React.

In the next post I'm going to add authentication to get the logged-on user and to authorize the chat for logged-on users only. I'll also add a permanent storage for the chat messages.


Creating a chat application using React and ASP.NET Core - Part 4


In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about that topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors, by dropping a comment or by creating an issue on GitHub. Thanks.

Intro

My idea about this app is to split the storage between a storage for flexible objects and one for immutable objects. The flexible objects are the users and the user metadata in this case. Immutable objects are the chat messages.

The messages are just stored one by one and will never change. Storing a message doesn't need to be super fast, but reading the messages needs to be as fast as possible. This is why I want to go with the Azure Table Storage. It is one of the fastest storages on Azure. In the past, at YooApps, we also used it as an event store for CQRS based applications.

Handling the users doesn't need to be super fast either, because we only handle one user at a time. We don't read all of the users in one go and we don't do batch operations on them. So using a SQL storage with IdentityServer4 on e.g. an Azure SQL Database should be fine.

The users online will be stored in memory only, which is the third storage. The memory is safe in this case because, if the app shuts down, the users need to log on again anyway and the list of users online gets refilled. And it is not really critical if the list of the users online is not in sync with the logged-on users.

This leads into three different storages:

  • Users: Azure SQL Database, handled by IdentityServer4
  • Users online: Memory, handled by the chat app
    • A singleton instance of a user tracker class (see the sketch after this list)
  • Messages: Azure Table Storage, handled by the chat app
    • Using the SimpleObjectStore and the Azure Table Storage provider
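To illustrate the idea of the in-memory storage for the users online: a singleton user tracker could look roughly like the following sketch. The names here are assumptions; the actual implementation is in the GitHub repository:

// using System.Collections.Concurrent;
// using System.Collections.Generic;
// using System.Linq;
public class UserTracker
{
    // maps the SignalR connection id to the user name;
    // thread-safe, because connections come and go concurrently
    private readonly ConcurrentDictionary<string, string> _usersOnline =
        new ConcurrentDictionary<string, string>();

    public void AddUser(string connectionId, string userName)
    {
        _usersOnline.TryAdd(connectionId, userName);
    }

    public void RemoveUser(string connectionId)
    {
        string userName;
        _usersOnline.TryRemove(connectionId, out userName);
    }

    public IEnumerable<string> GetUsersOnline()
    {
        return _usersOnline.Values.Distinct();
    }
}

Registered as a singleton (services.AddSingleton<UserTracker>();), the state survives between requests, but not an application restart, which is fine for this use case.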

Setup IdentityServer4

To keep the samples easy, I do the logon of the users on the server side only. (I'll go through the SPA logon using React and IdentityServer4 in another blog post.) That means we are validating and using the sender's name on the server side only - in the MVC controller, the API controller and in the SignalR Hub.

It is recommended to set up IdentityServer4 in a separate web application. We will do it the same way. So I followed the quickstart documentation on the IdentityServer4 web site, created a new empty ASP.NET Core project and added the IdentityServer4 NuGet packages, as well as the MVC package and the StaticFiles package. I first planned to use ASP.NET Core Identity with IdentityServer4 to store the identities, but I changed that to keep the samples simple. For now I only use the in-memory configuration you can see in the quickstart tutorials; I'm able to switch to ASP.NET Identity or any other custom SQL storage implementation later on. I also copied the IdentityServer4 UI code from the IdentityServer4.Quickstart.UI repository into that project.

The Startup.cs of the IdentityServer project looks pretty clean. It adds the IdentityServer to the service collection and uses the IdentityServer middleware. While adding the services, I also add the configurations for the IdentityServer. As recommended and shown in the quickstart, the configuration is wrapped in the Config class that is used here:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        // configure identity server with in-memory stores, keys, clients and scopes
        services.AddIdentityServer()
            .AddDeveloperSigningCredential()
            .AddInMemoryIdentityResources(Config.GetIdentityResources())
            .AddInMemoryApiResources(Config.GetApiResources())
            .AddInMemoryClients(Config.GetClients())
            .AddTestUsers(Config.GetUsers());
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        // use identity server
        app.UseIdentityServer();

        app.UseStaticFiles();
        app.UseMvcWithDefaultRoute();
    }
}

The next step is to configure the IdentityServer4. As you can see in the snippet above, this is done in a class called Config:

public class Config
{
    public static IEnumerable<Client> GetClients()
    {
        return new List<Client>
        {
            new Client
            {
                ClientId = "reactchat",
                ClientName = "React Chat Demo",

                AllowedGrantTypes = GrantTypes.Implicit,
                    
                RedirectUris = { "http://localhost:5001/signin-oidc" },
                PostLogoutRedirectUris = { "http://localhost:5001/signout-callback-oidc" },

                AllowedScopes =
                {
                    IdentityServerConstants.StandardScopes.OpenId,
                    IdentityServerConstants.StandardScopes.Profile
                }
            }
        };
    }

    internal static List<TestUser> GetUsers()
    {
        return new List<TestUser> {
            new TestUser
            {
                SubjectId = "1",
                Username = "juergen@gutsch-online.de",
                Claims = new []{ new Claim("name", "Juergen Gutsch") },
                Password ="Hello01!"
            }
        };
    }
    public static IEnumerable<ApiResource> GetApiResources()
    {
        return new List<ApiResource>
        {
            new ApiResource("reactchat", "React Chat Demo")
        };
    }

    public static IEnumerable<IdentityResource> GetIdentityResources()
    {
        return new List<IdentityResource>
        {
            new IdentityResources.OpenId(),
            new IdentityResources.Profile(),
        };
    }
}

The client id is called reactchat. I configured both projects, the chat application and the identity server application, to run with specific ports. The chat application runs on port 5001 and the identity server uses port 5002. So the redirect URIs in the client configuration point to port 5001.
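One way to pin the ports is the launchSettings.json; another one is to set the URL in the Program.cs. The following sketch shows the second option and is an assumption, not necessarily the way the demo project does it:

// Program.cs of the chat app; the identity server
// pins port 5002 the same way
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseUrls("http://localhost:5001")
        .UseStartup<Startup>()
        .Build();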

Later on we are able to replace this in-memory configuration with a custom storage for the users and the clients.

We also need to setup the client (the chat application) to use this identity server.

Adding authentication to the chat app

To add authentication, I need to add some configuration to the Startup.cs. The first thing is to add the authentication middleware to the Configure method. This does all the authentication magic and handles multiple kinds of authentication:

app.UseAuthentication();

Be sure to add this line before the usage of MVC and SignalR. I also put this line before the usage of the StaticFilesMiddleware.
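Put together, the order in the Configure method looks roughly like this. This is a sketch; the actual method contains more code, and I assume the SignalR route from the previous part is mapped like this:

app.UseAuthentication();      // authentication first

app.UseStaticFiles();         // then static files

app.UseSignalR(routes =>      // the hub route from part 3
{
    routes.MapHub<ChatHub>("chat");
});

app.UseMvcWithDefaultRoute(); // MVC last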

Now I need to add and to configure the needed services for this middleware.

services.AddAuthentication(options =>
    {
        options.DefaultScheme = "Cookies";
        options.DefaultChallengeScheme = "oidc";                    
    })
    .AddCookie("Cookies")
    .AddOpenIdConnect("oidc", options =>
    {
        options.SignInScheme = "Cookies";

        options.Authority = "http://localhost:5002";
        options.RequireHttpsMetadata = false;
        options.TokenValidationParameters.NameClaimType = "name";

        options.ClientId = "reactchat";
        options.SaveTokens = true;
    });

We add cookie authentication as well as OpenID Connect authentication. The cookie is used to temporarily store the user's information, to avoid an OIDC login on every request. To keep the samples simple, I switched off HTTPS.

I need to specify the NameClaimType, because IdentityServer4 provides the user's name within a simpler claim name, instead of the long default one.

That's it for the authentication part. We now need to secure the chat. This is done by adding the AuthorizeAttribute to the HomeController. A minimal sketch of this (the actual controller contains more code; see the GitHub repository):
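[Authorize]
public class HomeController : Controller
{
    public IActionResult Index()
    {
        return View();
    }
}

Now the app will redirect to the identity server's login page, if we try to access the view created by the secured controller: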

After entering the credentials, we need to authorize the app to get the needed profile information from the identity server:

If this is done, we can start using the user's name in the chat. To do this, we need to change the AddMessage method in the ChatHub a little bit:

public void AddMessage(string message)
{
    var username = Context.User.Identity.Name;
    var chatMessage =  _chatService.CreateNewMessage(username, message);
    // Call the MessageAdded method to update clients.
    Clients.All.InvokeAsync("MessageAdded", chatMessage);
}

I removed the magic string with my name in it and replaced it with the username I get from the current Context. Now the chat uses the logged on user to add chat messages:

I'll not go into the user tracker here, to keep this post short. Please follow the GitHub repos to learn more about tracking the online state of the users.

Storing the messages

The idea is to keep the messages stored permanently on the server. The current in-memory implementation doesn't handle a restart of the application. Every time the app restarts, the memory gets cleared and the messages are gone. I want to use the Azure Table Storage here, because it is pretty simple to use and reading the storage is amazingly fast. We need to add another NuGet package to our app, which is the AzureStorageClient.

To encapsulate the Azure Storage I will create a ChatStorageRepository, that contains the code to connect to the Tables.

Let's quickly set up a new storage account on Azure. Log on to the Azure portal and go to the storage section. Create a new storage account and follow the wizard to complete the setup. After that you need to copy the storage credentials ("Account Name" and "Account Key") from the portal. We need them to connect to the storage account later on.

Be careful with the secrets

Never ever store the secret information in a configuration or settings file that is stored in the source code repository. You don't need to do this anymore, thanks to the user secrets and the Azure app settings.

All the secret information and the database connection string should be stored in the user secrets during development time. To setup new user secrets, just right click the project that needs to use the secrets and choose the "Manage User Secrets" entry from the menu:

Visual Studio then opens a secrets.json file for that specific project, which is stored somewhere in the current user's AppData folder. You see the actual location if you hover over the tab in Visual Studio. Add your secret data there and save the file.
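For this project the secrets.json could look like this. The keys match the configuration values read in the repository shown below; the values are placeholders, of course:

{
  "accountName": "mystorageaccount",
  "accountKey": "<the account key copied from the Azure portal>",
  "tableName": "chatmessages"
}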

The data then gets passed as configuration entries into the app:

// ChatMessageRepository.cs
private readonly string _tableName;
private readonly CloudTableClient _tableClient;
private readonly IConfiguration _configuration;

public ChatMessageRepository(IConfiguration configuration)
{
    _configuration = configuration;

    var accountName = configuration.GetValue<string>("accountName");
    var accountKey = configuration.GetValue<string>("accountKey");
    _tableName = _configuration.GetValue<string>("tableName");

    var storageCredentials = new StorageCredentials(accountName, accountKey);
    var storageAccount = new CloudStorageAccount(storageCredentials, true);
    _tableClient = storageAccount.CreateCloudTableClient();
}

On Azure there is an app settings section in every Azure Web App. Configure the secrets there. These settings get passed as configuration items to the app as well. This is the most secure approach to store the secrets.

Using the table storage

You don't really need to create the actual table using the Azure portal. I do it in code, if the table doesn't exist. To do this, I needed to create a table entity object first. This defines the available fields in that Azure Table Storage:

public class ChatMessageTableEntity : TableEntity
{
    public ChatMessageTableEntity(Guid key)
    {
        PartitionKey = "chatmessages";
        RowKey = key.ToString("X");
    }

    public ChatMessageTableEntity() { }

    public string Message { get; set; }

    public string Sender { get; set; }
}

The TableEntity has three default properties, which are a Timestamp, a unique RowKey as string and a PartitionKey as string. The RowKey needs to be unique. In a users table the RowKey could be the user's email address. In our case we don't have a unique value in the chat messages, so we'll use a Guid instead. The PartitionKey is not unique and bundles several items into something like a storage unit. Reading entries from a single partition is quite fast; data inside a partition never gets split across many storage locations, it is kept together. In the current phase of the project it doesn't make sense to use more than one partition. Later on it would make sense to use e.g. one partition key per chat room.

The ChatMessageTableEntity has one constructor we will use to create a new entity and an empty constructor that is used by the TableClient to create it out of the table data. I also added two properties for the Message and the Sender. I will use the Timestamp property of the parent class for the time shown in the chat window.

Add a message to the Azure Table Storage

To add a new message to the Azure Table Storage, I created a new method to the repository:

// ChatMessageRepository.cs
public async Task<ChatMessageTableEntity> AddMessage(ChatMessage message)
{
    var table = _tableClient.GetTableReference(_tableName);

    // Create the table if it doesn't exist.
    await table.CreateIfNotExistsAsync();

    var chatMessage = new ChatMessageTableEntity(Guid.NewGuid())
    {
        Message = message.Message,
        Sender = message.Sender
    };

    // Create the TableOperation object that inserts the chat message entity.
    TableOperation insertOperation = TableOperation.Insert(chatMessage);

    // Execute the insert operation.
    await table.ExecuteAsync(insertOperation);

    return chatMessage;
}

This method uses the TableClient created in the constructor.

Read messages from the Azure Table Storage

Reading the messages is done using the method ExecuteQuerySegmentedAsync. With this method it is possible to read the table entities in chunks from the Table Storage. This makes sense, because there is a request limit of 1000 table entities. In my case I don't want to load all the data anyway, but only the latest 100:

// ChatMessageRepository.cs
public async Task<IEnumerable<ChatMessage>> GetTopMessages(int number = 100)
{
    var table = _tableClient.GetTableReference(_tableName);

    // Create the table if it doesn't exist.
    await table.CreateIfNotExistsAsync();
    string filter = TableQuery.GenerateFilterCondition("PartitionKey", 
        QueryComparisons.Equal, "chatmessages");
    var query = new TableQuery<ChatMessageTableEntity>()
        .Where(filter)
        .Take(number);

    var entities = await table.ExecuteQuerySegmentedAsync(query, null);

    var result = entities.Results.Select(entity =>
        new ChatMessage
        {
            Id = entity.RowKey,
            Date = entity.Timestamp,
            Message = entity.Message,
            Sender = entity.Sender
        });

    return result;
}
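The second argument of ExecuteQuerySegmentedAsync, which is null here, is the continuation token. If you ever need to read more than one segment, e.g. the complete history, you would loop until the token comes back as null. This hypothetical helper is not part of the demo app and only illustrates the chunked reading:

// reads all entities of the partition, segment by segment
public async Task<List<ChatMessageTableEntity>> GetAllMessages()
{
    var table = _tableClient.GetTableReference(_tableName);

    string filter = TableQuery.GenerateFilterCondition("PartitionKey",
        QueryComparisons.Equal, "chatmessages");
    var query = new TableQuery<ChatMessageTableEntity>().Where(filter);

    var results = new List<ChatMessageTableEntity>();
    TableContinuationToken token = null;
    do
    {
        var segment = await table.ExecuteQuerySegmentedAsync(query, token);
        results.AddRange(segment.Results);
        token = segment.ContinuationToken; // null, if there are no more segments
    } while (token != null);

    return results;
}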

Using the repository

In the Startup.cs I changed the registration of the ChatService from Singleton to Transient, because we don't need to store the messages in memory anymore. I also added a transient registration for the IChatMessageRepository:

services.AddTransient<IChatMessageRepository, ChatMessageRepository>();
services.AddTransient<IChatService, ChatService>();

The IChatMessageRepository gets injected into the ChatService. Since the Repository is async I also need to change the signature of the service methods a little bit to support the async calls. The service looks cleaner now:

public class ChatService : IChatService
{
    private readonly IChatMessageRepository _repository;

    public ChatService(IChatMessageRepository repository)
    {
        _repository = repository;
    }

    public async Task<ChatMessage> CreateNewMessage(string senderName, string message)
    {
        var chatMessage = new ChatMessage(Guid.NewGuid())
        {
            Sender = senderName,
            Message = message
        };
        await _repository.AddMessage(chatMessage);

        return chatMessage;
    }

    public async Task<IEnumerable<ChatMessage>> GetAllInitially()
    {
        return await _repository.GetTopMessages();
    }
}

Also the controller action and the Hub method need to change to support the async calls. It is only about making the methods async, returning Tasks and awaiting the service methods.

// ChatController.cs
[HttpGet("[action]")]
public async Task<IEnumerable<ChatMessage>> InitialMessages()
{
    return await _chatService.GetAllInitially();
}
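The Hub method changes the same way. A sketch of the async version, based on the AddMessage method shown earlier in this series:

// ChatHub.cs
public async Task AddMessage(string message)
{
    var username = Context.User.Identity.Name;
    var chatMessage = await _chatService.CreateNewMessage(username, message);
    // Call the MessageAdded method to update clients.
    await Clients.All.InvokeAsync("MessageAdded", chatMessage);
}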

Almost done

The authentication and the storage of the messages are done now. What needs to be done in the last step is to add the logged-on user to the UserTracker and to push the new user to the client. I'll not cover that in this post, because it already has more than 410 lines and more than 2700 words. Please visit the GitHub repository during the next days to learn how I did this.

Closing words

Even this post wasn't really about React: the authentication is only done on the server side, since this isn't really a single page application.

To finish this post I needed some more time to get the authentication using IdentityServer4 running. I got stuck on an "Invalid redirect URL" error. In the end it was just a small typo in the RedirectUris property of the client configuration of the IdentityServer, but it took some hours to find it.

In the next post I will come back a little bit to React and Webpack while writing about the deployment. I'm going to write about automated deployment to an Azure Web App using CAKE, running on AppVeyor.

I'm attending the MVP Summit next week, so the last post of this series, will be written and published from Seattle, Bellevue or Redmond :-)

Creating a chat application using React and ASP.NET Core - Part 5


In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about that topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors, by dropping a comment or by creating an issue on GitHub. Thanks.

Intro

In this post I will write about the deployment of the app to Azure App Services. I will use CAKE to build, pack and deploy the apps, both the identity server and the actual app. I will run the build on AppVeyor, which is a free build server for open source projects and works great for projects hosted on GitHub.

I'll not go deep into the AppVeyor configuration; the important topics are CAKE, Azure and the app itself.

BTW: SignalR moved on to the next version during the last weeks. It is no longer alpha. The current version is 1.0.0-preview1-final. I updated the version in the package.json and in the ReactChatDemo.csproj. Also the NPM package name changed from "@aspnet/signalr-client" to "@aspnet/signalr". I needed to update the import statement in the WebsocketService.ts file as well. After updating SignalR I got some small breaking changes, which were easily fixed. (Please see the GitHub repo to learn about the changes.)

Setup CAKE

CAKE is a build DSL that is built on top of Roslyn to use C#. CAKE is open source and has a huge community, which creates a ton of add-ins for it. It also has a lot of built-in features.

Setting up CAKE is easily done. Just open PowerShell and cd to the solution folder. Now you need to load a PowerShell script that bootstraps the CAKE build and loads more dependencies if needed:

Invoke-WebRequest https://cakebuild.net/download/bootstrapper/windows -OutFile build.ps1

Later on, you need to run the build.ps1 to start your build script. Now the setup is complete and I can start to create the actual build script.

I created a new file called build.cake. To edit the file it makes sense to use Visual Studio Code, because @code also has IntelliSense for CAKE. In Visual Studio 2017 you only have syntax highlighting. Currently I don't know of an add-in for VS to enable IntelliSense.

My starting point for every new build script is the simple example from the quick start demo:

var target = Argument("target", "Default");

Task("Default")
  .Does(() =>
  {
    Information("Hello World!");
  });

RunTarget(target);

The script then gets started by calling the build.ps1 in PowerShell:

.\build.ps1

If this is working, I'm able to start hacking the CAKE script in. Usually the build steps I use look like this:

  • Cleaning the workspace
  • Restoring the packages
  • Building the solution
  • Running unit tests
  • Publishing the app
    • In the context of non-web application this means packaging the app
  • Deploying the app

To deploy the app I use the CAKE Kudu client add-in, and I need to pass in some Azure App Service credentials. You get these credentials by downloading the publish profile from the Azure App Service. You can just copy the credentials out of the file. Be careful and don't save the secrets in the build script or any other committed file. I usually store them in environment variables and read them from there. Because I have two apps (the actual chat app and the identity server) I need to do it twice:

#addin nuget:?package=Cake.Kudu.Client

string  baseUriApp     = EnvironmentVariable("KUDU_CLIENT_BASEURI_APP"),
        userNameApp    = EnvironmentVariable("KUDU_CLIENT_USERNAME_APP"),
        passwordApp    = EnvironmentVariable("KUDU_CLIENT_PASSWORD_APP"),
        baseUriIdent   = EnvironmentVariable("KUDU_CLIENT_BASEURI_IDENT"),
        userNameIdent  = EnvironmentVariable("KUDU_CLIENT_USERNAME_IDENT"),
        passwordIdent  = EnvironmentVariable("KUDU_CLIENT_PASSWORD_IDENT");

var target = Argument("target", "Default");

Task("Clean")
    .Does(() =>
          {	
              DotNetCoreClean("./react-chat-demo.sln");
              CleanDirectory("./publish/");
          });

Task("Restore")
	.IsDependentOn("Clean")
	.Does(() => 
          {
              DotNetCoreRestore("./react-chat-demo.sln");
          });

Task("Build")
	.IsDependentOn("Restore")
	.Does(() => 
          {
              var settings = new DotNetCoreBuildSettings
              {
                  NoRestore = true,
                  Configuration = "Release"
              };
              DotNetCoreBuild("./react-chat-demo.sln", settings);
          });

Task("Test")
	.IsDependentOn("Build")
	.Does(() =>
          {
              var settings = new DotNetCoreTestSettings
              {
                  NoBuild = true,
                  Configuration = "Release",
                  NoRestore = true
              };
              var testProjects = GetFiles("./**/*.Tests.csproj");
              foreach(var project in testProjects)
              {
                  DotNetCoreTest(project.FullPath, settings);
              }
          });

Task("Publish")
	.IsDependentOn("Test")
	.Does(() => 
          {
              var settings = new DotNetCorePublishSettings
              {
                  Configuration = "Release",
                  OutputDirectory = "./publish/ReactChatDemo/",
                  NoRestore = true
              };
              DotNetCorePublish("./ReactChatDemo/ReactChatDemo.csproj", settings);
              settings.OutputDirectory = "./publish/ReactChatDemoIdentities/";
              DotNetCorePublish("./ReactChatDemoIdentities/ReactChatDemoIdentities.csproj", settings);
          });

Task("Deploy")
	.IsDependentOn("Publish")
	.Does(() => 
          {
              var kuduClient = KuduClient(
                  baseUriApp,
                  userNameApp,
                  passwordApp);
              var sourceDirectoryPath = "./publish/ReactChatDemo/";
              var remoteDirectoryPath = "/site/wwwroot/";

              kuduClient.ZipUploadDirectory(
                  sourceDirectoryPath,
                  remoteDirectoryPath);

              kuduClient = KuduClient(
                  baseUriIdent,
                  userNameIdent,
                  passwordIdent);
              sourceDirectoryPath ="./publish/ReactChatDemoIdentities/";
              remoteDirectoryPath = "/site/wwwroot/";

              kuduClient.ZipUploadDirectory(
                  sourceDirectoryPath,
                  remoteDirectoryPath);
          });

Task("Default")
    .IsDependentOn("Deploy")
    .Does(() =>
          {
              Information("Your build is done :-)");
          });

RunTarget(target);

To get this script running locally, you need to set each of the environment variables in the current PowerShell session:

$env:KUDU_CLIENT_PASSWORD_APP = "super secret password"
# and so on...

If you only want to test the compile and publish stuff, just set the dependency of the default target to "Publish" instead of "Deploy". This way the deploy part will not run, you don't deploy by accident and you save some time while trying this.
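That change is a single line in the build script:

Task("Default")
    .IsDependentOn("Publish") // was: .IsDependentOn("Deploy")
    .Does(() =>
          {
              Information("Your build is done :-)");
          });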

Use CAKE in AppVeyor

On AppVeyor the environment variables are set in the UI. Don't set them in the YAML configuration, because they are not properly safe there and everybody can see them.

The simplest appveyor.yml file looks like this:

version: 1.0.0-preview1-{build}
pull_requests:
  do_not_increment_build_number: true
branches:
  only:
  - master
skip_tags: true
image: Visual Studio 2017 Preview
build_script:
- ps: .\build.ps1
test: off
deploy: off
# this is needed to install the latest node version
environment:
  nodejs_version: "8.9.4"
install:
  - ps: Install-Product node $env:nodejs_version
  # write out version
  - node --version
  - npm --version

This configuration only builds the master branch, which makes sense if you use Git Flow, as I do. Otherwise change it to the branch or branches you want to build. I skip building tags and any other branches.

The image is Visual Studio 2017. (Use the preview image only if you want to try the latest features.)

I can switch off tests, because this is done in the CAKE script. The good thing is that the XUnit test output, generated by the test runs in CAKE, gets published to the AppVeyor reports anyway. Deploy is also switched off, because it's done in CAKE too.

The last thing that needs to be done is to install the latest Node.js version. Otherwise the already installed, pretty much outdated version is used. This is needed to download the React dependencies and to run Webpack to compile and bundle the React app.

You could also configure the CAKE script in a way that test, deploy and build call different targets inside CAKE. But this is not really needed and makes the build a little less readable.

If you now push the entire repository to your repository on GitHub, you need to go to AppVeyor and set up a new build project by selecting your GitHub repository. A new AppVeyor account is easily set up using an existing GitHub account. When the build project is created, you don't need to set up anything more. Just start a new build and see what happens. Hopefully you'll also get a green build like this:

Closing words

This post was finished one day after the Global MVP Summit 2018, on a pretty sunny day in Seattle.

I spent two nights in Seattle downtown before the summit started, and the two nights after. Both times it was unexpectedly sunny.

I finish this series with this fifth blog post, having learned a little bit about React and how it behaves in an ASP.NET Core project. And I really like it. I wouldn't do a complete single page application using React; this seems to be much easier and faster using Angular. But I will definitely use React in the future to create rich HTML UIs.

It works great using the React ASP.NET Core project in Visual Studio. It is great that Webpack is used here, because it saves a lot of time and avoids hacking around the VS environment.

Recap the MVP Global Summit 2018


Being an MVP has a lot of benefits. Getting free tools, software and Azure credits are just a few of them. The direct connection to the product group has a lot more value than all the software. Even more valuable is the fact of being a part of an expert community with more than 3700 MVPs from around the world.

In fact there are a lot more experts outside the MVP community which are also contributing to the communities of the Microsoft related technologies and tools. Being an MVP also means to find those experts and to nominate them to also get the MVP award.

The biggest benefit of being an MVP is the yearly MVP Global Summit in Redmond. Also this year Microsoft invited the MVPs to attend the MVP Global Summit. More than 2000 MVPs and Regional Directors were registered to attend the summit.

I also attended the summit this year. It was my third summit and the third chance to directly interact with the product group and with other MVPs from all over the world.

The first days in Seattle

My journey to the summit started at Frankfurt airport, where a lot of German, Austrian and Swiss MVPs start their journey and where many more MVPs from Europe change planes. The LH490 and LH491 flights around the summits are called the "MVP planes" because of this. It always feels like a yearly huge school trip.

The flight was great, sunny most of the time, and I had an impressive view over Greenland and Canada:

Greenland

After we arrived at Sea-Tac, some German MVP friends and I took the train to Seattle downtown. We checked in at the hotels and went for a beer and a burger. This year I decided to arrive one day earlier than in the last years and to stay in Seattle downtown for the first two nights and the last two nights. This was a great decision.

Pike Place Seattle

I spent the nights just a few steps away from Pike Place. I really love the special atmosphere at this place and in this area. There are a lot of small stores, small restaurants, the farmers market and the breweries. Also the very first Starbucks store is at this place. It's really a special place. Staying there also allowed me to use the public transportation, which works great in Seattle.

There is a direct train from the airport to Seattle downtown and an express bus from Seattle downtown to the center of Bellevue, where the conference hotels are located. For those of you who don't want to spend 40 USD or more for Uber, a taxi or a shuttle: the train to Seattle costs 3 USD and the express bus 2.70 USD. Both need around 30 minutes; maybe you need to wait a few minutes in the underground station in Seattle.

The Summit days

After checking in at my conference hotel on Sunday morning, I went to the registration, but it seemed I was pretty early:

Summit Registration

But that wasn't really right. Most of the MVPs were in the queue to register for the conference and to get their swag.

Like in the last years, the summit days were amazing, even if we didn't really learn a lot of really new things in my contribution area. Most of the stuff in my MVP category is open source and openly discussed on GitHub and Twitter and in the blog posts written by Microsoft. Anyway, we learned about some cool ideas, which I unfortunately cannot write down here, because it is almost all NDA content.

So the most amazing things during the summit are the events and parties around the conference and meeting all the famous MVPs and Microsoft employees. I'm not really a selfie guy, but this time I really needed to take a picture with the amazing Phil "Mister ASP.NET MVC" Haack.

Phil Haak

I'm also glad to have met Steve Gordon, Andrew Lock, David Pine, Damien Bowden, Jon Galloway, Damian Edwards, David Fowler, Immo Landwerth, Glen Condron, and many, many more. And of course the German speaking MVP family from Germany (D), Austria (A) and Switzerland (CH) (aka DACH).

Special Thanks to Alice, who manages all the MVPs in the DACH area.

I'm also pretty glad to have met the owner of millions of hats, Mr. Jeff Fritz, in person, who asked me to do a lightning talk in front of many program managers during the summit. Five MVPs should tell the developer division program managers stories about the worst or the best things about the development tools. I was quite nervous, but it worked out well, mostly because Jeff was super cool. I told a worst-case story about the usage of Visual Studio 2015 and TFS by a customer with a huge amount of solutions and a lot more VS projects in them. It was pretty weird to also tell Julia Liuson (Corporate Vice President of Visual Studio) about those problems. But she was really nice and asked the right questions.

BTW: The power bank (battery pack) we got from Jeff, after the lightning talk, is the best power bank I ever had. Thanks Jeff.

On Thursday, the last summit day for the VS and dev tools MVPs, there was a hackathon. They provided different topics to work on. There was a table for working with Blazor, another one for some IoT things, F# and C#. And even VB.NET still seems to be a thing ;-)

My idea was to play around with Blazor, but I wanted to finalize a contribution to the ASP.NET documentation first. Unfortunately this took longer than expected, which is why I left that table and took a place at another one. I fixed an over-localization issue in the German ASP.NET documentation and took care of an issue on LightCore. On LightCore we currently have an open issue regarding some special registrations done by ASP.NET Core. We thought it was because of registrations done after the IServiceProvider was created, but David Fowler told me the provider is immutable and pointed me to the registrations of open generics. LightCore already supports open generics, but implemented the resolution in a wrong way. In case a registration of a list of generics is not found, LightCore should return an empty list instead of null.

It was amazing how fast David Fowler pointed me to the right problem. Those guys are crazy smart. Just a few seconds after I showed him the missing registration, I got the right answer. Glen Condron told me right after how to isolate this issue and test it. Problem found, and I just need to fix it.

Thanks guys :-)

The last days in Seattle

I also spent the last two nights at the same location near Pike Place. Right after the hackathon, I grabbed my luggage at the conference hotel and used the express bus to go to Seattle again. I had a nice dinner together with André Krämer at the Pike Brewery. On the next morning I had an amazingly yummy breakfast in a small restaurant at the Pike Place Market, with a pretty cool morning view of the waterfront. Together with Kostja Klein, we had a cool chat about this and that, INETA Germany and JustCommunity.

The last day usually is also the time to buy some souvenirs for the kids, my lovely wife and the Mexican exchange student who lives in our house. I also finished the blog series about React and ASP.NET Core.

On the last morning in Seattle, I strolled over Pike Street into the Starbucks to grab a small breakfast. It was pretty early at Pike Place:

Pike Place Seattle

Leaving the Seattle area and the summit feels a little bit like leaving a second home.

I'm really looking forward to the next summit :-)

BTW: Seattle isn't about rainy and cloudy weather

Have I already told you that every time I visited Seattle, it was sunny and warm?

It's because of me, I think.

During the last summits it was sunny whenever I visited Seattle downtown. In summer 2012, I was in a pretty warm and sunny Seattle, together with my family.

This time it was quite warm during the first days. It started to rain, when I left Seattle to go to the summit locations in Bellevue and Redmond and it was sunny and warm again when I moved back to Seattle downtown.

It's definitely because of me, I'm sure. ;-)

Or maybe the rainy cloudy Seattle is a different one ;-)

Topics I'll write about

Some of the topics I'm allowed to write about, and definitely will write about in the next posts, are the following:

  • News on ASP.NET Core 2.1
  • News on ASP.NET (yes, it is still alive)
  • New features in C# 7.x
  • Live Share
  • Blazor

Why I use paket now


I never really had any major problems using the NuGet client. Reading the Twitter timeline, it seems I am the only one without problems. But depending on what dev process you like to use, there could be a problem. This is not really NuGet's fault, but this process makes the usage of NuGet a little bit more complex than it should be.

As mentioned in previous posts, I really like to use Git Flow and the clear branching structure. I always have a production branch, which is the master. It contains the sources of the version which is currently in production.

In my projects I don't need to care about multiple versions installed on multiple customer machines. Usually, as a web developer, you only have one production version installed somewhere on a web server.

I also have a next version branch, which is the develop branch. This contains the version we are currently working on. Besides these, we can have feature branches, hotfix branches, release branches and so on. Read more about Git Flow in this pretty nice cheat sheet.

The master branch gets compiled in release mode and uses a semantic version like this: (breaking).(feature).(patch), for example 1.2.0. The develop branch gets compiled in debug mode and has a version number that tells NuGet that it is a preview version: (breaking).(feature).(patch)-preview(build), for example 1.3.0-preview42, where build is the build number generated by the build server.

The actual problem

We use this versioning, build and release process for web projects and shared libraries. And with those shared libraries it starts to get complicated when using NuGet.

Some of the shared libraries are used in multiple solutions and shared via a private NuGet feed, which is a common way, I think.

Within the next version of a web project we also use the next versions of the shared libraries to test them. In the current versions of the web projects we use the current versions of the shared libraries. Makes kinda sense, right? If we do a new production release of a web project, we need to switch back to the production versions of the shared libraries.

The problem: in the solution's packages folder, NuGet creates package sub-folders containing the version number, and the projects reference the binaries from those folders. Changing the library versions means using the UI or changing the packages.config AND the project files, because the reference path contains the version information.
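This is what such a reference looks like in a classic project file. The version number is baked into the hint path; the package name and the framework folder here are placeholders:

<Reference Include="My.SharedLibrary">
  <HintPath>..\packages\My.SharedLibrary.1.2.3\lib\net461\My.SharedLibrary.dll</HintPath>
</Reference>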

Maybe switching the versions back and forth doesn't really make sense in most cases, but this is also the way I try new versions of the libraries. In this special case, we have to maintain multiple ASP.NET applications, which use multiple shared libraries, which in turn depend on different versions of external data sources. So a preview release of an application also goes to a preview environment with a preview version of a database, so it needs to use the preview versions of the needed libraries. While releasing new features or hotfixes, it might happen that we need to do a release without updating the production environments and the production databases. So we need to switch the dependencies back to the latest production versions of the libraries.

Paket solves it

Paket instead only supports one package version per solution, which makes a lot more sense. This means Paket doesn't store the packages in a sub-folder with a version number in its name. Changing the package versions is easily done in the paket.dependencies file. The reference paths don't change in the project files, and the projects immediately use the other versions after I change the version and restore the packages.

Paket is an alternative NuGet client, developed by the amazing F# community.
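A paket.dependencies file for the scenario described above could look like the following sketch. The feed URL and the package names are placeholders:

source https://api.nuget.org/v3/index.json
source https://nuget.example.com/v3/index.json

nuget Newtonsoft.Json
nuget My.SharedLibrary = 1.2.3

Switching the develop branch to a preview version is then a one line change, e.g. to nuget My.SharedLibrary = 1.3.0-preview42, followed by a paket install.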

Paket works well

Fortunately Paket works well with MSBuild and CAKE. Paket provides MSBuild targets to automatically restore packages before the build starts. Also in CAKE there is an add-in to restore Paket dependencies. Because I don't commit Paket to the repository I use the command line interface of Paket directly in CAKE:

Task("CleanDirectory")
	.Does(() =>
	{
		CleanDirectory("./Published/");
		CleanDirectory("./packages/");
	});

Task("LoadPaket")
	.IsDependentOn("CleanDirectory")
	.Does(() => {
		var exitCode = StartProcess(".paket/paket.bootstrapper.exe");
		Information("LoadPaket: Exit code: {0}", exitCode);
	});

Task("AssemblyInfo")
	.IsDependentOn("LoadPaket")
	.Does(() =>
	{
		var file = "./SolutionInfo.cs";		
		var settings = new AssemblyInfoSettings {
			Company = " YooApplications AG",
			Copyright = string.Format("Copyright (c) YooApplications AG {0}", DateTime.Now.Year),
			ComVisible = false,
			Version = version,
			FileVersion = version,
			InformationalVersion = version + build
		};
		CreateAssemblyInfo(file, settings);
	});

Task("PaketRestore")
	.IsDependentOn("AssemblyInfo")
	.Does(() => 
	{	
		var exitCode = StartProcess(".paket/paket.exe", "install");
		Information("PaketRestore: Exit code: {0}", exitCode);
	});

// ... and so on

Conclusion

No process is 100% perfect, even this one is not, but it works pretty well in this case. We are able to do releases and hotfixes very fast. The setup of a new project using this process is fast and easy as well.

The whole process of releasing a new version, starting with the command git flow release start ... up to the deployed application on the web server, doesn't take more than 15 minutes, depending on the size of the application and the amount of tests to run on the build server.

I just recognized that this post is not about .NET Core or ASP.NET Core. The problem I described only happens with classic projects and solutions that store the NuGet packages in the solution's packages folder.

Any Questions about that? Do you wanna learn more about Git Flow, CAKE and Continuous Deployment? Just drop me a comment.

Running and Coding


I wasn't really sporty until two years ago, but I was active anyway. I was also forced to be active with three little kids and a sporty and lovely woman. But anyway, a job where I mostly sit in a comfortable chair, as well as great food and good southern German beers, also did their work. When I first met my wife, I weighed around 80kg, which is good for my height of 178cm. But my weight increased up to 105kg until Christmas 2015. That was way too much, I thought. Until then I always tried to reduce it by doing some more cycling, more hiking and some gym, but it never really worked out well.

Anyway, there is no more effective way to lose weight than running. It is, btw., three times more effective than cycling. I tried it a lot in the past, but it pretty much hurts in the lower legs, and I stopped more than once.

Running the agile way

I tried it again at Easter 2016, in a little different way, and it worked. I tried to do it the same way as in a perfect software project:

I did it in an agile way, using pretty small goals to get as much success as possible.

Also, I bought myself a fitness watch to count steps, calories and levels, and to measure the heart rate while running, to get some more challenges to work on. At the same time I changed my food a lot.

It sounds weird and funny, but it worked really well. I lost 20Kg since then!

I think it was important not to set goals that were too big. I just wanted to lose 20kg. I didn't set a time limit or anything like that.

I knew it hurts in the lower legs when running, so I started to learn a lot about running and the different styles of running. I chose the easy running style, which works pretty well with natural running shoes and barefoot shoes. This worked well for me.

Finding time to run

Finding the time was the hardest thing. In the past I always thought that I was too busy to run. I discussed it a lot with the family, and we figured out that the best time to run was during lunch time, because I need to walk the dog anyway, and this was also an option to run with the dog. This was also a good thing for our huge dog.

Running at lunch time has another good advantage: I get my brain cleaned a little bit after four to five hours of work. (Yes, I usually start between 7 and 8 in the morning.) Running is great when you are working on software projects with a huge level of complexity. Unfortunately, when I'm working in Basel, I cannot go running, because there is no shower available. But I'm still able to run three to four times a week.

Starting to run

The first runs were a real pain. I chose a small lap of 2.5km, because I needed to learn running as the first step. Also, because of the pain in the lower legs, I chose to run shorter tracks up-hill. Why up-hill? Because this is more exhausting than running on level ground. So I had short up-hill running phases and longer quick walking phases. Just a few runs later, the running phases started to get longer and longer.

The first success came just a few runs later. That was great. It was even greater when I finished my first full kilometer after 1.5 months of running every second day. That was amazing.

On every run there was a success, and that really pushed me. But I not only succeeded at running, I also started to lose weight, which pushed me even more. So the pain wasn't too hard and I continued running.

Some weeks later I ran the entire lap of 2.5km, not really fast, but without a walking pause. Some more motivation.

I continued running just these 2.5km for a few more weeks to get some success with personal records on this lap.

Low carb

I mentioned the change of food. I switched to a low-carb diet, which in general is a way to reduce the consumption of sugar. Every kind of sugar, which also means bread, potatoes, pasta, rice and corn. In the first phase of three months I almost completely stopped eating carbs. After that phase, I started to eat a few of them again. I also had one cheat day per week, where I was able to eat the normal way.

After six months of eating fewer carbs and running, I had lost around 10kg, which was amazing, and I was absolutely happy with this progress.

Cycling as a compensation

As already mentioned, I run every second day. On the days in between, I used my new mountain bike to climb the hills around the city where I live. It really was a kind of compensation, because cycling uses other parts of the legs (except when I run up-hill).

Using my smart watch, I was able to measure that running burns on average three times more calories per hour than cycling in the same time. This measurement was done on my person only and cannot be adopted for any other person, but it actually makes sense to me.

Unfortunately, cycling during the winter was a different kind of pain. It hurts the face, the feet and the hands. It was too cold, so I stopped cycling when the temperature was lower than 5 degrees.

Extending the lap

After a few weeks of running the entire 2.5km, I increased the length to 4.5km. This was more exhausting than expected. Two kilometers more needs a completely new kind of training. I needed to force myself not to run too fast at the beginning and to start managing my power. Again, I started slowly and used some walking pauses to get the whole lap done. During the next months the walking pauses decreased more and more, until I didn't need a walking pause on this lap anymore.

The first official run

Nine months later I wanted to challenge myself a little bit and attended my first public run. It was a New Year's Eve run. Pretty cold, but unexpectedly a lot of fun. I was running with my brother, which was a good idea. The atmosphere before and during the run was pretty special and I still like it a lot. I got three challenges done during this run: I reached the finish (1), I wasn't the last one who passed the finish line (2), and I got a new personal record on the 5km (3).

That was one year and three months ago. I did exactly the same run again last New Year's Eve and got a new personal record, was faster than my brother and reached the finish. Amazing. More success to push myself.

The first 10km

During the last year I increased the number of kilometers and attended some more public runs. In September 2015 I finished my first public 10km run. Even more success to push me forward.

I didn't increase the number of kilometers fast, just one kilometer at a time. I trained on one distance for one to three months and then added some more kilometers. Last spring I started to do a longer run on the weekends, just because I had the time to do it. On workdays it doesn't make sense to run more than 7km, because this would increase the time used for the lunch break too much. I try to use just one hour for the lunch run, including the shower and changing clothes.

Got it done

Last November I got it done: I had actually lost 20kg since I started to run. This was really great. It was a great thing to see a weight of less than 85kg.

Conclusion

How did running change my life? It changed it a lot. I cannot really live without running for more than two days. I get really nervous then.

Do I feel better since I started running? Because of the sports I am more tired than before, I have muscle aches, and I also had two sport accidents. But I'm pretty much more relaxed, I think. Physically, most of the time it feels bad, but in a weird, positive way, because I feel I've done something.

Also, some annoying work gets done more easily. I'm really looking forward to the next lunch break, to run the six or seven kilometers with the dog, or to ride the bike up and down the hills, and to get my brain cleaned up.

I'm running in almost every weather, except when it is too slippery because of ice or snow. Fresh snow is fine, mud is fun, rain I don't feel anymore, sunny is even better and heat is challenging. Only the dog doesn't love warm weather.

Crazy? Yes, but I love it.

So, do you want to follow me on Strava?
