Disclaimer: this is an automatic aggregator which pulls feeds and comments from the blogs of contributors to the Mono project. The contents of these blog entries do not necessarily reflect Xamarin's position.

February 5

Podcast: Simplify Your Code With C# 6

This week on the Xamarin Podcast, Mike and I are joined by James Montemagno to review all of the fantastic features introduced in C# 6 that simplify your code and bring readability to a new level.

Subscribe or Download Today

Knowing the latest in .NET, C#, and Xamarin is easier than ever with the Xamarin Podcast! The Xamarin Podcast is available from iTunes, Stitcher, and SoundCloud. Do you have an interesting story, project, or advice for other .NET mobile developers? If so, we’d love to share it with the Xamarin community! Tweet @pierceboggan or @MikeCodesDotNet to share your blog posts, projects, and anything else you think other mobile developers would find interesting. Be sure to download today’s episode breaking down all the awesome features of C# 6, and don’t forget to subscribe!

The post Podcast: Simplify Your Code With C# 6 appeared first on Xamarin Blog.

Report an Exception with Xamarin Insights Contest Winner

Proactively monitoring the health of your mobile apps is crucial to ensuring a bug-free, positive user experience. Xamarin Insights makes it extremely simple to do this by identifying what issues real users are facing and how to fix them. Xamarin Insights was promoted to general availability in Xamarin 4, giving all Xamarin subscribers access to free crash reporting with detailed crash reports and crashed-user identification.

Two weeks ago, we invited you to start monitoring the health of your apps by adding Xamarin Insights to your mobile app(s) with just a few lines of code and tweeting an unexpected exception discovered with your free crash reporting.

There were some exceptional entries, but I’m happy to announce that the winner of the “Report an Exception with Xamarin Insights” contest, and brand new Xamarin swag bag, is Ken Pespisa for his submission:


A big thank you from all of us here at Xamarin to everyone who entered the “Report an Exception with Xamarin Insights” contest and shared how Xamarin Insights came to the rescue for their mobile app! Everyone who submitted a valid entry will be receiving 10 Xamarin Test Cloud hours to help test their mobile apps on thousands of devices.

Didn’t get a chance to enter in this contest?

Be sure to follow us on Twitter @XamarinHQ to keep up with Xamarin announcements, walkthroughs, case studies, contests, and more!

The post Report an Exception with Xamarin Insights Contest Winner appeared first on Xamarin Blog.

The Making of The Robot Factory

We talked to Tinybop’s Rob Blackwood, lead iOS engineer, Jessie Sattler, production designer, and Cameron Erdogan, iOS engineer, about their experience using Unity to build The Robot Factory.

The Robot Factory was the 2015 iPad App of the Year on the App Store. It is the sixth app (of eight total, now) that the studio launched and the first in a new series of creative building apps for kids. As the first app in that series, it was the first they built with Unity. It was also the first app Tinybop made available for Apple TV.

Moving to Unity enabled the development and design teams to work together more quickly and efficiently. Rob Blackwood, lead iOS engineer, and Jessie Sattler, production designer, walk through how they work together to bring the app to life. Cameron Erdogan, junior iOS engineer, chimes in about preparing The Robot Factory for tvOS with Unity. Every app Tinybop builds is a learning process that helps refine and improve their process for the next app.

Building tools for development & design

Rob: As software engineers, it’s our duty to architect solutions for all the concepts that make up the app. We need to identify the app’s systems and rules and implement them through code. For example, a system we created in The Robot Factory determines the way a robot moves. In an app about plants, we created systems to represent the different seasons of the deciduous forest. There are also many cases where we must create tools for tweaking these systems and rules. These tools can then be used by a production designer to create the right look and feel for the app.

Jessie: As a production designer at Tinybop, I’m in charge of putting together the visual elements that live inside the app (to put it simply). We commission a different illustrator for each app, which gives us a range of styles and techniques. It’s my job to translate all the artwork into interactive, moving parts. I build scenes in Unity, animate characters and objects, and make sure everything runs smoothly between the art/design and the programming/development of our apps.

Rob: When we began our first app, we were a very small team using a development environment that required pure programming. We developed a tool for layout and physics simulation, but it was not very sophisticated. As our team grew, we realized we had a bottleneck on the engineering side, since almost everything had to be programmed and then built manually before it could be tested. Not having immediate visual feedback during development also meant a lot more iteration on the code, a time-consuming task. Not having an automated build system, like Unity Cloud Build, meant an engineer had to sink time into manually delivering a build to devices or sending it up to TestFlight.

Jessie: Our previous editor lacked a friendly interface for someone who wasn’t primarily working in the code. I relied heavily on the engineers to perform simple tasks that were not accessible to me. Unity has alleviated the engineers of menial production tasks, and at the same time enabled me to perfect things to the slightest detail. We also couldn’t see the result of what we were making until we built to device, whereas in Unity I can live preview the project as I work.

Rob: The most important thing Unity has done for us is allow us to easily separate engineering from production design. Unity is a very graphics-driven environment, which means production can do much of the visual layout before ever having to code a single line. This also allows us to continually integrate and iterate as the engineers develop more and more systems. The production team can get immediate feedback as they design because Unity lets you play and stop the app at any moment you like. We also use Unity Cloud Build, which lets us push new builds out to actual iOS devices as frequently as every 20 minutes. So, everyone can test and give feedback on the current state.

Jessie: Using Unity has made collaboration with engineers a dream! I can specify what visual effects I want to achieve. Then, we work together to build tools and scripts for me to use directly in the editor. The visual nature of Unity makes it much easier for me to have the control I need as an artist to get the projects to look the way we want them to. It also facilitates our iterative process. I can go back and forth with our engineers to find solutions that meet both our aesthetic and technical requirements.


In The Robot Factory, giving robots locomotion based on the parts they were created with was a big challenge. Using pure physics to move the robots made them too difficult to control, and having pre-planned walk cycles was boring and predictable. I worked with the engineers to create tools to draw the path of motion for each robot part, within a set of constraints, as well as each part’s gravity and rotational limits. We were able to maintain enough physics-based movement to get unique locomotion, but users still had enough control and part reliability to navigate their robots through a world.

Adapting for different apps & artwork

Rob: We’ve always given priority to the art, and we try not to funnel the artist toward too many particulars, style-wise. This can sometimes be difficult from a technical standpoint because it means our strategies for creating the feel of animations and interactions often need to change. The Robot Factory artwork has a lot of solid-colored shapes and hard edges. We were able to identify a fairly small set of re-useable elements that could be combined to create almost every robot part—each one comprising as many as 50 small pieces—that could then be animated independently. (This was important because real robots have a ton of moving parts, as everyone knows!) This contrasted sharply with our most recent app, The Monsters, where we wanted the monsters kids created to appear more organic and even paintable. In this instance, we created actual skeletons and attached something akin to skin so that they could bend naturally and be colored and textured dynamically when a child interacts with them. So while there are many challenges to adapting to different artistic styles, the benefit is that we are much closer to the artist’s vision, which is always more interesting.


Jessie: Each illustrator brings a different style, and thus a different set of challenges, to every app. On the production side, we have to decide what aspects of the art are integral to keep intact and what can be translated through simulations and programmatically generated art. A lot comes down to balancing three needs: widely scoped content, efficient development, and beautiful visuals. It’s a big challenge to create reusable techniques that we can carry over from app to app. Many instances call for unique solutions. Where we would rig, bone, and animate meshed skeletons in one case, another app might need large, hi-res sprites, or small repeatable vector shapes and SVGs. Having disparate techniques means longer production time, but because we deem the quality of art and design in our apps so important, it is a necessary step in the process.

Moving on over to tvOS with Unity

Cameron: I didn’t have to change much code to get The Robot Factory up and running on Apple TV. After downloading the alpha build of Unity with tvOS support, I made a branch off of our original app’s repository. After a day or two, I was able to get the app to compile onto the Apple TV. I had to remove a few external libraries that aren’t supported on Apple TV to get it to work, but the majority of the work was done by Unity: I merely switched the build target from iOS to tvOS. Pretty much all of the classes and frameworks that work with Unity on iOS work on tvOS, too.

After I got it to compile, I had to alter the controls and UI to make sense on TV. To do that, I used Unity’s Canvas UI system, which played surprisingly nicely with the Apple TV Remote. The last main thing I did was add cloud storage, since Apple TV has no local storage. To do that, I wrote a native iOS plug-in, which again was integrated easily with Unity.


Looking ahead

Jessie: We currently build apps with 2D assets in 3D space. This allows us to create certain dimensional illusions that help bring life to our apps. I’ve been experimenting with using more 3D shapes in our apps and working with new 3D particle controls in Unity 5.3. I’m excited about tastefully enhancing 2D worlds with 3D magic.

Rob: As we look to the future, we’d like to expand our apps to even more platforms. Unity attempts to make this step as seamless as possible by exporting to multiple platforms with just a little bit of additional engineering on our end. Like our experience moving to the tvOS platform, we hope Unity will do much of the heavy lifting for us. And by the way, we’re hiring senior Unity engineers right now. If you love Unity and building advanced simulations, look us up at http://www.tinybop.com/jobs.

The Robot Factory is available for iOS and Apple TV on the App Store:

Congratulations to Tinybop and thanks for sharing your story.

February 4

Consulting Partners Bring Real-World Experiences to Xamarin Evolve 2016

Since launching the Xamarin Consulting Partner program in 2012, the network has grown to over 350 partners worldwide. We’re excited to showcase the expertise from the following partners at Xamarin Evolve 2016 and we encourage you to attend to learn from these successful companies.

Zühlke: Is Your App Secure?

There’s a lot of discussion about security on the web, but what about app security? What do developers need to look out for when attempting to write a secure app? How should we handle sensitive data? What should we consider when designing an API consumed by a mobile app?

Kerry W Lothrop, Lead Software Architect at Zuehlke Group, will demonstrate the different security aspects that Android and iOS developers should be aware of, the corresponding infrastructure to consider at the beginning of their projects, and some techniques to help ensure a secure mobile app.

Magenic: Understanding Implications of Build Options

Xamarin.iOS and Xamarin.Android have several build options that can have a large impact on runtime performance, compile times, and even the size of an app binary. What changes when I switch between linker options or select the SGen generational garbage collector? Should I enable incremental builds? Kevin Ford, Mobile Practice Lead at Magenic, will compare these different options and discuss how to prepare your libraries for linking, or deal with a library that wasn’t. Understanding these build options can have huge benefits for the application you deploy.

Pariveda Solutions with their client Compass Professional Health Services: Healthcare Redefined: How We Used Xamarin to Make Healthcare Simpler and Smarter

With the rise of mobile technology and consumers’ desire to manage their healthcare via mobile devices, Compass saw an opportunity to transform their business and brought in technology consulting firm Pariveda Solutions to help them execute their mobile-first vision.

Given the diversity of the Compass client base (from truck drivers to CEOs), the first design consideration was that the app had to support multiple devices from the beginning. Working with Pariveda, Compass and its CTO, Cliff Sentell, were able to meet the goal of deploying across multiple devices, while also significantly reducing development time, lowering testing costs, and enabling data-backed decisions from analytics captured with Xamarin Insights.
 
 
 
You won’t want to miss the expertise shared by these partners at Xamarin Evolve 2016, so be sure to register today to reserve your spot!

Register Now

The post Consulting Partners Bring Real-World Experiences to Xamarin Evolve 2016 appeared first on Xamarin Blog.

February 3

Easy App Theming with Xamarin.Forms

Beautiful user interfaces sell mobile apps, and designing a successful user experience for your app is a great first step for success. But what about all of the little details that combine to create a fantastic design, such as colors and fonts? Even if you create what you believe to be the perfect design, users will often find something to dislike about it.

Why not let the user decide exactly how they would like their app to look? Many popular apps have taken this approach. Tweetbot has light and dark modes and the ability to change fonts to find the one that works best on the eyes during late-night Twitter sessions. Slack takes user customization to the next level by allowing users to customize the entire theme of the app through hexadecimal color values. Properly supporting theming also brings some tangible benefits to code, such as minimizing duplicated hardcoded values throughout apps to increase code maintainability.

Xamarin.Forms allows you to take advantage of styling to build beautiful, customizable UIs for iOS, Android, and Windows. In this blog post, we’re going to take a look at how to add theming to MonkeyTweet, a minimalistic (we mean it!) Twitter client, by replicating Tweetbot’s light and dark mode as well as Slack’s customizable theming.

Introduction to Resources

Resources allow you to share common definitions throughout an app to help you reduce hardcoded values in your code, resulting in massively increased code maintainability. Instead of having to alter every value in your app when a theme changes, you only have to change one: the resource.

In the code below, you can see several duplicated values that could be extremely tedious to replace and are ideal candidates for using resources:
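For illustration, a hypothetical MonkeyTweet ViewCell might repeat the same color and font-size literals on every control (the markup below is a sketch, not the original listing):

```xml
<ViewCell>
    <StackLayout BackgroundColor="#33302E">
        <!-- The same hardcoded literals are repeated on each Label -->
        <Label TextColor="White" FontSize="24" Text="{Binding Username}" />
        <Label TextColor="White" FontSize="24" Text="{Binding Text}" />
    </StackLayout>
</ViewCell>
```

Changing the theme here means hunting down every occurrence of "#33302E", "White", and "24" by hand.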

Resources are grouped together and stored in a ResourceDictionary, a key-value store that is optimized for use with a user interface. Because a ResourceDictionary is a key-value store, you must supply an x:Key attribute for each resource defined:

<Color x:Key="backgroundColor">#33302E</Color>
<Color x:Key="textColor">White</Color>
<x:Double x:Key="fontSize">24</x:Double>

You can define a ResourceDictionary at both the page and app-level, depending on the particular scope needed for the resource at hand. If a particular resource will be shared among multiple pages, it’s best to define it at the app-level in App.xaml to avoid duplication, as we do below with the MonkeyTweet app:


    
		
<Application.Resources>
    <ResourceDictionary>
        <Color x:Key="backgroundColor">#33302E</Color>
        <Color x:Key="textColor">White</Color>
    </ResourceDictionary>
</Application.Resources>
        
    

Now that we have defined reusable resources in our application ResourceDictionary, how do we reference these values in XAML? Let’s take a look at the two main types of resources, StaticResource and DynamicResource, and how we can utilize them to add a light and dark mode to MonkeyTweet.

Static Resources

The StaticResource markup extension allows us to reference predefined resources, but has one key limitation: resources from the dictionary are only fetched once, during control instantiation, and cannot be altered at runtime. The syntax is very similar to that for bindings; just set the property’s value to “{StaticResource Resource_Name}”. Let’s update our ViewCell to use the resources we defined:
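A sketch of the updated markup (the backgroundColor and textColor key names match the resources used elsewhere in this post; the surrounding ViewCell markup is illustrative):

```xml
<ViewCell>
    <StackLayout BackgroundColor="{StaticResource backgroundColor}">
        <Label TextColor="{StaticResource textColor}" Text="{Binding Username}" />
        <Label TextColor="{StaticResource textColor}" Text="{Binding Text}" />
    </StackLayout>
</ViewCell>
```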

Dynamic Resources

StaticResources are a great way to reduce duplicated values, but what we need is the ability to alter the resource dictionary at runtime (and have those resource updates reflected where referenced). DynamicResource should be used for dictionary keys associated with values that might change during runtime. Additionally, unlike static resources, dynamic resources don’t generate a runtime exception if the resource is invalid and will simply use the default property value.

We want MonkeyTweet’s user interface to be able to switch between light and dark modes at runtime, so DynamicResource is perfect for this situation. All we need to do is change StaticResources to DynamicResources. Updating our resources on-the-fly is super easy as well:

App.Current.Resources ["backgroundColor"] = Color.White;
App.Current.Resources ["textColor"] = Color.Black;

Users can now switch between a light and dark theme with the click of a button:
Monkey Tweet with a dark and light theme applied via dynamic resources.

Introduction to Styles

When building a user interface and theming an app, you may find yourself repeatedly configuring controls in a similar way. For example, all controls that display text may use the same font, font attributes, and size. A style is a collection of property-value pairs called Setters. Rather than repeatedly setting each of these properties to a particular resource, you can create a style and then simply set the Style property to handle the theming for you.

Building Custom Styles

To define a style, we can take advantage of the application-wide resource dictionary to make the style available to all controls. Just like resources, each style must contain a unique key and a target class name. A style is made up of one or more Setters, each supplying a property name and a value for that property. The TargetType property defines which controls the style can apply to; you can even set this to VisualElement to have the style apply to all subclasses of VisualElement. Setters can also take advantage of resources to further increase maintainability.


    
		
<Application.Resources>
    <ResourceDictionary>
        <Color x:Key="backgroundColor">#33302E</Color>
        <Color x:Key="textColor">White</Color>
        <Style x:Key="labelStyle" TargetType="Label">
            <Setter Property="TextColor" Value="{DynamicResource textColor}" />
        </Style>
    </ResourceDictionary>
</Application.Resources>
			
        	
        
    

We can apply this style by setting the Style property of a control to the name of the style’s unique key. All properties from the style will be applied to the control. If a property that is part of a referenced style is also explicitly defined on the control, the explicitly set value overrides the one in the style.
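For example, assuming a Label style stored under the key labelStyle (the key name here is illustrative), applying it looks like this:

```xml
<!-- An explicitly set property, such as FontSize here, wins over the style's setter -->
<Label Text="{Binding Text}" FontSize="18" Style="{DynamicResource labelStyle}" />
```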

Our style is a dynamic resource behind the scenes, so it can be altered at runtime. I’ve created a custom page that allows users to enter their own hexadecimal colors to theme MonkeyTweet, thanks to Xamarin.Forms resources and styles.


Conclusion

In this blog post, we took a look at theming applications with Xamarin.Forms styles by giving our MonkeyTweet application a customizable, user-defined theme. We’ve only just scratched the surface of styling; there are lots of other cool things you can do, including style inheritance, implicit styling, platform-specific styling, and prebuilt styles. Be sure to download the MonkeyTweet application to apply your own theme and see just how easy it is to build beautiful, themed UIs with Xamarin.Forms!

The post Easy App Theming with Xamarin.Forms appeared first on Xamarin Blog.

Live Webinar: Xamarin vs. Hybrid HTML: Making the Right Choice for the Enterprise

Selecting the right mobile platform for your enterprise can be a high-risk gamble that will affect thousands of your employees and millions of your customers. Building the right app will either digitally transform your business or derail your efforts and force you to start over while the industry and customers leave you behind.

The two leading choices for building cross-platform native apps are Xamarin or hybrid mobile solutions that utilize HTML and JavaScript. How do you know which option is the best fit for you? Which solution provides superior user experience (UX), performance, faster development time, full hardware access, and a lower TCO?

Magenic, a leading solution provider, built an enterprise-focused application using the Xamarin Platform and a hybrid HTML framework to quantitatively compare the differences between the two approaches. In this webinar, Steven Yi from Xamarin and Kevin Ford of Magenic will break down the essential advantages, roadblocks, and observations they found to help you make the best choice for your strategic mobile initiatives.

Sign up below to join us on Thursday, February 18, 2016 at 8:30 am PT / 11:30 am ET / 4:30 pm GMT.
 

Register

About the Speakers

Kevin Ford
Kevin Ford is the Mobile Practice Lead with Magenic, leading development with native mobile technologies, Xamarin, and Cordova. He has worked with application development using the Microsoft stack for over twenty years. He is an accomplished architect, speaker and thought leader.
 
 
Steven Yi, Xamarin
Steven Yi is the Head of Product Marketing at Xamarin. Prior to Xamarin, he held senior leadership roles in product management and strategy for Microsoft Azure and Red Hat, and architected and developed large-scale applications.

The post Live Webinar: Xamarin vs. Hybrid HTML: Making the Right Choice for the Enterprise appeared first on Xamarin Blog.

Light Probe Proxy Volume: 5.4 Feature Showcase

Unity 5.4 has entered beta, and a standout feature is the Light Probe Proxy Volume (LPPV). I just wanted to share with you all what it is, the workflow, and some small experiments to show it in action.

Correct as of 30.01.2016 – subject to change during the 5.4 beta.

What Is A Light Probe Proxy Volume?

The LPPV is a component which allows for more light information to be used on larger dynamic objects that cannot use baked lightmaps, think Skinned Meshes or Particle Systems. Yes! Particle Systems receiving Baked Light information, awesome!

How To Use The LPPV Component?

The LPPV component works in conjunction with a Light Probe Group. The component is located under Component -> Rendering -> Light Probe Proxy Volume. By default, the component looks like this:
[Screenshot: the default Light Probe Proxy Volume component in the Inspector]

It’s a component you add to a GameObject, such as a Mesh or even a Light Probe Group. The GameObject you want to be affected by the LPPV needs to have a MeshRenderer / Renderer with its Light Probes property set to “Use Proxy Volume”:

[Screenshot: a Renderer’s Light Probes property set to Use Proxy Volume]

You can borrow an existing LPPV component used by other GameObjects via the Proxy Volume Override: just drag and drop it into the property field of each Renderer that should use it. For example, if you added the LPPV component to the Light Probe Group object, you can then share it across all renderers with the Proxy Volume Override property:

[Screenshot: the Proxy Volume Override property]

Setting up the Bounding Box:

There are three options for setting up your bounding box:

  • Automatic Local
  • Automatic World
  • Custom

Automatic Local:

Default property setting. The bounding box is computed in local space, and interpolated light probe positions are generated inside this box. The bounding box computation encloses the current Renderer and all Renderers down the hierarchy that have the Light Probes property set to Use Proxy Volume; Automatic World behaves the same way.


Automatic World:

A world-aligned bounding box is computed. The Automatic World and Automatic Local options should be used in conjunction with the Proxy Volume Override property on other Renderers. Additionally, you could have a whole hierarchy of GameObjects that use the same LPPV component set on a parent in the hierarchy.

The difference between this mode and Automatic Local is cost and fit: in Automatic Local, the bounding box is more expensive to compute when a large hierarchy of GameObjects shares the same LPPV component from a parent GameObject, but the resulting bounding box may be smaller, meaning the lighting data is more compact.

Custom:

This lets you edit the bounding box volume yourself, changing the size and origin values in the Inspector or using the editing tools in the scene view. The bounding box is specified in the local space of the GameObject. In this case, you will need to ensure that all the Renderers are within the bounding box of the LPPV.


Setting Up Resolution / Density:

After setting up your bounding box, you then need to consider the density/resolution of the proxy volume. There are two options available under Resolution Mode:

Automatic:

Default property setting. Set a value for the density, i.e. the number of probes per unit. The number of probes in the X, Y, and Z axes is then calculated from this density and the bounding box size.

Custom:

Set custom resolution values for the X, Y, and Z axes using the drop-down menus. Values start at 1 and increase in powers of 2, up to 32, so you can have up to 32x32x32 interpolated probes.


Performance Measurements To Consider When Using LPPV:

Keep in mind that the interpolation for every batch of 64 interpolated light probes costs around 0.15 ms on the CPU (i7 at 4 GHz, at the time of profiling). For example, a full 32x32x32 volume is 32,768 probes, or 512 batches, which works out to roughly 77 ms of CPU work at that rate. Light probe interpolation is multi-threaded; anything less than or equal to 64 interpolated light probes will not be multi-threaded and will run on the main thread.

Using Unity’s built-in Profiler, you can see BlendLightProbesJob on the main thread in the Timeline viewer; if you increase the number of interpolated light probes beyond 64, you will see BlendLightProbesJob on a worker thread as well:

[Screenshot: BlendLightProbesJob in the Profiler’s Timeline viewer]

The behaviour for a single batch of 64 interpolated light probes is that it runs only on the main thread; with more batches (more than 64 probes), one is scheduled on the main thread and the others on worker threads. This behaviour applies per LPPV, though: if you have a lot of LPPVs with fewer than 64 interpolated light probes each, they will all run on the main thread.

Hardware Requirements:

The component will require at least Shader Model 4 graphics hardware and API support, including support for 3D textures with 32-bit floating-point format and linear filtering.

Sample shader for particle systems that uses the ShadeSHPerPixel function:

The Standard shaders support this feature. If you want to add it to a custom shader, use the ShadeSHPerPixel function. Check out this sample to see how it’s used:

Shader "Particles/AdditiveLPPV" {

Properties 
{
    _MainTex ("Particle Texture", 2D) = "white" {}
    _TintColor ("Tint Color", Color) = (0.5,0.5,0.5,0.5)
}

Category 
    {
    Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"}
    Blend SrcAlpha One
    ColorMask RGB
    Cull Off Lighting Off ZWrite Off

    SubShader 
    {
        Pass 
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma multi_compile_particles
            #pragma multi_compile_fog
            // Don’t forget to specify the target
            #pragma target 3.0

            #include "UnityCG.cginc"
            // You have to include this header to have access to ShadeSHPerPixel
            #include "UnityStandardUtils.cginc"

            fixed4 _TintColor;
            sampler2D _MainTex;

            struct appdata_t 
            {
                   float4 vertex : POSITION;
                   float3 normal : NORMAL;
                   fixed4 color : COLOR;
                   float2 texcoord : TEXCOORD0;
            };

            struct v2f 
            {
                   float4 vertex : SV_POSITION;
                   fixed4 color : COLOR;
                   float2 texcoord : TEXCOORD0;
                   UNITY_FOG_COORDS(1)
                   float3 worldPos : TEXCOORD2;
                   float3 worldNormal : TEXCOORD3;
            };

            float4 _MainTex_ST;
            v2f vert (appdata_t v)
            {
                  v2f o;
                  o.vertex = UnityObjectToClipPos(v.vertex);
                  o.worldNormal = UnityObjectToWorldNormal(v.normal);
                  o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
                  o.color = v.color;
                  o.texcoord = TRANSFORM_TEX(v.texcoord,_MainTex);
                  UNITY_TRANSFER_FOG(o,o.vertex);
                  return o;
             }
            
             fixed4 frag (v2f i) : SV_Target
             {
                    half3 currentAmbient = half3(0, 0, 0);
                    half3 ambient = ShadeSHPerPixel(i.worldNormal, currentAmbient, i.worldPos);
                    fixed4 col = _TintColor * i.color * tex2D(_MainTex, i.texcoord);
                    col.xyz += ambient;
                    UNITY_APPLY_FOG_COLOR(i.fogCoord, col, fixed4(0,0,0,0)); // fog towards black due to our blend mode
                    return col;
             }
             ENDCG
         }
      }
   }
}

February 2

Turn Events into Commands with Behaviors

Utilizing data binding in mobile apps can greatly simplify development by automatically synchronizing an app’s data to its user interface with minimal set up. Previously, we looked at the basics of data binding, and then explored some more advanced data binding scenarios where values are formatted and converted as they are passed between source and target by the binding engine. We then examined a Xamarin.Forms feature called commanding, which allows data bindings to make method calls directly to a ViewModel, such as when a button is clicked.

In this blog post, I’m going to explore a Xamarin.Forms feature called behaviors, which in the context of commanding, enables any Xamarin.Forms control to use data bindings to make method calls to a ViewModel.

Introduction to Behaviors

Behaviors let you add functionality to UI controls without having to subclass them. Instead, the functionality is implemented in a behavior class and attached to the control as if it were part of the control itself. Behaviors enable you to package code that you would normally have to write as code-behind, because it directly interacts with the API of the control, in a way that can be concisely attached to the control and reused across more than one app. They can be used to provide a full range of functionality to controls, from adding an email validator to an Entry to creating a rating control using a tap gesture recognizer.

Implementing a Behavior

The procedure for implementing a behavior is as follows:

  1. Inherit from the Behavior<T> class, where T is the type of control that the behavior should apply to.
  2. Override the OnAttachedTo method and use it to perform any set up.
  3. Override the OnDetachingFrom method to perform any clean up.
  4. Implement the core functionality of the behavior.

This results in the structure shown in the following code example:

public class CustomBehavior : Behavior<View>
{
	protected override void OnAttachedTo (View bindable)
	{
		base.OnAttachedTo (bindable);
		// Perform setup
	}
	protected override void OnDetachingFrom (View bindable)
	{
		base.OnDetachingFrom (bindable);
		// Perform clean up
	}
	// Behavior implementation
}

The OnAttachedTo method is fired immediately after the behavior is attached to the UI control. This method is used to wire up event handlers or perform other setup that’s required to support the behavior functionality. For example, you could subscribe to the ListView.ItemSelected event and execute a command when the event fires. The behavior functionality would then be implemented in the event handler for the ItemSelected event.

The OnDetachingFrom method is fired when the behavior is removed from the UI control and is used to perform any required clean up. For example, you could unsubscribe from the ListView.ItemSelected event in order to prevent memory leaks.

Consuming a Behavior

Every Xamarin.Forms control has a behavior collection to which behaviors can be added, as shown in the following code example:

<Editor>
	<Editor.Behaviors>
		<local:CustomBehavior />
	</Editor.Behaviors>
</Editor>

At runtime the behavior will respond to interaction with the control, as per the behavior implementation.
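Behaviors can also be attached in code rather than XAML; a minimal sketch, assuming the same hypothetical CustomBehavior class shown earlier:

```csharp
// Attaching a behavior from code instead of XAML.
// CustomBehavior is the hypothetical behavior class shown above.
var editor = new Editor ();
editor.Behaviors.Add (new CustomBehavior ());
```

Because the Behaviors property is a collection, behaviors can be added and removed at runtime as needed.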

Invoking a Command in Response to an Event

In the context of commanding, behaviors are a useful approach for connecting a control to a command. In addition, they can also be used to associate commands with controls that were not designed to interact with commands. For example, they can be used to invoke a command in response to an event firing. Therefore, behaviors address many of the same scenarios as command-enabled controls, while providing a greater degree of flexibility.

The sample application contains the ListViewSelectedItemBehavior class, which executes a command in response to the ListView.ItemSelected event firing.

Implementing Bindable Properties

In order to execute a user specified command, the ListViewSelectedItemBehavior defines two BindableProperty instances, as shown in the following code example:

public class ListViewSelectedItemBehavior : Behavior<ListView>
{
	public static readonly BindableProperty CommandProperty =
            BindableProperty.Create ("Command", typeof(ICommand), typeof(ListViewSelectedItemBehavior), null);
	public static readonly BindableProperty InputConverterProperty =
            BindableProperty.Create ("Converter", typeof(IValueConverter), typeof(ListViewSelectedItemBehavior), null);
	public ICommand Command {
		get { return (ICommand)GetValue (CommandProperty); }
		set { SetValue (CommandProperty, value); }
	}
	public IValueConverter Converter {
		get { return (IValueConverter)GetValue (InputConverterProperty); }
		set { SetValue (InputConverterProperty, value); }
	}
    ...
}

When this behavior is consumed by a ListView, the Command property should be data bound to an ICommand to be executed in response to the ListView.ItemSelected event firing, and the Converter property should be set to a converter that returns the SelectedItem from the ListView.

Implementing the Overrides

The ListViewSelectedItemBehavior overrides the OnAttachedTo and OnDetachingFrom methods of the Behavior<T> class, as shown in the following code example:

public class ListViewSelectedItemBehavior : Behavior<ListView>
{
    ...
	public ListView AssociatedObject { get; private set; }
	protected override void OnAttachedTo (ListView bindable)
	{
		base.OnAttachedTo (bindable);
		AssociatedObject = bindable;
		bindable.BindingContextChanged += OnBindingContextChanged;
		bindable.ItemSelected += OnListViewItemSelected;
	}
	protected override void OnDetachingFrom (ListView bindable)
	{
		base.OnDetachingFrom (bindable);
		bindable.BindingContextChanged -= OnBindingContextChanged;
		bindable.ItemSelected -= OnListViewItemSelected;
		AssociatedObject = null;
	}
    ...
}

The OnAttachedTo method subscribes to the BindingContextChanged and ItemSelected events of the attached ListView. The reasons for the subscriptions are explained in the next section. In addition, a reference to the ListView the behavior is attached to is stored in the AssociatedObject property.

The OnDetachingFrom method cleans up by unsubscribing from the BindingContextChanged and ItemSelected events.

Implementing the Behavior Functionality

The purpose of the behavior is to execute a command when the ListView.ItemSelected event fires. This is achieved in the OnListViewItemSelected method, as shown in the following code example:

public class ListViewSelectedItemBehavior : Behavior<ListView>
{
    ...
	void OnBindingContextChanged (object sender, EventArgs e)
	{
		OnBindingContextChanged ();
	}
	void OnListViewItemSelected (object sender, SelectedItemChangedEventArgs e)
	{
		if (Command == null) {
			return;
		}
		object parameter = Converter.Convert (e, typeof(object), null, null);
		if (Command.CanExecute (parameter)) {
			Command.Execute (parameter);
		}
	}
	protected override void OnBindingContextChanged ()
	{
		base.OnBindingContextChanged ();
		BindingContext = AssociatedObject.BindingContext;
	}
}

The OnListViewItemSelected method, which is executed in response to the ListView.ItemSelected event firing, first executes the converter referenced through the Converter property, which returns the SelectedItem from the ListView. The method then executes the data bound command, referenced through the Command property, passing in the SelectedItem as a parameter to the command.

The OnBindingContextChanged override, which is executed in response to the ListView.BindingContextChanged event firing, sets the BindingContext of the behavior to the BindingContext of the control the behavior is attached to. This ensures that the behavior can bind to and execute the command that’s specified when the behavior is consumed.

Consuming the Behavior

The ListViewSelectedItemBehavior is attached to the ListView.Behaviors collection, as shown in the following code example:

<ListView ItemsSource="{Binding People}">
	<ListView.Behaviors>
		<local:ListViewSelectedItemBehavior Command="{Binding OutputAgeCommand}"
            Converter="{StaticResource SelectedItemConverter}" />
	</ListView.Behaviors>
</ListView>
<Label Text="{Binding SelectedItemText}" />

The Command property of the behavior is data bound to the OutputAgeCommand property of the associated ViewModel, while the Converter property is set to the SelectedItemConverter instance, which returns the SelectedItem of the ListView from the SelectedItemChangedEventArgs.
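As an illustration, a converter like the sample’s SelectedItemConverter could be implemented along these lines (the exact class in the sample may differ):

```csharp
using System;
using System.Globalization;
using Xamarin.Forms;

// Sketch of a converter that unwraps the SelectedItem from the
// SelectedItemChangedEventArgs passed to the behavior's Convert call.
public class SelectedItemConverter : IValueConverter
{
	public object Convert (object value, Type targetType, object parameter, CultureInfo culture)
	{
		var eventArgs = value as SelectedItemChangedEventArgs;
		return eventArgs != null ? eventArgs.SelectedItem : null;
	}

	public object ConvertBack (object value, Type targetType, object parameter, CultureInfo culture)
	{
		// One-way usage only; the behavior never converts back.
		throw new NotImplementedException ();
	}
}
```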

The result of the behavior being consumed is that when the ListView.ItemSelected event fires due to an item being selected in the ListView, the OutputAgeCommand is executed, which updates the SelectedItemText property that the Label binds to. The following screenshots show this:


Generalizing the Behavior

It’s possible to generalize the ListViewSelectedItemBehavior so that it can be used by any Xamarin.Forms control, and so that it can execute a command in response to any event firing, as shown in the following code example:

<ListView ItemsSource="{Binding People}">
	<ListView.Behaviors>
		<local:EventToCommandBehavior EventName="ItemSelected" Command="{Binding OutputAgeCommand}"
            Converter="{StaticResource SelectedItemConverter}" />
	</ListView.Behaviors>
</ListView>
<Label Text="{Binding SelectedItemText}" />

For more information, see the EventToCommandBehavior class in the sample application.
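The heart of such a generalized behavior is subscribing to the named event via reflection; a condensed sketch follows (the sample’s actual class also exposes Command, Converter, and CommandParameter as bindable properties and handles null checks more thoroughly):

```csharp
using System;
using System.Reflection;
using System.Windows.Input;
using Xamarin.Forms;

// Sketch: subscribes to the event named by EventName via reflection
// and forwards it to OnEvent, which executes the bound command.
public class EventToCommandBehavior : Behavior<View>
{
	Delegate eventHandler;

	public string EventName { get; set; }
	public ICommand Command { get; set; } // bindable properties in the real sample

	protected override void OnAttachedTo (View bindable)
	{
		base.OnAttachedTo (bindable);
		EventInfo eventInfo = bindable.GetType ().GetRuntimeEvent (EventName);
		MethodInfo methodInfo = typeof(EventToCommandBehavior).GetTypeInfo ().GetDeclaredMethod ("OnEvent");
		eventHandler = methodInfo.CreateDelegate (eventInfo.EventHandlerType, this);
		eventInfo.AddEventHandler (bindable, eventHandler);
	}

	protected override void OnDetachingFrom (View bindable)
	{
		base.OnDetachingFrom (bindable);
		bindable.GetType ().GetRuntimeEvent (EventName).RemoveEventHandler (bindable, eventHandler);
	}

	void OnEvent (object sender, object eventArgs)
	{
		if (Command != null && Command.CanExecute (eventArgs))
			Command.Execute (eventArgs);
	}
}
```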

Wrapping Up Behaviors

In the context of commanding, behaviors are a useful approach for connecting a control to a command. In addition, they can also be used to associate commands with controls that were not designed to interact with commands. For example, they can be used to invoke a command in response to an event firing. Therefore, behaviors address many of the same scenarios as command-enabled controls, while providing a greater degree of flexibility.

For more information about behaviors, see our Working with Behaviors document.

The post Turn Events into Commands with Behaviors appeared first on Xamarin Blog.

Texturing high-fidelity characters: Working with Quixel Suite for Unity 5

Following the release of the assets from our demo “The Blacksmith”, we got many requests for more information on the artistic side of creating high-end characters.

Check out this tutorial, recently published by the team at Quixel, which shows how to texture your character with Quixel Suite and set it up in Unity 5.

The video uses the main character from “The Blacksmith” demo as a sample, and covers the process step-by-step, with useful tips along the way.

If you want to try it yourself, remember you can download the characters and the environments from “The Blacksmith” from the Unity Asset Store, free of charge, and you are welcome to use them for any purpose, including commercial.

Happy texturing!

February 1

Xamarin Events in February

Join one of these many user groups, conferences, webinars, and other events to help celebrate something we all love this February — native mobile development in C#!

February 2016 Banner
Here are just a handful of the many developer events happening around the world this month:

SWETUGG se

  • Stockholm, Sweden: February 2
  • Get Started Building Cross-Platform Apps with Xamarin (Xamarin MVP Johan Karlsson speaking)

XLSOFT Japan Japan

  • Tokyo, Japan: February 5
  • Xamarin with NuGet and CI

Xamarin Costa Rica Mobile .NET Developer Group cr

  • San Jose, Costa Rica: February 8
  • Introduction to Mobile Development with C#/.NET and Xamarin

Gauteng Xamarin User Group za

  • Johannesburg­, South Africa: February 9
  • Xamarin 4: Everything You Need to Build Great Apps

South Florida Xamarin User Group us

  • Fort Lauderdale, FL: February 9
  • Demo Day: Share Your Xamarin Apps

Mobile-Do Developers Group do

  • Santo Domingo, Dominican Republic: February 12
  • Data Persistence with SQLite and SQLite.Net

Concord .Net User Group us

  • Concord, NH: February 16
  • Cross-Platform .NET Development with Carl Barton – Xamarin MVP

Orlando Windows Phone User Group us

  • Orlando, FL: February 17
  • Xamarin.Forms for .NET Developers

.NET Coders Brazil

  • São Paulo, Brazil: February 18
  • Deliver Top Quality Apps Using Xamarin Test Cloud and Xamarin Test Recorder

Boston Mobile C# Developers’ Group us

  • Boston, MA: February 18
  • Powerful Backends with Azure Mobile Services

Mobilize Enterprise Applications with Oracle and Xamarin au

  • Melbourne, Australia: February 23
  • Build Better, More Engaging Mobile Apps with Oracle and Xamarin in Melbourne

Chicago .NET Mobile Developers us

  • Chicago, IL: February 24
  • FreshMvvm : A Lightweight MVVM Framework for Xamarin.Forms

Mobilize Enterprise Applications with Oracle and Xamarin au

  • Sydney, Australia: February 26
  • Build Better, More Engaging Mobile Apps with Oracle and Xamarin in Sydney

XHackers in

  • Bangalore, India: February 27
  • MVVM & DataBinding + Intro to Game Dev with Xamarin

Didn’t see an event in your area?

Not to worry! Check out the Xamarin Events Forum for even more Xamarin meetups, hackathons, and other events happening near you.

Interested in getting a developer group started?

We’re here to help! Here are a few tools to help you out:

Also, we love to hear from you, so feel free to send us an email or tweet @XamarinEvents to let us know about events in your neck of the woods!

The post Xamarin Events in February appeared first on Xamarin Blog.

Profiling with Instruments

In the Enterprise Support team, we see a lot of iOS projects. At some point in any iOS development, developers end up running their game and sitting there thinking “Why the hell is this running so slowly?”. There are some great tools for analysing performance out there, and one of the best is Instruments. Read on to find out how to use it to find your issues!

To use Instruments, or any of Xcode’s debugging tools, you will need to build a Unity project for the iOS Build Target (with the Development Build and Script Debugging options unchecked). Then you will need to compile the resultant Xcode project with Xcode in Release mode and deploy it to an attached iOS device.

After starting Instruments (by either a long press on the play button, or selecting Product > Profile), select the Time Profiler. To begin a profiling run, select the built application from the application selector, then press the red Record button. The application will launch on the iOS device with Instruments connected, and the Time Profiler will begin recording telemetry. The telemetry will appear as a blue graph on the Instruments timeline.


P.S. To clean up the call hierarchy, the Details pane on the right-hand side of the Call Tree has two options, located in the “Settings” submenu (click on the gear icon in the Details pane). Select Flatten Recursion and Hide System Libraries.

A list of method calls will appear in the detail section of the Instruments window. Each top-level method call represents a thread within the application.

In general, the main method is the location of all hotspots of interest, as it contains all managed code.

Expanding the main method will yield a deep tree of method calls. The major branch is between two methods:

  • [startUnity] and UnityLoadApplication (These method names sometimes appear in ALL CAPS).
  • PlayerLoop

[startUnity] is of interest as it contains all time spent initializing the Unity engine. A method named UnityLoadApplication will be found beneath it. It is beneath UnityLoadApplication that startup time can be profiled.


Once you have a nice time-slice of your application profiled, pause the Profiler and start expanding the tree. As you work down the tree, you will notice the time in ms reduces in the left-hand column. What you are looking for are items that cause a significant reduction in the time. This will be a performance hotspot. Once you have found one, you can go back to your code-base and find out WTF is going on that is taking so much time. It could be that it is a totally necessary operation, or it could be some time in the distant past you hacked some pre-production code in that has made it over to your production project, or… well… it could be for a million reasons really. How/if you decide to fix this hotspot would be largely up to you, as you know your codebase better than anyone :D.

Instruments can also be used to look for performance sinks that are distributed broadly — ones that lack a single large hotspot, but instead show up as a few milliseconds of lost time in many different places in a codebase.  To do this, type either a partial or full function name into Instruments’ symbol search box, located above and to the right of the call tree. If profiling a slice of gameplay, expand PlayerLoop and collapse all the methods beneath it. If profiling startup time, expand UnityLoadApplication and collapse the methods beneath it.  The total number of milliseconds wasted on a specific operation can be roughly estimated by looking at the total time spent in PlayerLoop or UnityLoadApplication and subtracting the number of milliseconds located in the self column.

Common methods to look for:
– “Box(” and “box(” — these indicate that C# value boxing is occurring; most instances of boxing are trivially fixed
– “Concat” — string concatenation is often easily optimized away
– “CreateScriptingArray” — All Unity APIs that return arrays will allocate new copies of arrays. Minimize calls to these methods.
– “Reflection” — reflection is slow. Use this to estimate the time lost to reflection and eliminate it where possible.
– “FindObjectOfType” — Use this to locate repeated or unnecessary calls to FindObjectOfType, or other known-slow Unity APIs.
– “Linq” — Examine the time lost to creating and discarding Linq queries; consider replacing hotspots with manually-optimized methods.
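To illustrate the first two entries in the list above, here is a hedged example of the kind of C# that produces “Box” and “Concat” entries in the call tree, alongside the straightforward fixes:

```csharp
using System.Text;

public static class HotspotExamples
{
	// Boxing: assigning a value type (int) to an object allocates
	// a boxed copy on the heap every call.
	static object boxed;
	public static void Boxing (int frameCount)
	{
		boxed = frameCount; // shows up as a Box( call in Instruments
	}

	// Fix: keep the value strongly typed; no allocation occurs.
	static int unboxed;
	public static void NoBoxing (int frameCount)
	{
		unboxed = frameCount;
	}

	// String concatenation in a loop allocates a new string per iteration...
	public static string Slow (string[] parts)
	{
		string result = "";
		foreach (var p in parts)
			result += p; // shows up as String.Concat
		return result;
	}

	// ...while StringBuilder reuses one internal buffer.
	public static string Fast (string[] parts)
	{
		var sb = new StringBuilder ();
		foreach (var p in parts)
			sb.Append (p);
		return sb.ToString ();
	}
}
```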

As well as profiling CPU time, Instruments also allows you to profile memory usage.  Instruments’ Allocations profiler provides two probes that offer detailed views into the memory usage of an application. The Allocations probe permits inspection of the objects resident within memory during a specific time-span. The VM Tracker probe permits monitoring of the dirty memory heap size, which is the primary metric used by iOS to determine when an application must be forcibly closed.

Both probes will run simultaneously when selecting the Allocations profiler in Instruments. As usual, begin a profiling run by pressing the red Record button.

To set up the Allocations probe correctly, ensure the following settings are correct in the Detail tab on the right-hand side of Instruments.   Under Display Settings (middle option), ensure Allocation Lifespan is set to Created & Persistent.  Under Record Settings (left option), ensure Discard events for freed memory is checked.

The most useful display for examining memory behavior is the Statistics display, which is the default display when using the Allocations probe. This display shows a timeline. When used with the recommended settings, the graph displays blue lines indicating the time and magnitude of memory allocations that are still live. By watching this graph, you can check for long-lived or leaked memory by repeating the scenario under test and ensuring that no blue lines remain alive between runs.

Another useful display is the Call Trees display. It displays the line of code at which allocations are performed, along with the amount of memory consumption the line of code is responsible for.

Below you can see that about 25% of the total memory usage of the application under test is solely due to shaders. Given the shaders’ location in the loading thread, these must be the standard shaders bundled with default Unity projects, which are then loaded at application startup time.


As before, once you have identified a hotspot, what you do with it is totally dependent on your project.

So there you go.  A brief guide to Instruments. 1000(ish) words and no A-Team references. We don’t want to get into trouble like last time. Copyright violations are officially Not Funny™.

The Enterprise Support team is creating more of these guides, and we will be posting the full versions of our Best Practice guides in the coming months!

We love it when a plan comes together.

January 31

Show me the way

If you need further proof that OpenStreetMap is a great project, here’s a very nice near real-time animation of the most recent edits: https://osmlab.github.io/show-me-the-way/

Show me the way

Seen today at FOSDEM, at the stand of the Humanitarian OpenStreetMap team which also deserves attention: https://hotosm.org


Comments | More on rocketeer.be | @rubenv on Twitter

January 29

Don’t Miss Big Medium’s Josh Clark at Xamarin Evolve 2016

If anyone has secrets to spill about how to make your next mobile app a hit, Josh Clark is your man, and he’ll be sharing some of them with you at Xamarin Evolve 2016, the world’s largest cross-platform mobile development conference!

Josh has written several books on mobile app design, including Designing for Touch and Tapworthy: Designing Great iPhone Apps, and he founded his agency Big Medium to help brands such as Samsung, Time Inc, eBay, and Entertainment Weekly get the most out of their mobile strategies.

Josh joins a great and growing lineup of industry-leading speakers, including MythBusters’ Grant Imahara, and legendary technical author Charles Petzold, as well as several other Xamarin gurus. Tickets are going fast, so be sure to register today!
 

Register Now

The post Don’t Miss Big Medium’s Josh Clark at Xamarin Evolve 2016 appeared first on Xamarin Blog.

Microsoft VS Dev Essentials + Xamarin University

Microsoft launched its Visual Studio Dev Essentials program at Microsoft Connect(); 2015, and today we’re excited to announce that we’re giving Dev Essentials members free access to select Xamarin University content!

Visual Studio Dev Essentials is available to all developers free of charge. Through the program, developers receive various benefits, including developer tools, software, cloud service credits, as well as education and training. Now, it’s even better because it includes Xamarin University class recordings and materials from our Xamarin Mobile Fundamentals course.

Xamarin University includes 60+ courses taught live by mobile experts on a wide range of topics. Out of those, we’ve carefully selected a subset of the curriculum for the Dev Essentials program, making the 60-75 minute lecture recordings available for on-demand viewing at any time.

Our initial course line-up* includes the following lectures and all of the associated materials, including interactive lab projects:

  • Intro to iOS 101 and 102
  • Intro to Android 101 and 102
  • Intro to Xamarin.Forms
  • 2 guest lectures from industry luminaries (Azure Mobile Services and Intro to Prism)

*We may alter the content from time to time

It’s simpler than ever for developers to get started with Visual Studio and Xamarin! Xamarin is bundled with VS 2015 to give teams mobile templates from day one, easily connects to Azure for critical mobile functionality, and now developers can access industry-leading training for rapid onboarding and skill development.

Visual Studio Dev Essentials activation page

We look forward to powering even more successful mobile apps for the .NET community!

Get Started

Visit Visual Studio Dev Essentials to sign up and activate your Xamarin University Mobile Training benefits.

Learn more about Xamarin University at xamarin.com/university.

The post Microsoft VS Dev Essentials + Xamarin University appeared first on Xamarin Blog.

Unity Comes to New Nintendo 3DS

We announced our intention to support Nintendo’s recently released New Nintendo 3DS platform at Unite Tokyo and we’ve been very busy in the meantime getting it ready.  Now we’re pleased to announce it’s available for use today!

The first question people usually ask is “do you support the original Nintendo 3DS too?”  To which the answer is a qualified “yes”. We can generate ROM images which are compatible with the original Nintendo 3DS, and there are certainly some types of game which will run perfectly well on it, but for the majority of games we strongly recommend targeting the New Nintendo 3DS for maximum gorgeousness.

We’ve been working very closely with select developers to port a few of their existing games to New Nintendo 3DS. We’ve been busy profiling, optimizing, and ironing out the niggles using real-world projects, so you can be confident your games will run as smoothly as possible. In fact, one game has already successfully passed through Nintendo’s exacting mastering system: Wind Up Knight 2 went on sale at the end of last year!

Wind Up Knight 2

Wind Up Knight 2 – Japanese Version. (c) 2016 Robot Invader

Unity’s internal shader code underwent a number of significant changes in the transition from version 5.1 to 5.2.  This brought many benefits, including cleaner and more performant code, and also fixed a number of issues we had on console platforms.  We’re not able to retrofit those fixes to the 5.1-based version, so we shall only be actively developing our shader support from version 5.2 onwards.

We’ve been putting Unity for New Nintendo 3DS version 5.2 through its paces for a few months, and it’ll be made available once it’s proved itself by getting a game through Nintendo’s mastering system too.  That should be in the near future, but it’s not something that’s easy to put a date on.

So far, we’ve been in development with a Nintendo 3DS-specific version of the Unity editor, but now we’ve switched our focus towards upgrading to the latest version, with a view to shipping as a plug-in extension to the regular editor.  We have a 5.3 based version running internally, and we’re working hard to get it merged into our mainline code-base.

It should be mentioned that some features are not yet implemented in this first public release, notably UNet and shadow maps (although lightmaps are supported). We’re prioritising new features according to customer demand, but right now our main goal is to get into the regular editor.

In common with other mobile platforms, there are some limitations as to what can be achieved with the hardware. For instance, Unity’s Standard Shader requires desktop-class graphics hardware so it’s not something we can support on Nintendo 3DS. However, as with other platforms, if you try to use a shader which is unsupported then Unity will fall-back to a less complex shader that gives the best possible results.

Preparing your game for New Nintendo 3DS

This platform is unique in several ways, so games will need some modification to make best use of its features.

  • There are two screens, so you will need to redesign your user interface to accommodate the additional display.  The lower screen is touch sensitive, so it makes sense to put menus and other interactive UI items there.

  • The device’s coolest feature is that the picture is 3D, without needing glasses!  However, this does mean that the distance of objects is visible to the player in a way that it isn’t on other platforms.  So graphical effects which “cheat” to simulate distance won’t work.  For example, 2½-D games which use an orthographic projection and parallax layers will show up as completely flat.

  • There is less memory available than on other platforms, but that’s not as big an issue as it might seem at first. Textures can be down-sized drastically since the screen resolution is much lower than typically found on smartphones and tablets.
  • Unity for New Nintendo 3DS was one of the first platforms to use our in-house IL2CPP technology exclusively; we don’t use Mono at all. This brings substantial performance benefits, but there are a couple of downsides:

All compilation is done AOT (when the project is built). We don’t support JIT compilation (at runtime).

Various other platforms are also AOT-only, so if you’re porting a game from one of those platforms then you won’t have any problems. However, if you’re porting from a platform which does allow JIT compilation, then you might run into issues. In particular, some middleware JSON parsers which use introspection can be problematic. The good news is that Unity now comes with its own high-performance JSON parser, which doesn’t suffer from such issues.
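The parser referred to above is Unity’s JsonUtility, introduced alongside Unity 5.3. As a sketch of typical usage (the PlayerData type here is a made-up example), it works with plain serializable types and avoids the AOT pitfalls of introspection-heavy parsers:

```csharp
using System;
using UnityEngine;

// Hypothetical example type: JsonUtility serializes public fields
// of [Serializable] classes via Unity's serializer, not C# reflection.
[Serializable]
public class PlayerData
{
	public string name;
	public int highScore;
}

public static class SaveLoad
{
	public static string Save (PlayerData data)
	{
		return JsonUtility.ToJson (data);
	}

	public static PlayerData Load (string json)
	{
		return JsonUtility.FromJson<PlayerData> (json);
	}
}
```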


Opening a celebratory barrel of sake, at Unite Tokyo.

How to Get Involved

Unity for New Nintendo 3DS is available at no charge. Just like with Nintendo’s Wii U, if you sign up to develop games for the platform, you get to use Unity for free!

Simply visit Nintendo’s Developer Portal and enrol in the Nintendo Developer Program*, then you’ll be able to download Unity for New Nintendo 3DS.

Of course, you will need some development hardware too. Devkits and testing units can also be purchased via Nintendo’s Developer Portal.

* Conditions apply, see site for details.

January 28

Getting Started with Azure Mobile Apps’ Easy Tables

Every front end needs a great backend. This is true now more than ever in today’s connected world, where it’s extremely important to have your data with you at all times, even if you are disconnected from the internet. There are tons of great solutions available for Xamarin developers, including Couchbase, Parse, Amazon, Oracle, and, of course, Microsoft Azure.

In the past, we looked at the brand new Azure Mobile Apps, which gives you a full .NET backend providing complete control over how data is stored and retrieved for your mobile apps. What if you need a backend right now with minimal setup? That is where Azure Mobile Apps’ brand new Easy Tables come in. How easy are Easy Tables? So easy that you can add backend data storage to your app in as little as 15 minutes! The best part is that if you’ve used any Azure Mobile Services or Azure Mobile Apps, you’ll feel right at home.

Keeping Track of Coffee Consumption

If you follow me on social media or have seen me present, then you know I love coffee; I’m drinking one as I type this! That’s why I thought that for this post I would build a cross-platform app to help me keep track of just how many coffees I’m consuming each day. I call it “Coffee Cups”, and we’ll be building it throughout this post. Here’s the final version in action:

Creating a New Azure Mobile App

Inside of the Azure portal, simply select New -> Web + Mobile -> Mobile App, which is the starting point to configure your Azure Mobile App backend.

Creating a new Mobile app in the Microsoft Azure portal.

When selecting Mobile App in Azure, you will need to configure the service name (this is the URL where your backend web app/ASP.NET website will live), configure your subscription, and set your resource group and plan. I call Seattle home, so I’ve selected the default West US locations:

Blade configuration for a new mobile app.

Give Azure a few minutes to deploy your mobile app, and once deployed, the Azure portal will bring you directly to the configuration screen with the settings tab open. All of the settings we’ll adjust can be found under the Mobile section:

Easy tables section within Settings blade.

Add Data Connection

We can’t have a backend for our mobile apps without a database. Under the Data Connections section, select Add, and then configure a new SQL database as I’ve done below:

Adding a new database to our mobile app with Easy Tables.

Make sure that you keep Allow Azure services to access server checked so that your mobile app’s services can properly connect to the database server. Also, be sure to keep your password in a secure place, as you may need it in the future.

Click OK on all open blades and Azure will begin to create the database for us on the fly. To see the progress of database creation live, click on the notifications button in the upper-right corner:

Creating data connection.

When the data connection is created, it will appear in the Mobile Apps data connections blade, which means it’s time to set up the data that will go in the new database.

Available data connections for our mobile app

Adding a New Table

Under Mobile settings is a new section called Easy Tables, which enables us to easily set up and control the data flowing to and from the iOS and Android apps. Select the Easy Tables section, and we’ll be prompted with a big blue warning asking us to configure Easy Tables/Easy APIs:

Add a new table.

Since we already set up the database, the only thing left to do is initialize the app.

Instructions for adding a new table by connecting database and configuring App Service to use Easy Tables.

After a few moments, the app will be fully initialized, so we can add the first table of the database, named CupOfCoffee. If you are adding Azure Mobile Apps to an existing application, this table name should match the class name of the data you wish to store. The beautiful part of Easy Tables is that it will automatically update and add the columns in the table dynamically, based on the data we pass in. For this example, I’ll simply allow full anonymous access to the table; however, it is possible to add authentication with Facebook, Twitter, Google, Microsoft, and other OAuth login providers.

Add a table dialog.

Adding Sync to Mobile Apps

With our backend fully set up on Azure, it’s now time to integrate the Azure Mobile SDK into our mobile apps. The first step is to add the Azure Mobile Apps NuGet package named Azure Mobile SQLiteStore. This package sits on top of, and includes, the Mobile Services Client SDK to connect to our backend, adds full online/offline synchronization, and should be added to all application projects. For example, CoffeeCups is written with Xamarin.Forms, so I added the NuGet to my PCL, iOS, Android, and Windows projects:

Azure Mobile SQLiteStore in NuGet.

Initialize the Azure Mobile Client

Add the Azure Mobile Client SDK initialization code in the platform projects. For iOS, the following code must be added to the FinishedLaunching method of the AppDelegate class:

Microsoft.WindowsAzure.MobileServices.CurrentPlatform.Init();
SQLitePCL.CurrentPlatform.Init();

For Android, add the following to the OnCreate method of your MainActivity:

Microsoft.WindowsAzure.MobileServices.CurrentPlatform.Init();

The Data Model

It’s now time to create the data model that we’ll use locally to display information, but also save in our Azure Mobile App backend. It should have the same name as the table that we created earlier. There are two string properties required to ensure the Mobile Apps SDK can assign a unique identifier and version in case any modifications are made to the data. Here is what the CupOfCoffee model looks like:

public class CupOfCoffee
{
    [Newtonsoft.Json.JsonProperty("Id")]
    public string Id { get; set; }
    [Microsoft.WindowsAzure.MobileServices.Version]
    public string AzureVersion { get; set; }
    public DateTime DateUtc { get; set; }
    public bool MadeAtHome { get; set; }
    [Newtonsoft.Json.JsonIgnore]
    public string DateDisplay { get { return DateUtc.ToLocalTime().ToString("d"); } }
    [Newtonsoft.Json.JsonIgnore]
    public string TimeDisplay { get { return DateUtc.ToLocalTime().ToString("t"); } }
}

Notice the model has a UTC DateTime that will be stored in the backend, but also two helper properties for a display date and time. They are marked with the JsonIgnore attribute and will not be persisted in the database.

Accessing Mobile App Data

It’s now time to use the Mobile App SDK to create a MobileServiceClient that will enable us to perform create, read, update, and delete (CRUD) operations that will be stored locally and synchronized with our backend. All of this logic will be housed in a single class, which for this app is called “AzureDataService” and has a MobileServiceClient and a single IMobileServiceSyncTable. Additionally, it has four methods to Initialize the service, get all coffees, add a coffee, and synchronize the data with the backend:

public class AzureDataService
{
    public MobileServiceClient MobileService { get; set; }
    IMobileServiceSyncTable<CupOfCoffee> coffeeTable;
    public async Task Initialize()
    {
    }
    public async Task<IEnumerable<CupOfCoffee>> GetCoffees()
    {
    }
    public async Task AddCoffee(bool madeAtHome)
    {
    }
    public async Task SyncCoffee()
    {
    }
}

Creating the Service and Table

Before we can get or add any of the data we must create the MobileServiceClient and the SyncTable. This is done by passing in the URL of the Azure Mobile App and specifying the file in which to store the local database:

public async Task Initialize()
{
    //Create our client
    MobileService = new MobileServiceClient("https://coffeecups.azurewebsites.net");
    const string path = "syncstore.db";
    //setup our local sqlite store and initialize our table
    var store = new MobileServiceSQLiteStore(path);
    store.DefineTable<CupOfCoffee>();
    await MobileService.SyncContext.InitializeAsync(store, new MobileServiceSyncHandler());
    //Get our sync table that will call out to Azure
    coffeeTable = MobileService.GetSyncTable<CupOfCoffee>();
}

Synchronize Data

Mobile devices often lose connectivity, so it’s vital that our app continues to function properly even in low or no connectivity environments. Azure makes this extremely easy: with just a few lines of code, the local database and the backend are automatically synchronized when connectivity is reestablished:

public async Task SyncCoffee()
{
    //pull down all latest changes and then push current coffees up
    await coffeeTable.PullAsync("allCoffees", coffeeTable.CreateQuery());
    await MobileService.SyncContext.PushAsync();
}

Retrieving Data

The IMobileServiceSyncTable offers a very nice asynchronous and LINQ queryable API to get data from our backend. This can be done to grab all of the data or filter it down by a property, such as an Id. In this instance, we’ll simply synchronize our local data with the backend before fetching all cups of coffee, sorted by the date:

public async Task<IEnumerable<CupOfCoffee>> GetCoffees()
{
    await SyncCoffee();
    return await coffeeTable.OrderBy(c => c.DateUtc).ToEnumerableAsync();
}

Inserting Data

In addition to getting all of the latest data, we can insert, update, and even delete data that will be kept in sync between devices.

public async Task AddCoffee(bool madeAtHome)
{
    //create and insert coffee
    var coffee = new CupOfCoffee
    {
      DateUtc = DateTime.UtcNow,
      MadeAtHome = madeAtHome
    };
    await coffeeTable.InsertAsync(coffee);
    //Synchronize coffee
    await SyncCoffee();
}

Now from my mobile app I can simply create my AzureDataService and start querying and adding coffees throughout the day, all in under 70 lines of code.
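Consuming the service from shared code is then just a matter of initializing it once and awaiting the CRUD helpers. Here is a minimal, illustrative sketch of such a call site (hypothetical; it assumes the AzureDataService class above, so it is not runnable on its own):

```csharp
// Hypothetical call site for the AzureDataService shown above.
var dataService = new AzureDataService();
await dataService.Initialize();

// Record a cup of coffee made at home, then pull the synchronized list back down.
await dataService.AddCoffee(madeAtHome: true);
foreach (var cup in await dataService.GetCoffees())
    Console.WriteLine($"{cup.DateDisplay} {cup.TimeDisplay}");
```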

Completed Coffee Consumption app built with Xamarin and Azure Easy Tables.

Learn More

In this post we’ve covered setting up Azure Mobile Apps’ Easy Tables to perform CRUD and sync operations from our mobile apps. In addition, you have full access to the service code and can customize it right from the portal (using the edit script option in Easy Tables, and even custom scripts with Easy APIs). Be sure to check out all of the additional services provided by Azure Mobile App Services, such as authentication and push notifications, on the Azure portal. You can try out Azure Mobile Apps by downloading the Coffee Cups sample from my GitHub or by testing pre-built sample apps on the Try App Services page.

The post Getting Started with Azure Mobile Apps’ Easy Tables appeared first on Xamarin Blog.

January 26

Rise of the sub-$100 Tablets: Christmas by the numbers from Unity Technologies

The holiday season is a big time not only for Unity developers but also the gaming industry at large. With all of the lounging about digesting holiday meals, there were a lot of folks spending time relaxing in front of their favorite game. That means there’s an enormous amount of data being generated, and subsequently a huge opportunity to examine that data to better understand our customers.

In the spirit of giving back, we decided to take a quick peek into the mobile gaming landscape in the US, and the result is our “Christmas By The Numbers” infographic. The chart, which follows up on our inaugural By The Numbers Report released in September 2015, looks at Christmas Day 2015 in the US. One of the reasons the chart is particularly interesting is the reach that we are able to analyze. As a result of our large install base, the number of mobile devices we looked at on Friday, December 18th was 3.6 million. That number more than doubled the following Friday (Christmas) to 7.8 million, showing a surge in mobile device activity over the holidays.

Retailers and manufacturers are always trying to understand the trends, and a post-mortem is a great tool to see how well you tracked to your strategy. When looking at specific devices used to install Unity games, we noticed some strong trends in both models and types of devices. Chart A shows that on Christmas Day, 3 tablets under $100 – the RCA Voyager Pro 7”, Voyager II, and Amazon Fire 7” – accounted for a whopping 18% of the devices we tracked. It would appear that budget tablets were a popular item under the Christmas tree, especially as they didn’t even show up in our Friday-before-Christmas top 10.

Furthermore, 6 of the top 10 devices used on Christmas day were tablets, which together accounted for 27% of all devices. This contrasts sharply with the previous Friday, where only 3 of the top 10 devices were tablets, indicating that tablets might have been the gift of choice this Christmas.

Our last look at the numbers, Chart C, tracked the total number of game installs powered by Unity. Comparing Christmas to the previous Friday, we saw over double the amount of activity, with ~4.3 million installs on December 18th and ~10.8 million on Christmas Day. And it seemed that people couldn’t get enough of their games on Christmas, as shown in Chart B: peak installs occurred at 11am (likely after gift opening), plateauing for most of the day until a secondary peak at 6pm. Nothing like a shiny new toy to keep people gaming all day long.

We hope our Christmas infographic provided some useful insights, which are just a first step towards providing deeper insights, benchmarking, and metrics for game developers. As the leading game development platform, Unity is uniquely positioned to gather and extrapolate trends using data. By sharing this data, our goal is to help game developers better understand the gaming landscape and players’ behavior. Keep your eyes out for our upcoming Q4 By The Numbers Report, to be released in the coming weeks.

ABOUT DATA COLLECTION AND PRIVACY

Unity Analytics is a new service available to mobile game developers aimed at providing greater insights on player behavior.  In the near future, Unity Analytics will provide Unity developers with access to the most comprehensive real-time market data set available in the game industry. This real-time data will provide far deeper insights into games, devices, and users, helping Unity developers to become more successful and to build games that players love to play. For customers who are interested in a comprehensive customized game intelligence package please contact Unity Analytics (analytics@unity3d.com) for details.

The game developer may allow Unity to collect certain device properties and the player location when a player installs a mobile game built with Unity software. Unity compiles and publishes certain de-identified, aggregated data to help Unity, mobile game developers, and mobile device companies better understand their user base and the devices they use.

Aggregated data was collected from iOS and Android devices across the contiguous US on December 18th and 25th, from 12am to 12pm.

Device counts are calculated using a “unique device identifier (UUID).” Different platforms and versions handle the UUID differently, which may result in the same device being counted multiple times due to multiple UUIDs for that device, or a device not being counted at all due to not having a UUID. As a result, iOS device counts may be overstated.

For more information on Unity’s privacy practices, please review the Unity Privacy Policy: http://unity3d.com/legal/privacy-policy

Come and join us in Amsterdam!

Unite Europe is returning to Amsterdam from May 31st to June 2nd and we need your help once again.  We listened to your feedback from last year and expanded the main conference to run for three days, so that you have extra opportunities to present.

We are opening the call for speakers for Unite Europe as of today. The closing date for submission of talks is March 20th.

Unite is all about the opportunity to share experiences and to learn, so we welcome talks on all aspects of using Unity. Having said that, we are looking to have two underlying themes for the conference this year: Art and Community.  Looking at some of the content being produced using Unity these days, I am just staggered by the quality and variety of art that people are able to produce.

Whether it is with cutting edge graphics technology as shown by the team at Zerolight, the great use of colour and light in Firewatch or the dizzying lines of Manifold Garden, I continue to be amazed by the results achieved.  We would like to encourage developers to submit talks based on the use of art in their projects and help share these experiences with the community.

This can be for example how you went about establishing your art style and were able to achieve the desired look and feel of your environments and characters.  Alternatively, you could talk about how you set up your art production pipeline and the type of issues you addressed, or give a talk about how you developed your own shaders or tools for a project.

The other thing that continues to impress me is the generosity, passion and power of the community. I’ve been working in technology companies for over 25 years and have never before encountered such a remarkable group of people as those on the Unity forums. The founders of Unity had the idea of building and supporting a community of users from day one, and this remains a major priority within the company today.

We would invite you to show us how you were able to interact with members of the community, such as through the development of packages for the Asset Store, projects that grew out of a game jam, or how User Groups were established in your region. One feature of the London Unity User Group that is always interesting is the Open Mic sessions, where people can deliver a short talk or call for help between presentations. We are looking to do something similar this year at Unite Europe with the introduction of “Lightning Talks” (title is copyright Andy Touch :).

The last hour of the first and second day will offer 6 slots for ten-minute talks on the main stage. Topics will be submitted prior to the conference, and we will let you know if your submission is successful before the day. These talks can be about anything Unity related, but must be a maximum of 10 minutes long and not involve lengthy set up of equipment. Josh Naylor and Andy Touch will be provided with cattle prods and air horns to make sure that people don’t overrun. We will be back in touch closer to the time with further details.

I would like to urge you to come to Amsterdam and contribute to the event by giving a talk.  A great city, a great community and a great opportunity to share and learn makes for a fabulous three day experience.  Finally we would like to provide some tips on how to travel around Amsterdam in a safe and fun way.

I look forward to seeing you all there!


January 25

Unity Drives the Democratization of Development in 2016 with Eight Unite Conferences Globally

We are thrilled to announce dates and locations for our worldwide Unite conference series in 2016. Throughout the year, we will host eight Unite conferences around the world, including events in Amsterdam, Melbourne, São Paulo, Seoul, Shanghai, Singapore, Tokyo and Los Angeles. Each Unite conference will give local developers access to Unity engineers, hands-on workshops showcasing games made with Unity, peer networking, and new and upcoming products and services. The series is open to all – from students and indies to AAA studios – so we hope you can join us!

According to John Riccitiello, CEO of Unity Technologies: “I love seeing the amazing things that developers from the biggest publishers to the smallest indies are making on the Unity platform. Each year our conference series grows and provides the perfect listening platform for us to make sure we’re solving the hard problems game creators encounter. They also bring together developers to collaborate and to learn from each other, a key part of what makes our community stronger.”

Unite Europe 2016 returns to Amsterdam, Netherlands at the Westergasfabriek venue from May 31st – June 2nd, with a Unity Training Day on May 30th. With over 75 talks across 3 days and 4 tracks, Unite Europe 2016 will be Unity’s biggest European conference to date. We are seeking speaker submissions for the event, so if you are interested in participating, visit uniteeurope2016.unityproposals.com/


The conference series culminates with Unite ‘16 Los Angeles – Unity’s flagship conference for 2016 – on 1-3 November, held at the Loews Hollywood Hotel. The event will bring together thousands of developers from around the globe for over 70 sessions, allowing you to learn the latest tips, tricks and updates directly from Unity executives, developers and industry influencers. Unite Los Angeles will also feature the Unity Awards to honor the most exceptional games and experiences made with Unity over the past year. Finally, attendees will have access to an exhibition floor with over 30 exhibitors and a variety of networking receptions and parties.

The eight Unite conferences in 2016 are:

  • Unite Tokyo – 4-5 April
  • Unite Seoul – 7-8 April
  • Unite Shanghai – 11-12 April
  • Unite Amsterdam – 31 May – 2 June
  • Unite Southeast Asia – October
  • Unite São Paulo – 11 September
  • Unite Melbourne – November
  • Unite Los Angeles – 1-3 November

To learn more about the Unite conference series, visit: http://unite.unity.com.

GGX in Unity 5.3

In the Unity 5.3 Standard Shader, we have switched to GGX as the BRDF of choice both for analytical lights, such as point/directional lights, and for image based lighting. Furthermore, a complete overhaul has been performed on our implementation of cube map convolution to achieve accurate, noiseless results at low execution time (the latter part ships in Unity 5.4). The most characteristic difference between GGX and normalized Phong is that the microfacet distribution profiles associated with GGX have a higher and more narrow spike, followed by a prevailing tail, as we see here.

Profiles for GGX and Normalized Phong.

The impact of this on the final lit result is that GGX has a brighter highlight, followed by a trailing halo as shown below, which gives a more realistic appearance.

Comparison between GGX and conventional normalized Phong.

Cross-industry Compatible Materials

In academia, physically based BRDFs use roughness as the parameter to control the microfacet distribution function. Academic roughness is defined as the root mean square slope of the profile. A common misunderstanding is that roughness maps in CG are the same as academic roughness, which is not the case. The reason academic roughness is not used for texture maps or sliders is that the “blur levels” are not evenly distributed, which is very difficult to work with and also makes poor use of the limited bit precision of a texture map. To avoid confusion, Unity uses smoothness maps instead of roughness maps, where smoothness is converted into academic roughness in the shader using the formula (1-smoothness)^2. Distribution wise this is equivalent to Burley’s roughness, but reversed such that the most blurry response maps to 0.0 and a perfect mirror reflection maps to 1.0, which we find more intuitive.
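The (1-smoothness)^2 mapping is easy to sanity check numerically. Here is a small standalone sketch of the conversion and its endpoints (plain C# for illustration, not actual Unity shader code):

```csharp
using System;

public class SmoothnessDemo
{
    // Unity converts a smoothness value into academic roughness via (1 - s)^2.
    public static double ToRoughness(double smoothness) =>
        Math.Pow(1.0 - smoothness, 2.0);

    public static void Main()
    {
        Console.WriteLine(ToRoughness(0.0));  // 1    -> most blurry response
        Console.WriteLine(ToRoughness(0.5));  // 0.25 -> mid-range
        Console.WriteLine(ToRoughness(1.0));  // 0    -> perfect mirror reflection
    }
}
```

Note how the squaring spreads the perceptual “blur levels” more evenly across the 0–1 slider range than raw academic roughness would.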

The significance of such a standardized distribution is that it allows you to import content into Unity made with external tools and achieve similar results. Most CG painting tools today support smoothness maps. To be clear, an identical match is not guaranteed, but the proportionality between diffuse and specular brightness and the overall blurriness of the specular reflection should be close. The following comparison shot between Unity 5 and Substance Painter was kindly provided by Wes McDermott from Allegorithmic.

As we see the visuals are very similar. I would also like to thank Wes and Allegorithmic for their collaboration and helpful iteration on this. For more details on the subject people are encouraged to check out their detailed course on PBR and Unity 5.

Coming in Unity 5.4

In Unity 5.4 we have focused on improving the speed of cube map convolution and getting exceptionally clean visuals for the image based lighting (IBL). Below we see a comparison between a sphere lit in Unity 5.4 vs. a conventional path tracer at 50000 rays per pixel.

As we see, there is a significant amount of noise using the conventional path tracer on the right, even at 50000 rays per pixel. The reason is that a basic path tracer (BRDF importance sampling) struggles with environment maps containing hot singularities, such as a sun at physically proportional intensity. Unity 5.4 is now resilient to this problem, and the off-line cube map convolution is roughly 2 times faster than in Unity 5.2.

Monologue

Monologue is a window into the world, work, and lives of the community members and developers that make up the Mono Project, which is a free cross-platform development environment used primarily on Linux.

If you would rather follow Monologue using a newsreader, we provide the following feed:

RSS 2.0 Feed

Monologue is powered by Mono and the Monologue software.

Bloggers