Disclaimer: this is an automatic aggregator which pulls feeds and comments from many blogs of contributors that have contributed to the Mono project. The contents of these blog entries do not necessarily reflect Xamarin's position.

September 28

A Step-by-Step Guide to Building a Profitable Mobile Services Business Through Mobile DevOps

Mobile is unlocking new strategic competitive advantages and revenue streams for businesses, driving billions of dollars in mobile investment. This creates tremendous opportunity for Systems Integrators, Consulting Partners, and Digital Agencies, as clients turn to outside experts for strategic guidance on executing their mobile initiatives.

According to Gartner, the market demand for mobile app development services will grow at least five times faster than internal IT organization capacity to deliver them through 2017.

The Enterprise App Explosion: Scaling One to 100 Mobile Apps, Gartner, May 7, 2015

Xamarin Partners are uniquely positioned to help these businesses spend their mobile investments wisely and achieve mobile success. In this white paper, you’ll learn:

  • What the three core service opportunities are for technology partners today
  • How these three service lines align with the unique DevOps approach that mobile development requires
  • How you can start implementing these practices today to help grow your and your clients’ mobile businesses

Get the white paper

The post A Step-by-Step Guide to Building a Profitable Mobile Services Business Through Mobile DevOps appeared first on Xamarin Blog.

September 27

Android Archiving and Publishing Made Easy

With the release of Xamarin for Visual Studio 4.2 and this week’s Service Release 0, archiving and publishing your Android applications directly from Visual Studio just got a whole lot easier and more streamlined. The new Archive Manager inside of Visual Studio enables you to easily package, sign, and directly ship your Android apps for Ad-Hoc and Google Play distribution.

Archiving and Packaging

Creating your first archive for distribution is as easy as right-clicking on your Android project and selecting Archive:

[Screenshot: opening the Archive Manager]

This will automatically build your Android application, create an APK using the version name and code from your Android Manifest, and create the first Archive. This Archive is in a pre-release state that allows you to write release notes, check app size, browse app icons, and distribute your application.

[Screenshot: the first archive, ready for distribution]

Distributing the App Ad-Hoc

Clicking on the Distribute… button will open the new Distribute workflow automatically in Ad-Hoc mode and will enable us to create, import, and store a keystore that will be used for signing the package.

[Screenshot: the Distribute workflow in Ad-Hoc mode]

Since this is our first project, we can create a new keystore and fill in the required fields. Whether we create a new keystore or import an existing one, it will be saved in secure storage so we can easily sign our applications in the future without having to search the machine for it.

[Screenshot: creating a new keystore]

Now we can use the keystore by tapping on it and then clicking Save As, which signs the app and lets us save the APK to disk, ready to send to a distribution service such as HockeyApp.

Distributing to Google Play

While we often create development and test builds, at times we may want to publish directly to Google Play for production, which the Archive Manager also enables during the distribution flow. This assumes we have already created the app inside the Google Play Developer Console, turned on Alpha or Beta testing, and published at least one release. Back in the Archive Manager, select an archive to distribute and then click the Distribute… button. This brings up the Ad-Hoc distribution flow, but clicking the back button reveals an option for Google Play distribution:

[Screenshot: the Google Play distribution option]

Selecting Google Play brings us back to the keystore selection to sign the app, but this time there is a Continue button that lets us add our Google Play account.

[Screenshot: adding a Google Play account]

Setting up Google Play API access is as easy as signing into our Google Play developer account, going to API Access in Settings, and creating a new OAuth client. This gives us the Client ID and Client Secret to enter into the dialog.

[Screenshot: registering Google Play API credentials]

Click Register to finish registration, which will launch a web browser to finalize the OAuth flow and add your account.
[Screenshot: completing the OAuth flow]

Once the account is registered, we can select it and continue to select a distribution channel to publish our application in:
[Screenshot: selecting a Google Play distribution channel]

There you have it: now you can create a keystore, package an Android app for Ad-Hoc distribution, and take it all the way to production on Google Play without ever leaving Visual Studio!

Learn More

To learn more about preparing an Android application for release, be sure to read through our full documentation. You can find an in-depth overview of each step of the archiving and publishing process for both Visual Studio and Xamarin Studio in our documentation for Ad-Hoc and Google Play distribution.

The post Android Archiving and Publishing Made Easy appeared first on Xamarin Blog.

September 26

Speech Recognition in iOS 10

Speech is increasingly becoming a big part of building modern mobile applications. Users expect to be able to interact with apps through speech, so much so that speech is developing into a user interface itself. iOS contains multiple ways for users to interact with their mobile device through speech, mainly via Siri and Keyboard Dictation. iOS 10 vastly improves developers’ ability to build intelligent apps that can be controlled not only via a typical user interface, but by speech as well through the new SiriKit and Speech Recognition APIs.

Prior to iOS 10, Keyboard Dictation was the only way for developers to enable users to interact with their apps through speech. It came with many limitations: it only worked through user interface elements that support TextKit, it was limited to live audio, and it didn’t support attributes such as timing and confidence. Speech Recognition in iOS 10 doesn’t require any particular user interface elements, supports both prerecorded and live speech, and provides lots of additional context for transcriptions, such as multiple interpretations, confidence levels, and timing information. In this blog post, you will learn how to use the new iOS 10 Speech Recognition API to perform speech-to-text in a mobile app.

Introduction to Speech Recognition

The Speech Recognition API is available as part of the iOS 10 release from Apple. To ensure that you can build apps using the new iOS 10 APIs, confirm that you are running the latest Stable-channel release of Xamarin in Visual Studio or Xamarin Studio. Speech recognition can be added to our iOS applications in just a few steps:

  1. Provide a usage description in the app’s Info.plist file for the NSSpeechRecognitionUsageDescription key.
  2. Request authorization to use speech recognition by calling SFSpeechRecognizer.RequestAuthorization.
  3. Create a speech recognition request and pass it to an SFSpeechRecognizer to begin recognition.

Providing a Usage Description

Privacy is a big part of building mobile applications; both iOS and Android have recently revamped the way apps request user permissions such as access to the camera or microphone. Because audio is temporarily transmitted to and stored on Apple’s servers to perform recognition, user permission is required. Be sure to take other privacy considerations into account when deciding to use the Speech Recognition API.

To enable us to use the Speech Recognition API, open Info.plist and add NSSpeechRecognitionUsageDescription as the Property, String as the Type, and the message you would like to display to the user when requesting permission to use speech recognition as the Value.

[Screenshot: Info.plist for requesting user permissions]

Note: If the app will be performing live speech recognition, you will need to add an additional entry with the property `NSMicrophoneUsageDescription`.
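
For reference, these two entries in the raw Info.plist XML look something like the following sketch; the description strings are example text, not taken from the original post:

<key>NSSpeechRecognitionUsageDescription</key>
<string>This app uses speech recognition to convert your voice to text.</string>
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone for live speech recognition.</string>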

Request Authorization for Speech Recognition

Now that we have added our key(s) to Info.plist, it’s time to request permission from the user by using the SFSpeechRecognizer.RequestAuthorization method. This method has one parameter, an `Action<SFSpeechRecognizerAuthorizationStatus>`, that allows us to handle the various scenarios that could occur when we ask the user for permission:

  • SFSpeechRecognizerAuthorizationStatus.Authorized: Permission granted from the user.
  • SFSpeechRecognizerAuthorizationStatus.Denied: Permission denied from the user.
  • SFSpeechRecognizerAuthorizationStatus.NotDetermined: Awaiting Permission approval from user.
  • SFSpeechRecognizerAuthorizationStatus.Restricted: Device does not allow usage of SFSpeechRecognizer
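
Putting this together, a minimal sketch of the authorization request might look like the following (UI handling is app-specific, and the callback may arrive on a background thread):

// Requires: using Speech;
SFSpeechRecognizer.RequestAuthorization(status =>
{
    switch (status)
    {
        case SFSpeechRecognizerAuthorizationStatus.Authorized:
            // Permission granted; safe to start speech recognition.
            break;
        case SFSpeechRecognizerAuthorizationStatus.Denied:
        case SFSpeechRecognizerAuthorizationStatus.Restricted:
            // Permission unavailable; disable speech features in the UI.
            break;
        case SFSpeechRecognizerAuthorizationStatus.NotDetermined:
            // Still awaiting the user's decision.
            break;
    }
});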

Recognizing Speech

Now that we have permission, let’s write some code to use the new Speech Recognition API! Create a new method named RecognizeSpeech that takes in an NSUrl as a parameter. This is where we will perform all of our speech-to-text logic.

public void RecognizeSpeech(NSUrl url)
{
    var recognizer = new SFSpeechRecognizer();
    // Is the default language supported?
    if (recognizer == null)
        return;
    // Is recognition available?
    if (!recognizer.Available)
        return;
}

SFSpeechRecognizer is the main class for speech recognition in iOS 10. In the code above, we “new up” an instance of this class. If speech recognition is not available in the current device language, the recognizer will be null. We can then check if speech recognition is available and authorized before using it.

Next, we’ll create and issue a new SFSpeechUrlRecognitionRequest with a local or remote NSUrl to select which prerecorded audio to recognize. Finally, we can use the SFSpeechRecognizer.GetRecognitionTask method to issue the speech recognition call to the server. Because recognition is performed incrementally, we can use the callback to update our user interface as results return. When speech recognition is completed, SFSpeechRecognitionResult.Final will be set to true, and we can use SFSpeechRecognitionResult.BestTranscription.FormattedString to access the final transcription.

// Create recognition task and start recognition
var request = new SFSpeechUrlRecognitionRequest(url);
recognizer.GetRecognitionTask(request, (SFSpeechRecognitionResult result, NSError err) =>
{
    // Was there an error?
    if (err != null)
    {
        var alertViewController = UIAlertController.Create("Error", $"An error recognizing speech occurred: {err.LocalizedDescription}", UIAlertControllerStyle.Alert);
        PresentViewController(alertViewController, true, null);
    }
    else
    {
        // Update the user interface with the speech-to-text result.
        if (result.Final)
            SpeechToTextView.Text = result.BestTranscription.FormattedString;
    }
});

That’s it! Now we can run our app and perform speech-to-text using the new Speech Recognition APIs as part of iOS 10.
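
For example, to transcribe a prerecorded file shipped in the app bundle (the file path here is a hypothetical example):

// Transcribe a sample audio file from the app bundle.
var url = NSUrl.FromFilename("Sounds/recording.m4a"); // hypothetical bundled file
RecognizeSpeech(url);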

Performing More Complex Speech & Language Operations

The Speech Recognition APIs from iOS 10 are great, but what if we need something a bit more complex? Microsoft Cognitive Services has a great set of language APIs for handling speech and natural language, from speaker recognition to understanding speaker intent. For more information about Microsoft Cognitive Services language and speech APIs, check out the Microsoft Cognitive Services webpage.

Wrapping Up

In this blog post, we took a look at the new Speech Recognition APIs that are available to developers as part of iOS 10. For more information on the Speech Recognition APIs, visit our documentation. Mobile applications that want to build conversational user interfaces should also check out the documentation on iOS 10’s SiriKit. To download the sample from this blog post, visit my GitHub.

The post Speech Recognition in iOS 10 appeared first on Xamarin Blog.

September 22

Iowa Caucuses Launch Inaugural Polling Apps with Xamarin

As the 2016 election continues to heat up, we’re putting a spotlight on where it all began: the Iowa Caucuses. The February 1, 2016 Iowa Caucus kicked off the US Presidential nominations, and early poll results traditionally play a huge role in the Republican and Democratic Parties’ candidate selection. This year, both parties partnered with Microsoft and InterKnowlogy, a Microsoft Gold Partner, to create Xamarin-based mobile apps, boosting the accuracy and security of the Caucus, as well as making it easier for precinct voters to cast their ballots.

During the 2012 Iowa Caucuses, the Republican Party incorrectly reported its winning candidate, and the complex caucus voting rules and reporting process made the true outcome almost impossible to determine. The touchtone phone-based system was prone to error, most notably precincts submitting duplicate entries that skewed results.

Determined to avoid issues and increase public confidence in election results, both Parties realized mobile technologies offered the best solution, but delivering apps that met the standards required for such an important event wasn’t without challenges.

The Iowa Caucus Apps’ criteria, at a glance:

  • As consumer-facing apps, both Parties needed phone and tablet versions to distribute via all major public app stores, resulting in 12 apps across Android, iOS, and Windows.
  • Security and fidelity were a must, especially user authentication. While the app was publicly available, only registered Caucus Precinct Chairs were granted access to the reporting functionality. Timing was also important: Precinct Chairs needed to access reporting immediately when voting opened, but not a moment beforehand. To validate user identity, InterKnowlogy incorporated two-factor authentication.
  • Since Iowa Caucus participants cover all demographics, including less tech-savvy citizens, the apps needed to be highly intuitive and responsive, requiring little training and eliminating the ability to mistakenly report information.
  • The apps needed to handle complex logic, calculate and validate results according to party rules, catch invalid entries, and include prompts for conditional voting processes. Before results were submitted and announced to the public, they needed to be validated with any anomalies flagged for analysis.

After a diligent requirements-gathering and user experience design process, the InterKnowlogy team faced an extremely aggressive four-month timeline. However, using Xamarin, Microsoft Azure, and their deep Microsoft expertise, they successfully delivered apps across all platforms with just five .NET developers dedicated to the project. On Caucus day in Des Moines, the final apps captured 90% of caucus results within three hours in a secure, accurate, and trusted manner.

View the Case Study

Start building your own native Android, iOS, and Windows apps with Xamarin today at xamarin.com/download.

The post Iowa Caucuses Launch Inaugural Polling Apps with Xamarin appeared first on Xamarin Blog.

September 21

Xamarin at Microsoft Ignite

Xamarin will be in full force at Microsoft Ignite September 26–30!

If you’re heading to Georgia, you can find us at the “Mobile Development & Xamarin” totem in the Developer Tools section of the Cloud + Enterprise area of the expo floor.

You’ll also have the opportunity to attend Pierce Boggan’s Pre-Day Training session, “Build Cross-platform Enterprise Mobile Apps with Visual Studio and Xamarin” on Sunday, September 25. Additionally, Xamarin’s Dan Waters will present a theater session on how to “Ship Better Mobile Apps Faster with Continuous Delivery”. Other sessions include:

Visit Microsoft Ignite to view the full agenda and add a calendar reminder to join the event online if you won’t be attending in person.

The post Xamarin at Microsoft Ignite appeared first on Xamarin Blog.

September 20

Enhanced Notifications in Android N with Direct Reply

One of my favorite parts of Android has to be its notification system, which enables developers to connect with their users outside of the main application. With the launch of Android N, notifications are getting a visual makeover, including a new material design with re-arranged and resized content to make them easier to digest, plus new Android N-specific details such as the app name and an expander. Here is a nice visual overview of the change from Android M to N:

[Image: notification appearance in Android M vs. Android N]

Visuals aren’t the only thing getting updated in Android N; there are also a bunch of great new features for developers to take advantage of. Bundled Notifications allow developers to group notifications together by using the Builder.SetGroup() method (a short sketch follows below). Custom Views have been enhanced, and it is now possible to use the system notification headers, actions, and expanded layouts with a custom view. Finally, my favorite new feature has to be Direct Reply, which allows users to reply to a message from within a notification without even opening the application, similar to how Android Wear applications can send text back to the main application.

[Screenshot: Direct Reply in an Android N notification]
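
As a quick illustration of bundling, here is a minimal sketch assuming we’re inside an Activity; the titles, text, and the "chat_group" key are example values:

// Notifications that share a group key get bundled together on Android N.
var first = new NotificationCompat.Builder(this)
    .SetSmallIcon(Resource.Drawable.reply)
    .SetContentTitle("Message from Anna")
    .SetContentText("See you at 6?")
    .SetGroup("chat_group") // same key on both notifications
    .Build();
var second = new NotificationCompat.Builder(this)
    .SetSmallIcon(Resource.Drawable.reply)
    .SetContentTitle("Message from Ben")
    .SetContentText("Running late!")
    .SetGroup("chat_group")
    .Build();
using (var manager = NotificationManagerCompat.From(this))
{
    manager.Notify(1, first);
    manager.Notify(2, second);
}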

Getting Started

In previous versions of Android, all developers could handle were taps on the notification or on its actions, which would launch an Activity or a service/broadcast receiver. The idea of Direct Reply is to extend an action with a RemoteInput so users can reply to a message without having to launch the application. It’s still best practice to handle responding to messages inside of an Activity as well, since the user may decide to tap on the notification or may be on an older operating system.

A prerequisite to implementing Direct Reply is a broadcast receiver or service that can receive and process the incoming reply from the user. For this example, we’ll launch a notification from our MainActivity that sends an Intent with the action “com.xamarin.directreply.REPLY”, which our broadcast receiver will filter on.

First, ensure that the latest Android Support Library v4 NuGet is installed in the Android application to use the compatibility mode for notifications.

In our MainActivity, we’ll create a few constant strings that can be referenced later in the code:

int requestCode = 0;
public const string REPLY_ACTION = "com.xamarin.directreply.REPLY";
public const string KEY_TEXT_REPLY = "key_text_reply";
public const string REQUEST_CODE = "request_code";

Create a Pending Intent

An Android PendingIntent is a description of an Intent and a target action to perform with it. In this case, we want to create one that triggers our reply action if the user is on Android N, or that launches the MainActivity on older devices.

Intent intent = null;
PendingIntent pendingIntent = null;
// If Android N or newer, enable Direct Reply; otherwise launch the main activity.
if ((int)Build.VERSION.SdkInt >= 24)
{
    intent = new Intent(REPLY_ACTION)
        .AddFlags(ActivityFlags.IncludeStoppedPackages)
        .SetAction(REPLY_ACTION)
        .PutExtra(REQUEST_CODE, requestCode);
    pendingIntent = PendingIntent.GetBroadcast(this, requestCode, intent, PendingIntentFlags.UpdateCurrent);
}
else
{
    intent = new Intent(this, typeof(MainActivity));
    intent.AddFlags(ActivityFlags.ClearTop | ActivityFlags.NewTask);
    pendingIntent = PendingIntent.GetActivity(this, requestCode, intent, PendingIntentFlags.UpdateCurrent);
}

Create and Attach RemoteInput

The key to Direct Reply is to create and attach a RemoteInput, which tells Android that the action we’re adding is a direct reply and should therefore allow the user to enter text.

var replyText = "Reply to message...";
// Create a RemoteInput that will read text entered into the notification.
var remoteInput = new Android.Support.V4.App.RemoteInput.Builder(KEY_TEXT_REPLY)
    .SetLabel(replyText)
    .Build();

After we have the RemoteInput, we can create a new action and attach the RemoteInput to it:

var action = new NotificationCompat.Action.Builder(Resource.Drawable.action_reply, replyText, pendingIntent)
    .AddRemoteInput(remoteInput)
    .Build();

Build and Send Notification

With our action and remote input created, it’s finally time to build and send the notification.

var notification = new NotificationCompat.Builder(this)
    .SetSmallIcon(Resource.Drawable.reply)
    .SetLargeIcon(BitmapFactory.DecodeResource(Resources, Resource.Drawable.avatar))
    .SetContentText("Hey, it is James! What's up?")
    .SetContentTitle("Message")
    .SetAutoCancel(true)
    .AddAction(action)
    .Build();

using (var notificationManager = NotificationManagerCompat.From(this))
{
    notificationManager.Notify(requestCode, notification);
}

Now our notification is live with the remote input visible:
[Screenshot: notification with the Direct Reply input visible]

Processing Input

When the user inputs text into the direct reply, we’re able to retrieve the text from the Intent that is passed in with just a few lines of code:

var remoteInput = RemoteInput.GetResultsFromIntent(Intent);
var reply = remoteInput?.GetCharSequence(MainActivity.KEY_TEXT_REPLY) ?? string.Empty;

This should be done in a background service or broadcast receiver with the “com.xamarin.directreply.REPLY” Intent Filter specified.

Here’s our final BroadcastReceiver that will pop up a toast message and will update the notification to stop the progress indicator in the notification:

/// <summary>
/// A receiver that gets called when a reply is sent.
/// </summary>
[BroadcastReceiver(Enabled = true)]
[Android.App.IntentFilter(new[] { MainActivity.REPLY_ACTION })]
public class MessageReplyReceiver : BroadcastReceiver
{
    public override void OnReceive(Context context, Intent intent)
    {
        if (!MainActivity.REPLY_ACTION.Equals(intent.Action))
            return;

        int requestId = intent.GetIntExtra(MainActivity.REQUEST_CODE, -1);
        if (requestId == -1)
            return;

        var reply = GetMessageText(intent);

        using (var notificationManager = NotificationManagerCompat.From(context))
        {
            // Create a new notification to display, or re-build the existing
            // conversation to update it with the new response.
            var notificationBuilder = new NotificationCompat.Builder(context);
            notificationBuilder.SetSmallIcon(Resource.Drawable.reply);
            notificationBuilder.SetContentText("Replied");
            var repliedNotification = notificationBuilder.Build();

            // Call Notify to stop the progress spinner on the notification.
            notificationManager.Notify(requestId, repliedNotification);
        }

        Toast.MakeText(context, $"Message sent: {reply}", ToastLength.Long).Show();
    }

    /// <summary>
    /// Get the message text from the intent.
    /// Note that you should call RemoteInput.GetResultsFromIntent(intent)
    /// to process the RemoteInput.
    /// </summary>
    /// <returns>The message text.</returns>
    /// <param name="intent">The intent carrying the reply.</param>
    static string GetMessageText(Intent intent)
    {
        var remoteInput = RemoteInput.GetResultsFromIntent(intent);
        return remoteInput?.GetCharSequence(MainActivity.KEY_TEXT_REPLY) ?? string.Empty;
    }
}

Learn More

To learn more about the great new features in Android N, including Notification enhancements, be sure to read our full Android N Getting Started Guide. You can find a full example of Direct Reply and other notification enhancements in our Samples Gallery.

The post Enhanced Notifications in Android N with Direct Reply appeared first on Xamarin Blog.

September 19

New iOS 10 Privacy Permission Settings

If you’ve ever built an iOS application, you’ll already be familiar with requesting app permissions (and most likely are familiar with Android, too, since the Marshmallow release). Prior to iOS 10, if an app wanted access to a user’s location or to use push notifications, it would prompt the user to grant permission.

In iOS 10, Apple has changed how most permissions are controlled by requiring developers to declare ahead of time any access to a user’s private data in their Info.plist. In this blog post, you’ll learn how to ensure your existing Xamarin apps continue to work flawlessly with iOS 10’s new permissions policy.

Example iOS 9 Permissions Request

For instance, if we wanted to integrate photos into our application, we would want to request permission with the following code:

PHPhotoLibrary.RequestAuthorization(status =>
{
  switch(status)
  {
    case PHAuthorizationStatus.Authorized:
      break;
    case PHAuthorizationStatus.Denied:
      break;
    case PHAuthorizationStatus.Restricted:
      break;
    default:
      break;
  }
 });

The above code brings up a dialog box requesting permission, whose result we can handle, with the message provided directly by the system.

What’s New in iOS 10

Starting in iOS 10, nearly all APIs that require requesting authorization, as well as other APIs such as opening the camera or photo gallery, require a new key/value pair in the Info.plist to describe their usage. This is very similar to the existing requirement to put NSLocationWhenInUseUsageDescription or NSLocationAlwaysUsageDescription into the Info.plist when using geolocation and iBeacon APIs. The difference now is that the application will crash when it attempts authorization without these keys set. These include use of:

  • Bluetooth Sharing
  • Calendar
  • CallKit/VoIP
  • Camera
  • Contacts
  • Health
  • HomeKit
  • Location
  • Media Library
  • Microphone
  • Motion
  • Photos
  • Reminders
  • Speech Recognition
  • SiriKit
  • TV Provider

These new attributes only take effect when we start compiling against the iOS 10 SDK, which means we must provide the keys when using these APIs. If we want to use the Media Plugin for Xamarin and Windows, for example, to take or browse for a photo, we must add the following privacy settings to the Info.plist file:

[Screenshot: privacy entries in the Info.plist properties editor]

When we attempt to pick a photo, our message will be shown to the user:

[Screenshot: permission prompt displaying our usage message]

Each of the privacy keys maps to a specific entry in the Info.plist. Opening the file in a text editor, we’ll see the following:

<key>NSCameraUsageDescription</key>
<string>This app needs access to the camera to take photos.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>This app needs access to photos.</string>

Here’s a mapping of each of the values in case you need to manually add them to the Info.plist:

  • Bluetooth Sharing – NSBluetoothPeripheralUsageDescription
  • Calendar – NSCalendarsUsageDescription
  • CallKit – NSVoIPUsageDescription
  • Camera – NSCameraUsageDescription
  • Contacts – NSContactsUsageDescription
  • Health – NSHealthShareUsageDescription & NSHealthUpdateUsageDescription
  • HomeKit – NSHomeKitUsageDescription
  • Location – NSLocationUsageDescription, NSLocationAlwaysUsageDescription, NSLocationWhenInUseUsageDescription
  • Media Library – NSAppleMusicUsageDescription
  • Microphone – NSMicrophoneUsageDescription
  • Motion – NSMotionUsageDescription
  • Photos – NSPhotoLibraryUsageDescription
  • Reminders – NSRemindersUsageDescription
  • Speech Recognition – NSSpeechRecognitionUsageDescription
  • SiriKit – NSSiriUsageDescription
  • TV Provider – NSVideoSubscriberAccountUsageDescription

Learn More

To learn more about these keys, be sure to read through Apple’s Cocoa Keys documentation. To learn more about the new APIs and changes in iOS 10, be sure to read through our Introduction to iOS 10 guide and our new iOS Security and Privacy Enhancements documentation.

The post New iOS 10 Privacy Permission Settings appeared first on Xamarin Blog.

September 16

Xamarin Around the World with Xamarin Dev Days

Xamarin Dev Days are the place to find free hands-on Xamarin training, live demos, and a fun environment to build your very own cloud-based Xamarin.Forms application with Azure. User groups around the world are working to provide events in their cities offering developers the opportunity to learn native mobile development for iOS, Android, and Windows from the ground up. Xamarin Dev Days have been so popular, we are here to announce another set of brand new cities all across the globe.

What are Xamarin Dev Days?

They are community-run, comprehensive introductions to building mobile apps with Xamarin, Xamarin.Forms, and creating cloud-connected mobile apps with Microsoft Azure. After lunch, there is an opportunity to put new skills into practice with a hands-on workshop. Whether you are a brand new or experienced C#/.NET developer, every attendee will walk away with a better understanding of how to build, test, and monitor native iOS, Android, and Windows apps.


MORE Xamarin Dev Days!

9/23: Abuja, Nigeria
10/1: Bogotá, Colombia
10/1: Bangalore, India
10/8: Hanoi, Vietnam
10/8: Monterrey, Mexico
10/8: London, United Kingdom
10/8: Dakar, Senegal
10/8: Dallas, TX
10/15: Jaipur, India
10/15: Cádiz, Spain
10/15: Ankara, Turkey
10/22: Toronto, Canada
10/29: Chiapas, Mexico
10/29: Gliwice, Poland
10/29: Moka, Mauritius
11/05: Sousse, Tunisia
11/05: Kernersville, NC
11/12: Cleveland, OH
11/18: Berlin, Germany
11/19: Cranbury, NJ
11/19: Bournemouth, United Kingdom
11/25: Bari, Italy
11/26: Paris, France
12/10: Dubai, UAE

If you’re looking for an event in your area, visit the Xamarin Dev Days website for a full list of all of the Xamarin Dev Days events and an interactive map to help you find one nearby.

Want a Xamarin Dev Days in Your City?

Apply as a Xamarin Dev Days host! We’ll provide you with everything you need for a fantastic Xamarin Dev Days event, including all of the speaker content and lab walkthroughs, hosting guidelines to help organize your event, and assistance with registration and promotion. Hurry and apply for your city now—the deadline for events in 2016 closes soon!

Sponsoring Xamarin Dev Days

We’re working with tons of Xamarin Partners and community members to help facilitate the Xamarin Dev Days series. If your company is interested in participating in these awesome events, apply as a sponsor and get global recognition and access to our worldwide developer community!

The post Xamarin Around the World with Xamarin Dev Days appeared first on Xamarin Blog.

Scaling from Side Project to 200,000+ Downloads with Xamarin and Microsoft Azure

As mobile technology evolves, developers everywhere are building new, innovative apps that capture our interest and improve our lives, from creating unique social media communities to developing digital assistants and bots.

With over 200,000 downloads and 4+ stars, Foundbite strikes the right balance of practical and engaging, with apps that allow users to add sound to static images, creating “foundbites” that bring experiences, events, and places to life for their friends, followers, and fans.

James Mundy, Foundbite Founder and Lead Developer, shares how he got started with mobile development and how he was able to use his C# skills to get Foundbite into the hands of Android, iOS, and Windows users everywhere.

Tell us a little bit about your company and role. Have you always been a developer?

I started developing Foundbite while studying Physics at university in 2012. Now, we’re a London-based team of three building an app that allows you to share and explore sounds from around the world.

I started building the app as a side project. I was able to secure some funding from Microsoft and Nokia to bring it to Windows Phone first, so the very first version of our app was built in C#. Since I’d written several Windows Phone apps before this, it was a good fit.

Tell us about your app / what prompted you to build it.

The idea behind Foundbite is to allow people to share and explore the sounds of the world around them from their phone. With Foundbite, users record five seconds to five minutes of sound, add photos to give the sound context, and tag it with a location.

Users can share their creations with friends (through Facebook and Twitter) and the public Foundbite community. We also have an interactive global map that allows users to search, find, and listen to sounds from places all over the world, getting a real feeling for what it’s like to be there.

What is the most compelling or exciting aspect of your app?

The feature that resonates most with our users is its truly global nature—we’ve had uploads from the UK, US, Taiwan, Iran, China, and more—and the ability to explore a map, find a place you’re interested in or haven’t heard of before, and then listen to the sounds that another user has recorded. Recording the sound of a place really does ignite your imagination and give you a feel for what it’s like to be there.

Some Foundbite examples include: the Tennis World Tour Finals at O2 Arena, a bullet train passing in Taiwan, and the crowd cheering at the Seattle Seahawks’ stadium, plus many more on the website.

How long did it take to ship your app, from design to deploy?

Thanks to Xamarin, around 60% of our code base is shared across the Windows, iOS, and Android apps. This makes maintaining code and diagnosing bugs far easier, but the main advantage is that we’ve been able to deploy three highly rated apps to three different platforms with a team of just two full-time developers.

We use Microsoft Azure for our backend, so we have a full Microsoft and .NET stack. We use Azure Notification Hubs, Azure Search, Redis, Azure SQL, and Azure App Service, so we also have code shared between our app client projects and our server-side code, which is ideal!

How long would it have taken you without Xamarin?

It would have taken us significantly longer to develop the apps. We already had experience with C#, and we would have had to learn Objective-C/Swift and Java, replicating a lot of code in those languages that we had already written in C# for the Windows app.

Even though we were building the apps in C#, there was still a lot to learn about the iOS and Android platform APIs and the nuances of each platform. Overall, the APIs were well documented, and there are very active Xamarin Forums and Stack Overflow communities to turn to for help. Even without that, it’s very easy to adapt samples written in Swift/Objective-C to C#.

Are you using mobile DevOps / CI?

We’re starting to use Xamarin Test Cloud and a TFS build server to improve our internal processes and the quality and reliability of the builds we push out to our users.

What’s your team planning to build next?

We’ve got lots more features planned, like the ability to combine several Foundbites into a collection to document a trip or event even better. Thanks (again) to Xamarin, we hope to roll this out to our users nearly simultaneously across all platforms.

What advice do you have for developers who are just starting out or investigating mobile development? Any best resources?

I’d recommend starting simple and using GitHub to find other mobile (Xamarin or otherwise) projects that developers have done and open sourced. I found this to be particularly useful in working out how apps were built and how to solve problems as I built my own app.

What would you say to a developer or enterprise just starting mobile development?

I’d definitely advise starting off with Xamarin—there’s less repeated code, you can have a more versatile, smaller team with the potential for everyone to be able to work on each platform, and a quicker development cycle, which are all advantageous for any company, whether big or small.

Using Xamarin as an early stage company has enabled us to write less, better code with a smaller team to reach more customers quicker.

To learn how our customers around the world are building amazing apps, visit xamarin.com/customers, and start building your own today at xamarin.com/download.

The post Scaling from Side Project to 200,000+ Downloads with Xamarin and Microsoft Azure appeared first on Xamarin Blog.

September 15

Start Building Azure-Connected Apps with the Xamarin Shopping Demo App

Today I’m excited to announce that we’re making our latest sample app, the Shopping Demo App, available to download from GitHub. We worked closely with the Microsoft Azure team to create this great business-to-consumer sample app, available for iOS, Android, and Windows 10.

The Shopping Demo App is a classifieds marketplace that uses a wide range of Microsoft Azure services to create a mobile-unique experience. Users authenticate with either Facebook or Twitter to begin an interactive experience of searching, selling, or buying items. Sellers upload photos and list prices, and it even uses push notifications to let vendors know when their items are sold. Buyers and sellers can rate the app using Microsoft Cognitive Services’ emotion detection capabilities. Microsoft Cognitive Services’ Emotion API detects smiles, frowns, or neutral expressions and assigns a star rating accordingly.

We developed the Shopping Demo App to highlight how any developer can create powerful, scalable mobile apps with Xamarin and Azure. Developers can quickly connect to more than 100 Azure services, including App Service, Storage, Data Sync, and Cognitive Services. You can use this backend project to jumpstart your own mobile backend, as it tackles common mobile scenarios, such as user authentication, offline storage and data sync, and the ability to scale to millions of requests and users. We’ve also created five Quick Starts, breaking each Shopping Demo App Azure service into simple, easy-to-follow modules.


Learn More

Getting started with Shopping Demo App couldn’t be easier—all code for the mobile apps and backend are available on GitHub.

If you already have an Azure subscription, you can easily publish using a deployment project. If you don’t, be sure to get started with your free 30-day Azure trial at azure.com/xamarin.

The post Start Building Azure-Connected Apps with the Xamarin Shopping Demo App appeared first on Xamarin Blog.

Monologue

Monologue is a window into the world, work, and lives of the community members and developers that make up the Mono Project, which is a free cross-platform development environment used primarily on Linux.

If you would rather follow Monologue using a newsreader, we provide the following feed:

RSS 2.0 Feed

Monologue is powered by Mono and the Monologue software.

Bloggers