ailon's DevBlog: Development related stuff in my life

AdDuplex on Hanselminutes

3/12/2012 3:57:20 PM

image

I was honored to be interviewed by Scott Hanselman on his great podcast, Hanselminutes. We talked about AdDuplex, startups in general, and doing them from Lithuania in particular. Check it out.


More on Cloud Storage/CDN pricing

2/21/2012 3:26:02 PM

I’ve blogged about my attempts to understand the cost structure of serving (publicly, over HTTP) large numbers of small files from Azure Blob Storage (or the CDN, for that matter). It was more complicated than it had to be, and I’m still not sure I understand the reasoning behind it, but at least the answer was clear: you pay for both bandwidth and transactions if you serve files publicly from Azure Blob Storage.

Last week I had the pleasure of attending TechDays Belgium, and I just couldn’t miss the opportunity to give my feedback on this issue to Scott Guthrie himself. I also talked about it with Windows Azure MVPs Maarten Balliauw and Panagiotis Kefalidis. The funny part: they all started their responses with …

… but transactions are dirt-cheap!

Even though my beef/confusion is with the pricing structure rather than the actual costs, let’s get this “cheap” argument out of the way. Suppose you serve 100 million ~7 KB files out of your blob storage per month. These could be design elements of your site, your CSS or JavaScript files, or small banner ads (in my case). A CDN would probably be a more appropriate solution for these scenarios, but the pricing structure and costs (more on this later) are identical, so it doesn’t matter in this context.

Let’s see… We serve 100 million 7 KB requests. That’s roughly 700 GB of bandwidth. Since the pricing for North American and European bandwidth differs from the rest of the world, let’s assume that 500 GB goes to NA/EU and 200 GB elsewhere. Entering this data into the Azure pricing calculator, we get this:

image

Transactions would cost us approximately as much as bandwidth. So no, transactions are not dirt-cheap. At least not in all scenarios.
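To make the “not dirt-cheap” point concrete, here’s a quick back-of-the-envelope sketch. The rates below are my assumptions, roughly matching the early-2012 price list ($0.12/GB for NA/EU bandwidth, $0.19/GB for the rest of the world, $0.01 per 10,000 storage transactions); plug in the current calculator numbers for real figures.

```csharp
using System;

// Back-of-the-envelope sketch of the scenario above. The rates are
// assumptions based on the early-2012 price list; check the Azure
// pricing calculator for the numbers that actually apply to you.
class StorageCostSketch
{
    const double NaEuPerGb = 0.12;        // assumed NA/EU bandwidth rate, $/GB
    const double OtherPerGb = 0.19;       // assumed rest-of-world rate, $/GB
    const double PerTenThousandTx = 0.01; // assumed cost per 10,000 transactions

    public static double BandwidthCost(double naEuGb, double otherGb)
    {
        return naEuGb * NaEuPerGb + otherGb * OtherPerGb;
    }

    public static double TransactionCost(double requests)
    {
        return requests / 10000.0 * PerTenThousandTx;
    }

    static void Main()
    {
        // 100 million ~7 KB requests: ~500 GB to NA/EU, ~200 GB elsewhere
        Console.WriteLine("Bandwidth:    ${0:F2}", BandwidthCost(500, 200));    // ~$98
        Console.WriteLine("Transactions: ${0:F2}", TransactionCost(100000000)); // ~$100
    }
}
```

With these assumed rates, the 100 million transactions (~$100) cost slightly more than the 700 GB of bandwidth (~$98), which is why “dirt-cheap” doesn’t hold in this scenario.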

But it’s not about the price - it’s about pricing structure

It took me a while, but now I understand how these things are priced. What I don’t understand is why. The most often repeated selling point of all cloud services is “pay for what you use”. So forgive me if I want to understand what I use when I’m billed for transactions. I understand that storage space costs something, and I understand that bandwidth costs something, but I don’t see where the cost of a “transaction” comes from. It looks just like an HTTP request to me, and yet, for some odd reason, I don’t get billed for HTTP requests to my web roles. Does it cost Microsoft twice as much to serve 700 GB in 7 KB files as 700 GB in 700 KB files?

image

Don’t get me wrong, I’m not saying that there’s no cost associated with so-called transactions. I’m just saying that it’s not communicated well enough.

What about CDN?

Maarten and Panagiotis suggested that there are no transaction costs for the CDN. I was pretty sure we had researched that option and there was a transaction cost there too, but who am I to argue with two Azure MVPs ;) Back home, I checked: I was right, and there’s a very good reason why they thought otherwise. Here’s how the CDN portion of the advanced pricing calculator looks:

image

Yep. No slider for transactions. Only when you hover over the “?” do you start getting some hints:

image

And only when you click the “Learn more” link does it become crystal clear:

image

So, WTF do you want, Alan!?

Well, when I’m sold on a “pay for what you use” model, I want to actually understand what I use. And as I mentioned above, that’s not very clear to me in this case. To be honest, I think the only reason this pricing structure exists is that it was simply copied from Amazon S3 in the early days:

image

and for CDN:

image

Except Amazon calls transactions “requests”, which is much clearer in my opinion. Other CDN providers like CacheFly or MaxCDN/NetDNA don’t charge for transactions/requests (at least at first glance).

So, most of all, I’d like to see the transaction/request costs go away, even if it doesn’t make the service cheaper overall. I don’t care how big the margins on the bandwidth prices are as long as the end price is competitive. I would understand that I “pay for what I use”. Unfortunately, this would skew the total costs for different scenarios dramatically, so I don’t expect it to change in the foreseeable future.

At the very least, I would like to see this cost structure explained clearly. I figured it out, but it took me some time, and from the conversations I had on Twitter it’s pretty clear that no one except those who are deep into Azure (MVPs, MS people) could answer my question by just looking at the site.

Here’s my feedback on what could and should be improved (if changing the pricing structure is not a feasible option):

  1. Explain what a “transaction” is and why it is billed. Why are public HTTP requests to Blob storage billed while HTTP requests to a Web Role are not? (I think “request” is a much better word that would save a lot of explaining.)
  2. Mention the public HTTP blob serving scenario explicitly. Billing for it is confusing in the current state of things, and use of the word “transaction” clearly doesn’t help.
  3. Include a transaction cost slider in the CDN portion of the advanced pricing calculator. Currently it’s really confusing and looks like a hidden cost. Even Azure MVPs were fooled by it; it’s obvious that mere mortals would be too.

So, this is what I think. I know it looks like a rant, but I just want to make the product better and provide feedback from an average customer’s point of view. This cost structure makes Azure’s and Amazon’s CDN (and storage) services unattractive to projects serving large quantities of small files, and I’m just not sure there’s a good reason for Microsoft and Amazon to shut these customers out.


Upcoming Trips

1/20/2012 7:26:20 PM

I’m 36 and, believe it or not, until 2 years ago I had never been on anything that could pass as a business trip. I went on 1 such trip in 2010, 5 in 2011, and in February 2012 alone I’m going on 4 (well, technically the “month” stretches from January 31st to March 1st, but who’s counting?).

So here’s a list. Come say “hi”, if you are nearby.

UK Windows Phone User Group (January 31st, 2012)

image

I will be presenting my “Developer’s Guide to Windows Phone App Marketing and Monetization” at the January meeting of WPUG in London. It’s free, and there was even a careless promise of a free round. Really no reason not to come. Plus, all you Brits get a chance to make fun of my accent too.

TechCrunch Baltics, Riga, Latvia (February 9th)

image

I was fighting my conscience over whether I should go, but it said I should get out of my comfort zone of hanging out with developers and go hang out with entrepreneurs, angels, and VCs instead. So, here we go.

MS TechDays, Belgium (February 14th-15th)

image

After booking this, I realized it’s going to be the first Valentine’s Day without my wife in the ~18 years we’ve been together. But ScottGu is keynoting, so what can I do, honey!?

Mobile World Congress, Barcelona (February 27th – March 1st)

image

I was too cheap to shell out 2000+ euro for the full pass, especially since it mostly consists of what look like boring sessions by telco CEOs. So I’m going on an Exhibition Pass, which covers the App Planet (sub-)conference; that should be the most interesting part for me anyway. There will be a Nokia developer conference on its first day. It should be interesting, but you have to apply for it and be approved by the organizers (I think), so we’ll see how it goes. Looking forward to it, and to at least +15°C in February!

Are you coming to any of these events? Comment here, drop me a line, or ping me on Twitter. And if you see me there, don’t hesitate to say “hi”!


Newsflash: You can’t track everything

1/17/2012 8:28:32 PM

3063463065_92724c4379_z
Photo by Konstantinos Papakonstantinou

Back in the pre-internet days, advertisers could hardly track anything, so they had to calculate the ROI of their offline ad campaigns based on assumptions, approximations, or secondary data. They knew their data wasn’t accurate, so they understood that the conclusions they drew from that scarce data weren’t facts, just their best educated guesses.

These days on the internet we have referrers, cookies, and other tools that let us track the whole path of our users: from our ad somewhere, to their first visit, to the purchase of our product. Quite often we can indeed see that customer A came from site B, looked through our site, returned a few days later, and made a purchase. Hooray!

Based on this data, we start to believe that we can track everything, and that we can now measure the ROI of our campaigns by simply comparing the money we spent on them with the sales they generated, according to our tracking/analytics software. This way of measuring success is prevalent in blogs, podcasts, and books on entrepreneurship these days, and we are used to treating it as the absolute truth. Because we have the data to prove it!

Unfortunately, we can only track something, not everything.

Let me give you a couple of examples.

We track the sales funnels for amCharts. We get pretty good data for quite a large portion of sales and can tell where they originated. That said, the most popular source of sales is a Google search for “amcharts”. Yes, “amcharts”. Not just “charts” or some other generic term, but our exact name. This means that the majority of sales come from people who already knew something about amCharts. This could be someone who heard about amCharts from a friend. Or someone who clicked on our ad while doing chart library research at home on his iPad and came back via Google search from his computer at work the next day. Or a CEO (or some other guy with a credit card) who was told by his developer to buy amCharts. Any of these sales could have originated from a campaign that, based on the tracking data we have, could’ve been declared a complete waste of money.

Another example, from a different angle. One of the best music albums I bought last year was Velociraptor! by Kasabian. Let me try to trace the chain of events that led me to the purchase. I had heard about the band and some of their songs before, but had never bought any of their music. The catalyst for the purchase was a remix of the song “Days Are Forgotten” by DJ Z-Trip. I heard it on Z-Trip’s site, then went to the Zune store on my PC a couple of days (or even months) later and bought the album. I’m pretty sure there’s no trace of this chain anywhere. So Kasabian’s record label (or whoever cares) has no idea that the money spent on commissioning Z-Trip and LL Cool J to do the remix resulted in a sale. But let’s go deeper. Why did I go to Z-Trip’s site in the first place? Because he was DJing at the party at the MIX11 conference I attended last year. So I guess part of the referral credit should go to Microsoft? But why did I pay attention to the name of the DJ at MIX11 when I have no idea who was DJing at MIX10? Because I already knew who Z-Trip was, even though I had completely forgotten by that time. Back in the early 2000s I listened to Linkin Park a lot, and their lead vocalist did vocals on one of the songs on Z-Trip’s album. And I don’t know who was responsible for turning me onto Linkin Park.

As you can see, the human mind can trace some events back along a chain that no tracking software can pick up. In the case above, the software failed at the very first step, which would definitely be of interest to the band’s management.

The bottom line: the fact that we can track something gives us the illusion that we can track everything. The next couple of times you buy something online, try to analyze whether the seller can trace your purchase back to the original source of your interest in the product. And when you notice that they can’t, think about your own campaigns and how well you really know their ROI.


The Intricacies of Azure Blob Storage Pricing

1/16/2012 6:50:42 PM

We are in the process of designing major new features for AdDuplex, so we were discussing some implementation/architecture choices for a future release. Part of the implementation we were planning to pursue included serving content over HTTP directly from a public container in Azure Blob Storage.

Since Azure (like most cloud solutions) is a “pay for what you need/use” type of arrangement, we had to look at the pricing for Windows Azure Blob Storage and decide whether our proposed implementation was the best choice from a cost-effectiveness perspective.

The price of blob storage consists of 3 components: storage space, bandwidth/traffic, and transactions. Space pricing is clear, bandwidth pricing is clear, but what is a “transaction”?

The explanation next to the transaction slider in the pricing calculator doesn’t help much:

You pay based on the average amount of data you store during a billing cycle and the number of read/write transactions you make to it during that period.

Again: what’s the definition of a transaction in this context? Browsing the Windows Azure site doesn’t help much.

The most comprehensive resource on the web explaining Azure Storage billing in detail (at least the one I was able to find) is this blog post by Brad Calder from 1.5 years ago. Let’s see if it helps.

We finally have a definition of a transaction:

Transactions – the number of requests performed against your storage account

Judging by this succinct definition, we may conclude that each request for a PDF we’ve posted publicly to blob storage counts as a transaction. OK, but let’s read further (emphasis mine):

Each individual Blob, Table and Queue REST request to the storage service is considered as a potential transaction for billing. Applications can then control their transaction costs by controlling how often and how many requests they send to the storage service.

OK, so if I place a PDF into blob storage, it’s accessible publicly as http://mystorageaccount.blob.core.windows.net/mycontainer/mydoc.pdf, and it then gets picked up and linked to by CNN.com, there’s no way I can control “how many requests I send to the storage service”. So I would guess this case should not be subject to transaction billing.

Then we have this:

Each and every REST call to Windows Azure Blobs, Tables and Queues counts as 1 transaction (whether that transaction is counted towards billing is determined by the billing classification discussed later in this posting)

Again, there’s no definition of what is considered a “REST call” here, but a GET request over HTTP is probably a “REST call”, right? By this point I was totally confused, so I decided to let Twitter enlighten me. After some back-and-forth, Neil Mackenzie (an Azure MVP) concluded:

image

OK, I believe Neil, but how am I supposed to know that someone accessing my public file counts as “a single GetBlob request to the blob services”? Still, at this point I was convinced that each such request is basically billed twice: once for the bandwidth and a second time for the transaction. But just to make sure, I decided to file an official support request to clarify this once and for all, and I got this answer:

As discussed over the call every single request coming to our blob is considered as a transaction. Hence we count this transaction as a storage transaction and this component will be shown in the invoice.

So, there you have it. I’m not sure if the concept is logically flawed or just a structured way to charge you more, but I’m sure it has to be explained in simpler terms and should definitely cover this simple public blob scenario.

Don’t get me wrong, I understand that if there’s a cost associated with each transaction, someone has to pay for it. It’s perfectly clear with bandwidth or storage: you can argue about the prices, but it’s pretty obvious they cost something. But with these “transactions”… I don’t know. I need more clarity.

What do you think? Is this a logical billing structure? Do you understand where the per-transaction cost to Microsoft, which is then passed on to you, comes from in this scenario? Should you be charged a transaction fee for each request for a public image on your site? Honestly, it doesn’t make much sense to me.


My Startup Series: How I Built and Sold almost-Digg 5 Years Before Digg

1/10/2012 8:26:59 PM

After my first startup was killed by evil IP thieves, I lost faith in entrepreneurship… I’m just kidding. I was just finishing school, then university, then getting married, then getting my first “real job” at a bank, etc.

422697043_6fc7d03cd7_b
Photo by Joe Shlabotnik

The Meeting

By 1999 I was working at a small company (with a big name). There was a huge financial crisis in Russia, and our CEO had bet heavily on several projects that fell through due to the events in our eastern neighbor. So the salary was always a couple of months behind. But we were expecting our daughter, so switching jobs wasn’t on my radar at the time. Instead, I set out on a mission to find some side work.

I responded to an ad from a local company looking for freelancers to work on some web project for some US company. I was offered the job, as was one other guy. We met to discuss the project a couple of times (I’m not even sure I remember what it was), and then we were told that the project had fallen through and our services were no longer needed. Little did I know that I would end up working with the dude to this day.

So we were out of our freelancing gig, with nothing to replace it, but still willing to do something.

The most popular site on the internet at the time was Yahoo! (I think). And it wasn’t the huge behemoth it is now. It was mostly a manually managed directory of web sites. Yeah, at that time it was actually possible to manually manage a list of all the meaningful sites on the internet. You could navigate to a category of interest and see all the sites about, say, web development.

That was great, but how would you know when one of these sites posted new content? Believe it or not, there were no RSS readers (or RSS feeds, for that matter) at the time. So the only way to know when there was a new article on 4 Guys from Rolla, a hugely popular ASP developer site of the day, was to actually visit the site.

AC not DC

So my idea was to create a directory of content for web developers. Or, as we called it, “The Content Directory for Web Professionals”. I pitched the idea to Martynas after he promised not to screw me over and implement it without me. A classic first-time entrepreneur move. Fortunately, he thought it was a good idea too, and turned out to be a cool guy in general.

We started working on the project. Martynas did the public part of the site, and I did the administrative part. It’s funny that even in 1999, coming up with a decent .com domain name that wasn’t taken was not easy. After a lot of deliberation and domain name checks, we settled on ArticleCentral.com.

One day in 1999, ArticleCentral went live.

image

For the next several years we did daily rounds of the sites in our database and [selectively] listed new articles. Users would come to ArticleCentral, check the new articles, suggest other articles, and rate them (sound familiar?). It was possible to filter articles by category and rating, and to search through our article database. We even had a “tracker”: a piece of JavaScript that you could embed into your own site to show the newest content from ArticleCentral. I totally forgot about that and, frankly, was shocked when I remembered that we had it in 1999 :) One may argue that the web hasn’t come a long way since then.

image

Later on, we added a sister site for hardware articles and reviews.

We had several mailing lists sending out thematic updates to thousands of web developers and designers. We wrote editorials for our weekly newsletters, and we had a weekly poll. After several years, coming up with editorials and poll ideas became a real chore. Fortunately, later in the life of the project we were approached by a young guy (I think he was still in high school at the time) who was willing to write the editorials and think up new poll ideas, and we happily delegated these to him. After ArticleCentral he got “promoted” to HotScripts, where he still blogs regularly.

We sold quite a bit of advertising on the site and in the mailing lists, at rates that would make any modern content publisher salivate. Unfortunately, traffic at the time was a joke by 2012 standards, so great rates didn’t materialize into nice red Ferraris and beach houses.

The Exit

Anyway, by 2001-2002 the dotcom era was long over. We were pretty bored with the project, and it was too early (on the internet timescale) for us to come up with something that would transform AC into what later materialized as Digg. We decided it was time to make an EXIT, even though we didn’t know the term at the time. So we just published a splash page on the site saying it was for sale.

This was a long shot, but we were contacted by a couple of parties and, while I was on vacation in Turkey in September 2002, we closed the deal. I doubt I’m allowed to disclose the amount, but let’s just say that it paid for the vacation and I still had some change left.

This concludes the story of how I became a serial entrepreneur with one successful exit. (Haha. Sounds cool when I put it this way.) But I have a couple more startup stories up my sleeve.


My Startup Series: How Intellectual Property Theft Killed My First Startup

1/6/2012 6:28:06 PM

I got my first computer when I was about 13-14. It was a Sinclair ZX Spectrum Plus. I had it hooked up to a black-and-white TV that was probably smaller than my current phone. Well, maybe not the phone, but probably smaller than my Kindle. And you had to load software from cassette tapes.

spectrum_plus
My first computer. Photo from Planet Sinclair.

The USSR was living out its final years, but it was still the USSR. There was no way to buy legal games or applications for the computer. To get some games, you had to go to some basement and pay for the service of having pirated games recorded onto your own cassette (getting cassettes was no small feat either, but that’s another story). Another option was to copy games from friends or from a “pusher”: someone who didn’t own a basement but was selling pirated games anyway.

A friend of mine knew such a pusher. But at the time my parents bought me my ZX Spectrum, the guy was away and I couldn’t get any games. All I had was the computer manual. The funny thing is that computer manuals of the time had programming tutorials right in them. So, out of boredom, I taught myself some basic BASIC. This has probably defined my whole life, along with the fact that I basically don’t play games.

511
Scan of the Sinclair ZX Spectrum Plus Manual page from Retronaut.

Anyway, the pusher came back and delivered some games, and I played them, but I was already hooked on programming.

After some small-scale projects I set out to make a game. At the time, the most popular TV show in the USSR was a “Wheel of Fortune” rip-off called “Поле чудес” (The Field of Wonders), so it was only natural that I wanted to make a computer game out of it. I don’t recall how much time I spent on it, but after a while it was ready, and I hosted a game with my parents and their friends. One of my father’s childhood friends was a programmer, and he complimented me on the game, so I thought I was an awesome developer. I showed the game to my “pusher” and he complimented me on it too. He even asked me to record a copy for him so he could play at home.

Поле_Чуде

I was young, I was born in the USSR, and I had no entrepreneurial aspirations at the time. I had just made a product and was happy when people told me it was cool.

One day I went to a “basement software store”. There were printed catalogs of all the pirated games and applications you could get recorded onto your cassettes. I noticed a The Field of Wonders entry on a list made by someone else and was excited to see what other programmers had done and how my game stacked up against theirs. So I paid the guys to record that game for me, among others, and went home.

When I loaded the game, my jaw dropped. It was my own game, with all the copyrights and logos replaced with someone else’s. When my friend came over, he recognized the name of the “company” as the one our “pusher” used. The guy had just taken my game, “rebranded” it, and made some money. I’m pretty sure he didn’t make anything worth mentioning, but I didn’t make anything at all. I actually lost a few cents by paying those basement pirates for my own game! So I was pretty upset, but I didn’t care much. I was even proud that my software was good enough for someone to steal and rebrand. I didn’t buy games from that pusher anymore, though.

That’s the story of my first startup and one of the milestones that lets me pretend to be a serial entrepreneur. I’ll blog about my later endeavors in future posts.


Not in Love with Connected TV Idea Anymore

1/2/2012 6:46:07 PM

image

In the spring of 2010 I bought a Samsung TV with the Internet@TV feature (read my review here). I loved it, and I loved the fact that I had all the media playing and online stuff in one unit and didn’t need to bother with extra wires and remotes. At the time, I easily ignored the fact that the 2009 models didn’t get the software update to the new system and stayed on the previous version.

Fast forward just one year, and Samsung has released new models with the Smart TV feature (and they’ll probably announce a new generation at CES in just a few days). I’m not sure what the difference is, but it’s new and I’m not getting it on my TV (at least that’s what I’ve been told by a Samsung representative). My TV still works just fine, I still like it, and I haven’t experienced any problems with outdated software, codecs, etc… yet.

I’m not sure why Samsung abandons its TV customers even faster than its Android phone customers, but I’m pretty sure that even though I’m comfortable upgrading my phone every 2 years, there’s no way in hell I’m upgrading my TV every 2 years. Maybe their endgame is to change the mentality so that people become comfortable upgrading TVs every 2 years, but I seriously doubt that’s doable. They’d have to make us walk in circles in the desert for 40 years until we all die and a new generation accepts the idea.

I expect my TV to “last” for at least 5 years. And there are no signs I will miss anything in it over that period, except advancements in the internet connectivity/media playing areas.

I assume the problem is not just the greed of electronics manufacturers, but also the fact that processing power, storage, etc. are secondary functions of a TV, and in a competitive market they can’t afford to make the hardware future-proof.

At the same time, I can’t afford to upgrade a $1000+ TV just to refresh a feature performed perfectly well by a sub-$100 device. I’d rather throw away that $100 thing when it becomes outdated, buy a new one, and connect it to the same 2-year-old TV.

wdfWDTV_Live_G3

Another option is to buy a “gaming” console like the Xbox 360 or PS3 (or maybe wait for the next generation and buy that). These are more expensive, but, unlike TVs, they can (and actually have to) afford to invest in hardware that’s future-proof for 5-7 years.

In any case, unless the situation changes, I’ve lost my love for the connected TV idea and think that until the industry reaches that boring stagnation phase, the concept doesn’t make sense. Unless Apple manages to take the idea and somehow make it sexy.


Windows Phone App Promotion and Monetization Talk Video

12/29/2011 6:04:58 PM

Here’s a video of the talk on WP7 app promotion and monetization I gave a couple of weeks ago at the Lotus 8 conference in Riga. I hope you find it useful in marketing your apps.

The talk starts at about the 1h 54min mark. Unfortunately, I don’t know how to embed a video with a starting position, so you may need to seek forward or just watch it directly on YouTube.
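For what it’s worth, YouTube’s iframe embed is supposed to accept a start offset in seconds, so something like the following sketch should do the trick (VIDEO_ID is a placeholder for the actual video ID; 1h 54min is roughly 6840 seconds):

```html
<!-- Assumed pattern: "start" is an offset in seconds from the beginning;
     replace VIDEO_ID with the real ID of the video. -->
<iframe width="560" height="315"
        src="http://www.youtube.com/embed/VIDEO_ID?start=6840"
        frameborder="0" allowfullscreen></iframe>
```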


Porting Old BlogEngine.net Comments to Disqus

12/29/2011 10:45:36 AM

Yesterday I finally managed to upgrade this blog to the latest version of BlogEngine.net. Let me know if you notice any issues related to that.

The process was smooth and easy, except that I wanted to move commenting to Disqus at the same time. Enabling Disqus comments was easy too, but moving the old comments up there was not.

First I found this method and tool. It looked like it worked at first, but the comments didn’t show up in Disqus. Then I figured out that BE.net uses the permalink for Disqus URLs while this tool was using the “friendly” link. So I modified what it was exporting, and then it just started crashing when trying to upload the comments to Disqus.

Then I found this method and tool. It takes a standard export from BlogEngine.net in BlogML format, extracts the comments from it, and saves them in WRX format that can be imported into Disqus. Unfortunately, it uses the same “friendly” URLs, while BE uses permalinks as identifiers for Disqus threads. It’s obviously possible to modify the BE code to use the same URLs, but that’s not future-proof (among other issues).

So finally I decided to make a small utility that takes the WRX generated by the above-mentioned tool and the original BlogML, and replaces the “friendly” URLs with permalinks in the WRX. It’s very primitive and not flexible, so I’m posting the source here instead of a binary.

using System;
using System.Linq;
using System.Xml.Linq;

namespace BlogML2WRXFix
{
    class Program
    {
        static void Main(string[] args)
        {
            // args[0] - path to the WRX file (fixed in place)
            // args[1] - path to the BlogML export
            // args[2] - host prefix for the output URLs
            var wrx = XDocument.Load(args[0]);
            var blogML = XDocument.Load(args[1]);
            XNamespace blogMLNS = "http://www.blogml.com/2006/09/BlogML";
            string hostPrefix = args[2];

            foreach (var wrxItem in wrx.Descendants("item"))
            {
                var linkNode = wrxItem.Descendants("link").First();

                // URLs in my WRX included an extra slash at the beginning
                string linkUrl = linkNode.Value.Substring(1);

                // Find the post in BlogML by its "friendly" URL and swap
                // the link for the permalink form (post.aspx?id=...)
                string postId = blogML.Descendants(blogMLNS + "post")
                    .First(p => p.Attribute("post-url").Value == linkUrl)
                    .Attribute("id").Value;
                linkNode.Value = String.Format("{0}post.aspx?id={1}", hostPrefix, postId);
            }

            wrx.Save(args[0]);

            Console.ReadLine(); // keep the console window open
        }
    }
}

It takes the path to the WRX file as the first parameter, the path to the BlogML file as the second, and the host prefix to use for the output URLs (like http://devblog.ailon.org/) as the third.


Copyright © 2003 - 2017 Alan Mendelevich
Powered by BlogEngine.NET 2.5.0.6