A Personal Rant on Trading Bitcoin

I would say my trading practices thus far can be summed up in three words: “I’m an idiot.”

It’s up. It’s down. Oh my gosh, I could have made $10k. Oh no, I’ve lost $300. Ack! What does everyone else think?… Just an average day in a Cryptocurrency trader’s life. But not in the above-average trader’s life. Much of the same advice you’d get from a financial planner applies to Bitcoin and other currencies.

Research the Company

Who are the developers of the currency you’re thinking about buying? Who are their investors? Have they successfully launched a currency before? Have previous projects they worked on failed in spectacular fashion?

There are a lot of motivations for attempting to create a new Altcoin. Notoriety, solving social or economic problems, and greed are some of the most popular themes in Cryptocurrency. Following the successes and failures of a development team will help you figure out what motivates them. Don’t get stuck thinking greed is a bad motive. Several self-interested projects made a lot of money for the development team and the early investors who knew when to sell.

Research the Product

The development teams are going to market their Cryptocurrency to garner investment interest, adoption, and higher trading prices. That makes it easy to find information like what problems they’re trying to solve or what new Blockchain technology they’re trying to introduce. Are they bringing a solution to Apple products or mobile devices where others aren’t?

Invest for the Long Term

If your full-time job is staring at charts and day trading, you can still do that with cryptocurrencies. You just need to adjust to the increased volatility. By volatility I mean 40% up or down in a day… 30 minutes, even. But if you’re trading on the intraday bumps, you might find a higher portion of your profits going to fees and spreads. So, I say invest for the long term. If I had followed the advice in these sections, I’d have a lot more disposable income.
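
To put numbers on the fee problem, here’s a quick back-of-the-envelope sketch in Python. The 0.25% per-side fee is an assumed example rate, not any particular exchange’s schedule.

```python
# Rough sketch: what a 1% intraday bump nets you after exchange fees.
FEE_RATE = 0.0025        # 0.25% per trade (hypothetical rate)

position = 1000.00       # buy $1,000 of a coin
move = 0.01              # catch a 1% intraday bump

buy_fee = position * FEE_RATE                   # fee on the way in
gross = position * (1 + move)                   # value after the bump
sell_fee = gross * FEE_RATE                     # fee on the way out
net = gross - sell_fee - position - buy_fee

print(f"gross gain: ${position * move:.2f}")    # $10.00
print(f"fees paid:  ${buy_fee + sell_fee:.2f}") # about $5
print(f"net profit: ${net:.2f}")                # about $5, half the gain
```

Half of that small gain goes to the exchange. On a long-term hold, the same two fees are spread across a much bigger move.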

Personal Stories

I met a guy while working at Dell who told me the story of the $300,000 240 MB hard drive he bought. Yes, MB. He cashed in some of his employee stock purchase plan shares for a new hard drive back when the stock wasn’t worth all that much. By the time he told me the story, the hard drive was worthless and the stock he’d sold to buy it was worth $300,000. Oh, how we laughed. And now I’ll relate for you the story of the guy who bought a pizza with Bitcoin when it was worth pennies; that Bitcoin would now be worth millions. You’d think I’d learn from others, but I too have purchased a $700 tablet for what is now $4,000 worth of Bitcoin at today’s prices.

But I think more disappointing are the opportunities I missed due to fear.

Stratis is an Altcoin someone pointed out to me in December of 2016. The price was under $0.05. I thought, well, let’s wait and see what happens. The interesting thing about Stratis is the development team’s partnership with Microsoft and the decision to build their platform on the .NET Framework. This means the products a developer would write to interact with their Blockchain technology can run natively on Windows operating systems without a lot of additional translation or “wrapper” code. The price went up to something over $0.07 and I said, “OK, I’ll buy some,” and invested $300. I woke up one morning a few weeks later and the price was over $0.30; it has been hovering between $0.40 and $0.50 for the last two weeks. The currency had a lot of the earmarks of a good investment, and I kick myself for not putting in $1,500 or more at the $0.07 price.

DASH, which launched as Dark Coin, is a currency I used to mine. The name Dark Coin certainly sounded cool to the kids, and it was marketed as the first truly anonymous currency because the network had a function called mixing, where your coins could be split up and mixed with fractions of other Dark Coin on the network without additional entries in the blockchain, removing the traceability of the transactions. When fintech investment in Blockchain technologies started becoming serious business, they grew up and changed the name to DASH. I had mined 8 DARK when I had a hard drive failure and said, well, I won’t bother with that currency anymore. At that time, DASH was only worth around $1.00, so I was out maybe $10. Around the same time Stratis had its big jump, DASH went to $100 and has stayed above $50. Now why didn’t I keep mining when the difficulty was low and amass a vast fortune? I was able to restore my Dark Wallet from a backup and retrieve my 8+ DASH, but I could have had 100 over the course of that year.

Check out the stellar rise of PIVX. I looked at it when it was less than $0.03. It’s trading at $1.38 today… $1,500.00 would be worth over $100,000, and it happened extremely fast.

Stay tuned!

Wikipedia – Bitcoin

https://en.wikipedia.org/wiki/Bitcoin

Bitcoin Forum – The most popular place to discuss all Cryptocurrencies

https://bitcointalk.org/index.php

Cryptocurrency Trading Charts

https://coinmarketcap.com/

Most Profitable Mining Calculations

http://www.coinwarz.com/cryptocurrency

Some Exchanges

https://poloniex.com/

https://btc-e.com/

https://www.gdax.com/

https://www.bittrex.com/

Cryptocurrency Mining Software and Pools

In “Mining Cryptocurrencies” I wrote briefly about CPU, GPU and ASIC mining. All of these mining methods require software to get work (the complicated math problem) from the Cryptocurrency’s network and send it to the hardware to calculate.

Also, note there are several different kinds of math problems currencies use, related to their protocols. I won’t go into a lot of detail: first because I’m not a math whiz, second because I could devote several posts to a single protocol and there are several, but most importantly because that’s not the approach I’m taking in my blog. I’m here to help someone get started who doesn’t need to know every last tiny nuance of every Cryptocurrency.

I will, however, list some currency and protocol pairs. Bitcoin uses SHA-256; Litecoin and DNote use scrypt; Dash (which used to be Dark Coin) uses X11; Ethereum uses Ethash; and ZCash uses Equihash. If you’ve taken my advice in previous posts, you’ve checked out http://www.coinwarz.com/cryptocurrency and now you know what some of the items on that page are.
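
To make the “complicated math problem” concrete, here’s a toy Python sketch of the SHA-256 proof-of-work search a Bitcoin miner performs. Real miners hash an 80-byte binary block header and compare the result against a numeric target; this sketch hashes a string and looks for leading zeros just to show the shape of the loop.

```python
import hashlib

def mine(header: str, difficulty_zeros: int) -> int:
    """Find a nonce whose double SHA-256 digest starts with N hex zeros."""
    target_prefix = "0" * difficulty_zeros
    nonce = 0
    while True:
        payload = f"{header}{nonce}".encode()
        # Bitcoin runs the header through SHA-256 twice.
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce
        nonce += 1

nonce = mine("example block header", 4)
print(f"found nonce {nonce} after {nonce + 1} hashes")
```

Each additional leading zero multiplies the expected number of hashes by 16, which is why this work gets offloaded to GPUs and ASICs.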

Going back to mining software… As I stated above, the hardware performs the math, but software is required to gather the work and send it to the hardware. That means a device is required to run the software and communicate with the hardware. In the case of CPUs and GPUs, this usually means a computer with a hard disk to install the software on. The first versions of ASIC miners were USB devices plugged into computers, and the software would detect them and send work to them. Standalone ASIC devices still have a CPU, network interface, and software, but that software is flashed to a chip or written to an SD card plugged into an integrated computer like a Raspberry Pi.

Nearly all mining software is available for free from Github.com. Start by either going to a mining pool or the coin’s community page to find links and instructions. Standalone ASIC miners have their own software. Upgrades are usually available from the manufacturer.

In most cases the software only provides mining for one protocol. That’s not always the case, though: some developers have created software that can receive a command from a multi-currency pool telling it to switch which currency it’s mining.

Mining software is almost exclusively written in C++, which doesn’t mean a lot to many people, but it allows for two main advantages. First, the software can easily be built for multiple operating systems, Linux and Windows being the most popular. Second, the software is modular; it can be broken up into pieces. Developers can take a miner currently available on Github for one currency and replace the parts they need for another currency, and all the existing support for GPUs and operating systems comes along for the ride. Likewise, if a new family of video cards is released, it’s easy to add a new piece of code to support those cards. When you’re looking for mining software, make sure you download the right package for your operating system and your video card family (Windows/Nvidia or Linux/AMD, etc.). Some software packages include both video card families in each OS package, but you’ll find from reading reviews that one program might work better with your hardware than another.

There is precious little mining software for Apple products. Mostly because Apple sucks. Yeah, I said it. But also because you can’t add or upgrade the video cards for GPU mining, and Apple locks down what software is made available for its systems. I guess the company is scared mining software might overheat the CPU.

As a funny side note, my son was actually trying to mine Litecoin on some of our old Android phones. He had to place them under box fans to keep them from overheating, and in the end he never made enough to equal $0.01, but the price of Litecoin is on the rise again, so who knows.

Solo vs. Pool

I introduced a new term, “pool,” above, so now is a good time to talk about solo vs. pool mining. Solo mining means you use your hardware to mine blocks directly on the blockchain. This can be profitable for about the first 10 minutes a new currency’s network is up. Once the currency becomes popular and there is the inevitable handful of miners who seem to have invested $1 billion in hardware to have the best Hash Rate, it’s time to find a pool.

Pool mining means miners pool their Hash Rates, or combine their work. The pool itself gets the block reward and divides it among its member miners per the amount of work each contributed to finding the block. There are several different methods of determining how much of the reward each miner gets, but in general the miner who does the most work gets the highest percentage. One of the bits of information you’ll get from Coinwarz.com is how much currency you should generate per day. Keep in mind that with solo mining you only receive a reward when you find the block, but then you get the whole reward. You may not actually get that reward for several weeks… months? In the case of Bitcoin, unless you have spent $1 billion on your mining farm, you will never see a reward. However, with pool mining, because you earn some of the reward every time a block is found, you should see your balance growing at the rate Coinwarz.com has calculated for you.

In the early days, when most Bitcoin enthusiasts were altruists and rebels against the world, all the software was open source and the pools were free. That’s not 100% the case anymore. Some of the best mining software for Altcoins has a DevFee built in: for some part of your mining day, the software will disconnect from your pool and connect to the developer’s pool and account and mine for them, to reimburse them for the time they spent developing the awesome software you’re using. Likewise, mining pools almost universally charge a fee, 1 or 2% of your earnings, to pay for the upkeep, fees, and maintenance of the servers you’re using.
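
Here’s a minimal sketch of a proportional pool payout, assuming an example 12.5 BTC block reward, the 1% pool fee mentioned above, and made-up share counts:

```python
# Proportional payout: each miner's cut of the block reward matches the
# fraction of the accepted shares (work) they submitted, after the pool fee.
BLOCK_REWARD = 12.5      # example block reward
POOL_FEE = 0.01          # 1% pool fee

shares = {"alice": 7200, "bob": 1800, "you": 1000}   # accepted shares
total_shares = sum(shares.values())
distributable = BLOCK_REWARD * (1 - POOL_FEE)

for miner, count in shares.items():
    print(f"{miner}: {distributable * count / total_shares:.8f}")
```

Real pools use fancier schemes (pay-per-share, pay-per-last-N-shares, and so on), but they’re all variations on dividing the reward by contributed work.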

Up next a look at networks and wallets.

Stay tuned!

Coming Home

Almost 2 months ago I had my first day at Avanade. For those of you who don’t know, Avanade was created as a joint venture between Microsoft and Accenture. Avanade has their own business development streams, but 99.9% of the Microsoft projects Accenture wins are sent to the Avanade team for execution.

Well, let me just say what an absolute joy it has been to come back to the Microsoft family of products. After 13 months of wasting my life away fighting with Open Source garbage, I’ve come home to integrated enterprise solutions that work as advertised, or at least have some reliable sources for support when they don’t. I was actually told to stop blogging about how much the Open Stack is a waste of time and money… Anyway, that’s behind me.

To add to the good vibes, Avanade is connected to Microsoft in so many ways. We’ve actually had advance looks at new technologies before the rest of the community. There are 20+ MVPs in just the Midwest region, Avanade requires 80+ hours of training every year, and employees are encouraged to participate in developer community organizations.

I’m excited to talk about the first area of expertise they’d like me to look at: Avanade Touch Analytics (ATA). I haven’t completed the training yet, but this offering is fantastic. It has the easiest interface I’ve ever used to create dashboards that look and feel like Tableau or Spotfire but perform light-years ahead of both. Once the data sources are made available to the ATA server for a customer’s instance, the dashboards can be authored for, or on, any device. Switch between layout views to see how your dashboards will look on any device before releasing them. Publish multiple dashboards to different Active Directory security groups and let your users pick the information that’s important to them. It’s exciting, and I’m glad to see an offering addressing the shortcomings of the competition in both hosted and onsite installations.

Well, that’s enough advertising. Now that my censorship is at an end, I’ll be blogging more often. I really want to discuss SQL Server’s memory-resident database product, interesting things I’ve learned about the SSIS service recently, and Service Broker.

I’m not a DBA, But I Play One on TV: Part 2 – CPU and RAM

In Part 1 I discussed SQL Server and hard disk configurations. Now let’s have a look at CPU and RAM. This topic is actually kind of easy: more is better… most of the time.

CPU

It’s my opinion that most development environments should have a minimum of four 2.5+ GHz processor cores. Whether that’s one socket with four cores or two sockets with two cores each doesn’t really make that much of a difference. For a low-utilization production system you’ll need eight 2.5+ GHz cores. Look, you can get this level of chip in a mid-to-high-grade laptop. Now, if you’re looking at a very high-utilization system, it’s time to think about 16 cores, or 32 split over two or more sockets. Once you get to the land of 32 processors, advanced SQL Server configuration knowledge is required. In particular, you will need to know how to tweak the MAXDOP (Maximum Degree of Parallelism) setting.

Here’s a great read for setting a query hint: http://blog.sqlauthority.com/2010/03/15/sql-server-maxdop-settings-to-limit-query-to-run-on-specific-cpu/

And here are instructions for a system wide setting: http://technet.microsoft.com/en-us/library/ms189094(v=sql.105).aspx

What does this setting do? It controls the number of parallel processes SQL Server will use when servicing your queries. So why don’t we want SQL Server to maximize the number of parallel processes all the time? There is another engine involved that is responsible for determining which processes can and cannot be done in parallel and the order of the parallel batches. In a very highly utilized SQL Server environment, this engine can get bogged down. Think of it like air traffic control at a large airport… but there’s only one controller in the tower and it’s Thanksgiving, the biggest air travel holiday in the US. That one air traffic controller has to assign the runway for every plane coming in and going out. Obviously, he or she becomes the bottleneck for the whole airport. If this individual only had one or two runways to work with, they wouldn’t be the bottleneck; the airport’s architecture would be. I have seen 32-processor systems grind to a halt with MAXDOP set at 0 because the parallelism rule processing system was overwhelmed.
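
Here’s a sketch of both MAXDOP knobs from the links above, driven from Python via pyodbc; the server name and ODBC driver version are placeholders for your environment.

```python
import pyodbc

# Connect with autocommit on: sp_configure/RECONFIGURE shouldn't run
# inside a transaction. SERVER and DRIVER values are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=master;Trusted_Connection=yes;",
    autocommit=True)
cur = conn.cursor()

# System-wide setting: cap parallelism at 8 processors per query.
cur.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")
cur.execute("EXEC sp_configure 'max degree of parallelism', 8; RECONFIGURE;")

# Per-query hint: this one statement is limited to 2 parallel processes.
cur.execute("SELECT COUNT(*) FROM sys.objects OPTION (MAXDOP 2);")
print(cur.fetchone()[0])
```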

For more information on the parallel processing process: http://technet.microsoft.com/en-us/library/ms178065(v=sql.105).aspx

RAM

RAM is always a “more is better” situation. Keep in mind that if you don’t set the size and location of the page file manually, the O/S is going to try to take 1.5 times the RAM from the O/S hard drive. The more RAM on the system, the less often the O/S will have to use the much slower page file. For a development system, 8GB will probably be fine, but nowadays you can get a mid-to-high-level laptop with 16GB, and even 32GB is getting pretty cheap. For production, 16GB is the minimum, but I’d really urge you to get 24GB. And like I said, 32GB configurations are becoming very affordable.

To Proc or Not to Proc

I’ve had some interesting conversations and fun arguments about how to author queries for SQL Server Reporting Services (SSRS) reports. There are a lot of professionals out there who really want hard and fast answers on best practices. The challenge with SSRS is the multitude of configurations available for the system. Is everything (Database Engine, SSAS, SSRS, and SSIS) on one box? Is every service on a dedicated box? Is SSRS integrated with a SharePoint cluster? Where are the hardware investments made in the implementation?

Those are a lot of variables to try to make universal best practices for. Lucky for us, Microsoft provides a tool to help troubleshoot report performance. Within the Report Server database there is a view called ExecutionLog3, which links together various logging tables in that database. Here are some of the more helpful columns it exposes.

  • ItemPath – The path and name of the report that was executed.
  • UserName – The user the report was run as.
  • Format – The format the report was rendered in (PDF, CSV, HTML4.0, etc.).
  • Parameters – The prompt selections made.
  • TimeStart – Server-local date and time the report was executed.
  • TimeEnd – Server-local date and time the report finished rendering.
  • TimeDataRetrieval – Time in milliseconds to get the report data from the data source.
  • TimeProcessing – Time in milliseconds SSRS took to process the results.
  • TimeRendering – Time in milliseconds to produce the final output (PDF, CSV, HTML4.0, etc.).
  • Status – Succeeded, Failed, Aborted, etc.

I always provide two reports based on the information found in this view. The first report utilizes the time columns to give me insight into how the reports are performing and when the system’s utilization peaks. The second report focuses on which users are using which reports, to gauge the effectiveness of the reports for their audience.
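
As a sketch of those two reports (assuming the default ReportServer database name, Windows authentication, and placeholder server/driver names), the queries boil down to something like this:

```python
import pyodbc

# ReportServer is the default database name for an SSRS instance;
# SERVER and DRIVER values are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=ReportServer;Trusted_Connection=yes;")
cur = conn.cursor()

# Report 1: performance - where does the time go, and when are the peaks?
cur.execute("""
    SELECT ItemPath,
           DATEPART(hour, TimeStart) AS RunHour,
           AVG(TimeDataRetrieval)    AS AvgDataMs,
           AVG(TimeProcessing)       AS AvgProcessingMs,
           AVG(TimeRendering)        AS AvgRenderingMs
    FROM dbo.ExecutionLog3
    WHERE Status = 'rsSuccess'    -- successful executions only
    GROUP BY ItemPath, DATEPART(hour, TimeStart)
    ORDER BY AvgDataMs DESC;""")
performance = cur.fetchall()

# Report 2: audience - which users run which reports, and in what format?
cur.execute("""
    SELECT ItemPath, UserName, Format, COUNT(*) AS Runs
    FROM dbo.ExecutionLog3
    GROUP BY ItemPath, UserName, Format
    ORDER BY Runs DESC;""")
usage = cur.fetchall()

for row in usage[:10]:
    print(row)
```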

Generally I’m a big fan of stored procedures, mostly because my reports are usually related to a common data source and stored procedures provide me with a lot of code reuse. Standardizing report prompt behavior with stored procedures is also handy. A simple query change can cascade to all the reports that use a stored procedure, alleviating the need to open each report and make the same change. Additionally, I like to order the result sets in SQL, not after the data is returned to the report. But that doesn’t mean you won’t find better performance moving some functionality between tiers based on the results you find in ExecutionLog3.

I’m sorry, there just isn’t a one-size-fits-all recommendation for how SSRS reports are structured. Which means: 1) you’ll have to do some research on your configuration, and 2) don’t accept a consultant’s dogma on the topic.

GUIDs – Never for Clustered Indexes

Globally Unique Identifiers (GUIDs) have their place in software development. They’re great for identifying a library in the GAC or the Windows registry. From the database perspective, however, they are a huge data type.

Oracle, MySQL, Sybase, and DB2 do not provide any special data type for fields storing GUIDs; for these vendors a GUID is a 32-38 character string (depending on whether the dashes and “{}” are included). SQL Server provides a Unique Identifier data type, which has some benefits in storage and access speed over a 36-character varchar or nvarchar field. However, it’s still huge…
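
A quick Python check of those sizes (the uuid module generates the same 128-bit values):

```python
import uuid

g = uuid.uuid4()
print(g.hex, len(g.hex))            # 32 chars: no dashes or braces
print(str(g), len(str(g)))          # 36 chars: with dashes
print(f"{{{g}}}", len(f"{{{g}}}"))  # 38 chars: dashes and braces
print(len(g.bytes))                 # 16 bytes: SQL Server's binary storage
```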

Unique Identifier Data Type

http://msdn.microsoft.com/en-us/library/ms190215(v=sql.105).aspx

SQL Server’s Unique Identifier displays as a 36-character string (dashes, no “{}”) and stores a GUID as a 16-byte binary value. There’s no argument that it’s nearly impossible (though not mathematically impossible) to create a duplicate GUID, but how many data sources are going to outgrow a bigint (-2^63 (-9,223,372,036,854,775,808) to 2^63-1 (9,223,372,036,854,775,807))? That’s only 8 bytes, half a Unique Identifier. Hard disk space has gotten cheap, so why do we care about data type size anyway? The article mentioned above notes that indexes created on Unique Identifier fields are going to perform slower than indexes built on integer fields. That statement hardly scratches the surface of the performance implications of Unique Identifier indexes, and it’s all related to the size.

Pages and Extents

http://msdn.microsoft.com/en-us/library/ms190969(SQL.105).aspx

The above article explains how SQL Server stores data and indexes in 8KB pages. 96 bytes are reserved for the page header, there’s a 36-byte row offset, and 8,060 bytes remain for data or index storage. If your table consisted of just one column, a page could store 503 GUIDs, or 1,007 bigints, or 2,015 ints. Put another way, the fewer bytes in a row, the more rows you can store in one page. SQL Server doesn’t control where the pages are written on the hard disk; the O/S and hardware decide. The chance of consecutive or sequential pages being stored in distant disk sectors increases with the number of pages stored for each table or index in the system. As the number of index pages grows, the more out of sync they become with the data pages, leading to index fragmentation.
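
The arithmetic is easy to verify; here’s a quick Python check of the rows-per-page numbers for a hypothetical single-column table:

```python
# 8,060 usable bytes per 8 KB page divided by the key size gives the
# rows-per-page figures quoted above.
PAGE_BYTES = 8060

for type_name, size in [("uniqueidentifier (GUID)", 16),
                        ("bigint", 8),
                        ("int", 4)]:
    print(f"{type_name:24} {PAGE_BYTES // size:5} rows per page")
```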

Index Fragmentation

http://www.brentozar.com/archive/2009/02/index-fragmentation-findings-part-1-the-basics/

http://www.brentozar.com/archive/2009/02/index-fragmentation-findings-part-2-size-matters/

Let’s recap what we have so far:

  1. GUIDs are randomly generated values without any sequential nature or restrictions.
  2. GUIDs are twice as big as the biggest integer data type.
  3. The larger a table’s rows are, the more pages have to be created to store the data.
  4. The more pages an index has, the more fragmented it gets.
  5. The more fragmented the indexes get, the more frequently they have to be rebuilt.

Clustered Index Implications

Clustered indexes set the organization and sorting of the actual data in the table. Non-clustered indexes created on a table with a clustered index have to be updated with pointer changes as records are inserted or deleted, or the clustered index value updated because these changes require the data pages to be resorted and new keys generated. SQL Server Identity columns of an integer data type reduce a lot of I/O overhead and SQL server processing because the rows are always inserted in the correct order. GUID values are almost never inserted in the correct order because of their random nature. Thus, with a GUID clustered index every insert or delete or update of that field requires data page reorganization, non-clustered index updates, more pages to be created, and increased fragmentation.