sneakycrow

code, dogs, and teleporting, flying, cowboy elves.

So, for those of you who have been following, I kind of dropped off the face of the internet for several months. I've been through a lot of changes. Mostly a job change, and just being generally busy.

But I've finally updated my website, and I'm trying to get back to working on projects.

I wanted to elaborate on my most recent project, which I am investing most of my time (and even some money) into. It's called Imagine Dragons.

The idea of the project is to create an almost “novel” of DnD sessions for DnD players. Users can keep track of all their sessions and portray them in a story-like fashion, so other people can follow along and read them like a normal book.

We also want to enable features like allowing users to connect with artists, order what we're calling “Campaign Books”, and much more! It's very, very exciting.

Currently, we're still in the early stages. We're developing the backend of the application: standard authorization flows and the database structure. The stack is going to be Rust on the backend and React on the frontend, with a Postgres database.

I'm also working with a designer for branding. We've got a logo coming soon (I can't wait for this), as well as some general web designs and branding guides.

It's a super exciting project. If anyone would like to contribute or participate, join our Discord.

So, today I made some serious progress with my Forum API. I’m honestly super proud of myself. While most of the code is kind of a modified version of the example code on diesel, I was able to come up with some small stuff on my own.

I think the biggest takeaway today is that I learned a lot more about the module structure of Rust programs: when to use things like self, super, and so on.
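
For example, here's a tiny toy sketch (not the actual project layout, just made-up module names) showing how self and super resolve paths relative to the current module:

mod models {
    pub struct Thread {
        pub title: String,
    }

    pub mod queries {
        // `super` refers to the parent module (`models` here)
        use super::Thread;

        pub fn label(thread: &Thread) -> String {
            // `self` refers to the current module (`queries`)
            self::prefix() + &thread.title
        }

        fn prefix() -> String {
            String::from("Thread: ")
        }
    }
}

fn main() {
    let thread = models::Thread { title: String::from("Hello") };
    println!("{}", models::queries::label(&thread));
}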

Prior to today, there wasn’t really much in the project. I brought in the crates I wanted in my Cargo.toml file, but beyond that I hadn’t done much.

I've established a lot today: primarily the DB connection, basic structs, and some utility functions for starting to read and write data.

Here's the code; I'll explain each piece.

#[macro_use]
extern crate diesel;
extern crate dotenv;

mod schema;
mod models;

use dotenv::dotenv;
use std::env;
use diesel::prelude::*;
use diesel::pg::PgConnection;
use std::io::{stdin, Read};

For this bit, we're bringing in the crates we need to start. For diesel, we're using some of its macros below, so we need to put that #[macro_use] line right above the import.

After the crates, I'm bringing in some local modules, and then below that pulling out the specific functions, models, traits, etc. that I need from each crate.

fn establish_connection() -> PgConnection {
    dotenv().ok();
    
    let database_url = env::var("DATABASE_URL").expect("DATABASE_URL must be set");
    
    PgConnection::establish(&database_url).expect(&format!("Error connecting to {}", database_url))
}

This function is pretty simple. It returns a database connection to our Postgres DB (provided by the docker-compose file).

fn get_threads() -> Vec<models::Thread> {
    use schema::threads::dsl::*;

    let connection = establish_connection();
    let results = threads.filter(published.eq(true))
        .limit(5)
        .load::<models::Thread>(&connection)
        .expect("Error loading posts");
    results
}

With this function, we're grabbing the threads. The way we do this is to establish a connection, grab the “threads” (the aliases come from the use statement inside the fn), then filter the threads by whether or not they're published.

fn create_thread<'a>(conn: &PgConnection, title: &'a str, body: &'a str) -> models::Thread {
    use schema::threads;

    let new_thread = models::NewThread {
        title: title,
        body: body
    };

    diesel::insert_into(threads::table)
        .values(&new_thread)
        .get_result(conn)
        .expect("Error saving new thread")
}

This function accepts a connection to our Postgres DB, a title, and a body. It uses the params to create a NewThread struct, then inserts that into our threads table.

fn publish_thread(id: i32) {
    use schema::threads::dsl::{threads, published};

    let connection = establish_connection();

    let thread = diesel::update(threads.find(id))
        .set(published.eq(true))
        .get_result::<models::Thread>(&connection)
        .expect(&format!("Unable to find post {}", id));

    println!("Published thread {}", thread.title);
}

This function is pretty simple too. It accepts an ID (the ID of the thread you want to publish), then sets its published value to true in the DB.

fn cli_generate_thread() {
    let connection = establish_connection();

    println!("What would you like your title to be?");
    let mut title = String::new();
    stdin().read_line(&mut title).unwrap();
    let title = &title[..(title.len() - 1)]; // Drop the newline
    println!("\nOk! Let's write {} (Press {} when finished)\n", title, EOF);
    let mut body = String::new();
    stdin().read_to_string(&mut body).unwrap();

    let thread = create_thread(&connection, title, &body);
    println!("\nSaved  draft {} with id {}", title, thread.id);
}

This is a function I’ll more than likely delete. Because I haven’t set up any endpoints yet, I needed a way to generate threads to test functionality like create_thread, get_threads, and publish_thread. That’s the purpose of this function.

It establishes a connection, asks for user input for the title, then asks for user input for the body. The EOF constant is defined down below; it's just the string we print so the user knows which key combination sends the end-of-file signal, which is how we tell the program we're done typing. Here are those EOF constants:

#[cfg(not(windows))]
const EOF: &'static str = "CTRL+D";

#[cfg(windows)]
const EOF: &'static str = "CTRL+Z";

fn main() {
    let threads = get_threads();

    for thread in threads {
        println!("Title: {}", thread.title);
        println!("\n-------");
        println!("Body: {}", thread.body);
    }
}

This last function is obviously the main one, and for now it's being used to get the threads and display them. As I was coding, I was constantly changing what was in here: first it called cli_generate_thread, then publish_thread, and now get_threads.

And that's quite a lot of progress; this is the majority of the work for now. The next part is going to be setting up some endpoints (a REST API via actix), and then wiring each of these functions up to one of those endpoints.
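
Just to sketch where that's headed (nothing like this is in the repo yet, and the route name is made up), an endpoint wrapping get_threads might look something like this, using the same actix-web 0.7 style server setup as its readme. The main here would replace the temporary one above:

extern crate actix_web;

use actix_web::{server, App, HttpRequest, Responder};

// Hypothetical handler that reuses get_threads() from above and
// returns the thread titles as plain text for now
fn list_threads(_req: &HttpRequest) -> impl Responder {
    let titles: Vec<String> = get_threads().into_iter().map(|t| t.title).collect();
    titles.join("\n")
}

fn main() {
    server::new(|| App::new().resource("/threads", |r| r.f(list_threads)))
        .bind("127.0.0.1:8000")
        .expect("Can not bind to port 8000")
        .run();
}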

All the final code can be found here: [Source Hut](https://git.sr.ht/~sneakycrow/rust-forum-api)

#rust

So, just like a lot of people I know, I have a bad habit of starting projects without finishing others.

After the recent release of Mozilla's file-sharing service Send, I felt a little defeated about my own file-sharing software. So I've decided to switch back to a similar project I've wanted to do for a while: a Rust forum API.

When I was younger, I used to go on forums a lot. Proboards was my jam. It was arguably one of my entry points into web software. Forums are something I've always wanted to make myself. So, in that spirit, I'm starting a new project to grow my Rust knowledge: a forum API.

It will also include a frontend in React, but the focus of the project will be the API. I’ll be using actix_web and diesel with a Postgres DB. It’s going to be a fun project (I think).

I'll post semi-regular updates here. I've also added issue tracking, and I'm trying to be good about completing at least one issue per week.

Repo Issue Tracking

GLHF :)

#rust

This week I started writing a file sharing software for the web. I wanted to start creating Entry posts, to track progress and issues I’m having.

note: I'm going to post some code below, but it doesn't cover all the steps required to get up and running

I’m building it in Rust using the actix framework with diesel for interacting with the DB on the backend.

And on the frontend I’m just planning on using something fast. Probably Vue.js with Bulma CSS.

I started off by initializing the project with Rust’s cargo, and then bringing in the packages I knew I needed:

  • serde
  • actix-web
  • diesel
  • dotenv

Next, I started writing the main.rs file. I needed to get the API running, so I initialized a “server” following the actix-web readme.

extern crate actix_web;
use actix_web::{server, App, HttpRequest, Responder};

fn greet(req: &HttpRequest) -> impl Responder {
    let to = req.match_info().get("name").unwrap_or("World");
    format!("Hello {}!", to)
}

fn main() {
    server::new(|| {
        App::new()
            .resource("/", |r| r.f(greet))
            .resource("/{name}", |r| r.f(greet))
    })
    .bind("127.0.0.1:8000")
    .expect("Can not bind to port 8000")
    .run()
}

Now that that was running, I knew I needed a database. I'm partial to PostgreSQL, so I created a docker-compose file to get that running.

version: '3.1'

services:
  postgres:
    image: postgres:latest
    restart: always
    ports:
      - 5432:5432
    environment:
      POSTGRES_PASSWORD: postgres

note: I opted to use a compose file because I’m more familiar with them and I imagine I might add another service down the line

Then, I started structuring the data for the SQL table. The files themselves aren't going to be stored in the DB; I'm probably going to use DigitalOcean Spaces for that, and Spaces can handle the actual downloads. I do want the SQL rows to hold some data for displaying on the frontend.

CREATE TABLE file_links (
  id UUID PRIMARY KEY,
  published BOOLEAN NOT NULL DEFAULT FALSE,
  storage_location TEXT NOT NULL,
  title TEXT,
  description TEXT,
  downloads INTEGER DEFAULT 0
)
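
Since diesel will eventually be reading these rows, here's a rough sketch of how the table above might map to a diesel schema and Queryable struct. Nothing here is in the repo yet; it assumes diesel's uuid feature is enabled, and the names are just lifted from the SQL:

#[macro_use]
extern crate diesel;
extern crate uuid;

use uuid::Uuid;

// Hypothetical schema definition matching the file_links table above
table! {
    file_links (id) {
        id -> Uuid,
        published -> Bool,
        storage_location -> Text,
        title -> Nullable<Text>,
        description -> Nullable<Text>,
        downloads -> Nullable<Int4>,
    }
}

// Rows loaded from file_links deserialize into this struct
#[derive(Queryable)]
pub struct FileLink {
    pub id: Uuid,
    pub published: bool,
    pub storage_location: String,
    pub title: Option<String>,
    pub description: Option<String>,
    pub downloads: Option<i32>,
}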

And that's as far as I got for now. My next steps that I want to take are these (a rough sketch of the first two follows the list):

  • Create an endpoint for requesting all file_links from the DB
  • Integrate a SQL Query into that endpoint
  • Create Data for testing the endpoint
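
And a rough sketch of those first two steps, building on the hypothetical FileLink model above and using actix-web 0.7's Json responder with serde (again, none of this is in the repo yet, and serde_derive is an extra dependency I'd add alongside serde):

extern crate actix_web;
#[macro_use]
extern crate serde_derive;

use actix_web::{HttpRequest, Json};
use diesel::prelude::*;

// Hypothetical response shape for the frontend
#[derive(Serialize)]
struct FileLinkResponse {
    id: String,
    title: Option<String>,
    description: Option<String>,
    downloads: Option<i32>,
}

// Hypothetical endpoint: load every row from file_links and return it as JSON.
// Assumes the table!/FileLink sketch above, plus an establish_connection()
// helper like the one diesel's guides use.
fn list_file_links(_req: &HttpRequest) -> Json<Vec<FileLinkResponse>> {
    let connection = establish_connection();
    let rows = file_links::table
        .load::<FileLink>(&connection)
        .expect("Error loading file links");

    Json(
        rows.into_iter()
            .map(|row| FileLinkResponse {
                id: row.id.to_string(),
                title: row.title,
                description: row.description,
                downloads: row.downloads,
            })
            .collect(),
    )
}

Registering it would use the same .resource("/files", |r| r.f(list_file_links)) pattern as the greet example above.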

#dev

This article is a work in progress

This is an article about my own personal experiences learning Javascript vs learning Rust. I come from a computer programming background, but JS was the first language I was fluent in.

The Learning Curve

Javascript

When learning JS, I came primarily from an HTML and CSS background. I had worked a little with PHP, but not enough to truly understand it.

JS was a hard concept for me. It was the first language I worked with that had true “interactivity” with the user, beyond CSS pseudo-elements.

With JS, I'd say that as soon as I understood how JavaScript executes and truly understood the DOM (at least in a basic sense), my JavaScript really started to shine.

Node is an entirely different story. Personally, I'd recommend learning client-side JS, then going to a server-side language that's not Node (PHP, Python, even Ruby), and then coming back to Node. Node is a powerhouse, but it's missing some key features. I firmly believe that people who jump right to Node without knowing another backend get stuck there.

Rust

Rust is really hard if you're not determined, patient, and disciplined. There are tons of books, and the Rust documentation has to be the best documentation ever written. If you can manage to stick to reading the docs, reading some books, and continuing to practice, the language gets significantly easier.

It's just that the clicking point feels much farther out with Rust. I definitely don't think it's a beginner-friendly language, but if you can manage to learn Rust, you can do a lot. So, I recommend trying it out. Just don't let it defeat you.

*I'm still learning Rust myself, and don't feel fluent enough in it to elaborate further than this.

I'm sorry to say this post is not going to be about a puppy that can do Jiu Jitsu.

I goofed and missed a post mortem, but it was because of a crazy eventful week that is just now starting to slow down.

What Happened

Well, to start off, I missed every single one of my goals

Whoops

But that's alright, because the goals are still completable. It's not the end of the world. I'm going to miss some sometimes, and that's okay. The takeaway is to not keep missing them, to not make a habit of it, but, for balance's sake, to allow myself to accidentally goof once in a while.

This week, I changed from working out for its own sake to working out as a byproduct: I signed up for Brazilian Jiu Jitsu classes. I'm very excited. I'm going to learn some self-defense, hopefully gain some confidence, meet some people, and have fun while working out. The classes are twice a week, and I'm also starting yoga once a week. That should round out my workout sessions each week!

But, the most exciting thing that happened

Drum Roll

I adopted a dog!

His name is Arsenio. He’s two years old. He’s a husky (and possibly a little mix in there). He’s a beautiful, adorable, smart, and sweet dog. I’m so happy to have him in my life. Here’s a bunch of pictures!

Takeaways

Don't fret if you miss a week or two of goals. Just keep trying. Have some discipline, and move forward.

P.S. I’m going to be posting here more often with development stuff, so look out for that!

A post mortem of week 2

What I’ve learned

This week has been a grand learning experience for myself, in the best of ways. A few good things happened, a few bad things happened. I’ve learned quite a bit.

Regarding my goals: if you didn't see my previous post this week, I decided to change some of them up. Primarily, I changed the intention behind the picture-taking goal and the working-out goal, and I added a new goal of learning a programming language fluently.

What I'm realizing is that it's super important to focus on ourselves. This week I dropped the ball on that, putting other people and other priorities over my own.

Don’t get me wrong, there needs to be a balance. I don’t want to be selfish, but I do want to take care of myself. It’s extremely important.

This week, I didn’t do that, and it caused me to drop the ball on some of my goals. I didn’t work out the way I wanted to, and it was because I caused myself so much anxiety that I started to feel sick.

Lesson of the week: Take care of yourself, so you can take care of your loved ones

Picture of the Week!

I made Erin get up with me this week and have breakfast. She's so cute

I think it's important to note that people change, and so do our goals. That's okay! Sometimes we think we want something and we realize that we don't.

That's what happened to me. I've updated my goals with a few new changes that I've realized:

I want to take more pictures

This goal is actually still accurate, and the way I'm going about it will remain the same. But the intention, the action behind the goal, changed. Originally, I was going to buy a DSLR. I argued that investing money in something like that would motivate me to accomplish the goal.

After 20 days of easily taking pictures with my iPhone, I realized that was unreasonable. The goal is totally achievable with today's smartphones. I'm not super into photography, at least not enough to purchase a camera. But I love taking pictures, and I decided a better intention would be to introduce a new organizational tool (a to-do list) to hold myself accountable.

I want to work out more

This goal in and of itself was off base for me. I don't actually want to work out more. What I want is to be physically fit and confident in my body. The way I achieve that is by eating healthy and getting exercise.

I decided that the intention and balance of the goal were accurate and useful, but the end goal itself was weak and inaccurate.

Lastly, I've added a new goal!

I want Rust to become my strongest language

Rust is a language I've been working on for the last year. I've been on and off with it, and every time I come back I feel the need to start from scratch. That's because I'm never at it long enough to absorb the language's concepts.

So, this goal is to finally follow through on it. To add to that, I want to know Rust well enough that it outweighs how well I know JS and PHP. Since I'm using JS and PHP at work, this will be a difficult task. But, as long as I follow through and practice consistently, I can do it!

I was driving to work this morning thinking about repository hosts, and I thought I'd look up some articles on other people's experiences with the three I use: Gitlab, Github, and Sr.ht. Sr.ht is so new that not a lot of people have posted about it, and of course there are tons of articles on Github vs Gitlab.

I decided I’d write my own article about my personal experience with all three.

Gitlab

I was introduced to Gitlab decently early in its development, but I didn't dive into it until I started working at Isolary. Gitlab is really quite a nice solution. It basically has everything you would need for a software project baked into it, and everything works together really well.

One of the biggest benefits is that Gitlab offers both a hosted solution and a self-hosted one. That's extremely appealing to people like me; I definitely prefer self-hosted solutions. My personal issue with Gitlab, though, is that the self-hosted “free” packages are limited. But honestly, they still contain almost every feature you'd need.

One of the biggest downfalls that a lot of people point out (and that I agree with) is the lack of community. For me personally, that's not a big deal; I don't have very many projects with multiple contributors. That being said, it really does not support exploring other projects and finding other people (networking, basically).

The situation that this created for me, which is why I didn’t pick Gitlab as my personal solution, was that it felt like I was in this huge room with a bunch of different tools, but no one else in it with me. It felt lonely, as weird as that may sound.

I think Gitlab is an excellent solution for teams, but I personally don’t like it for my own projects.

Github

Github is more or less the reverse of Gitlab. Now, I'll preface this by saying Github has been introducing more and more features to compete with Gitlab. But, at the time of writing, to get the same set of features Gitlab has baked in, you'd have to integrate some third-party solutions.

That being said, I really like Github as a personal solution, and I don’t have much experience with it as a team solution. While Gitlab feels more featureful without the need to integrate third party solutions, Github feels more network-capable.

By network-capable, I mean I really do feel a sense of community on Github. To add to that, a lot of third-party platforms let you log in with Github, which makes the account feel useful outside of Github itself.

I don't want to oversell Github's sense of community, though. One thing I have noticed is that you have to be very intentional about getting involved in it. That's not necessarily a negative thing, but it is something I feel a lot of people don't realize.

If you're not intentional about going out of your way to network with people, your issues or repo will feel like you're speaking in a normal voice in a 1,000-person crowd.

I have noticed that a lot of people have been more negative towards Github since the Microsoft acquisition. Microsoft leaves a bad taste in my mouth, so if I'm being honest, I did kind of jump on that bandwagon. What do you expect, though? Microsoft has burned me so many times that I'm scared of them.

I also realized recently that I don't need Github's community features; I don't use them. And I don't need the “bloat” of Gitlab (there are just a lot of Gitlab features I personally wouldn't use).

This is why the next option was so appealing to a minimalist like myself.

Sr.ht

Sr.ht is a new platform (still in alpha). They offer a hosted solution (which is what I use), but also support self-hosting. It's an open-source project.

It just feels intentional. Everything I do is just what I need to do. And, they have pretty much anything I would need. They have repo support in their git platform. They have CI/CD in their build platform. They have issue tracking in their todo platform.

The website also just feels minimal, which I truly appreciate.

I can’t speak much more than that on Sr.ht, but I’m slowly maneuvering my repositories over to it.

#TODO: Write a blog post after one year of using Sr.ht

Over the past few weeks, I've been working with various organizational tools. I've played with a lot of them over the past couple of years, but I've toned down to my favorites.

Note: I primarily work on iOS and macOS. These applications may or may not support other platforms.

Bear

My first, and favorite, tool is Bear. It's an amazing note-taking app. The biggest feature for me is its markdown support; the second biggest is being able to export into various formats.

This app has proved itself extremely useful. Here are some of the primary purposes I've used it for:

  • Writing up a blog post and exporting it into whatever format I need (HTML or Markdown primarily)
  • Quickly writing up some release notes and exporting them into PDF format for a client
  • We use markdown in our repositories for Changelogs and Readmes already, so this is extremely nice
  • Writing some API documentation, and exporting that to HTML for public-facing versions, markdown for repo-specific documentation, or PDF for client documentation

Things 3

Things 3 is a very well built and well designed to-do list organizer. One thing that I really enjoy about it (see what I did there) is the ability to view my calendar events from within the app. I can also set repeating reminders, like for my take-one-picture-a-day goal. You can also organize things into projects and whatnot. Honestly, these features aren't unheard of in other applications, but Things 3 is a one-time purchase and provides all the features that other applications (such as Todoist) require a subscription for.
