React Native Shadow is Missing on iOS but is Okay on Android

In one of our projects, some of the items that had a shadow were rendering just fine on Android, but the shadow was missing on iOS.

After investigating, it turned out to be related to the items that had overflow: 'hidden', which on iOS resulted in the shadow being clipped.

It turns out that on iOS the shadow is rendered as part of the component it is defined on, so setting overflow: 'hidden' clips the shadow away together with everything else outside the component's bounds. On Android, the shadow is drawn outside of the component, so it is just fine to have overflow: 'hidden' and still get the shadow.

The solution was to wrap the component in another <View /> that defines the shadow, while keeping the overflow: 'hidden' on the inner component.

Example code:

// Before:
// ...
<View style={ { 
  // we need the overflow hidden to round the images in the content
  overflow: 'hidden',
  borderRadius: 20,
  
  // shadow definition
  shadowColor: '#000',
  shadowOffset: {
    width: 0,
    height: 2,
  },
  shadowOpacity: 0.25,
  shadowRadius: 3.84,
  elevation: 5,  
} }>
  { children }
</View>

// After:
<View style={ { 
  // we still need the same radius, so the shadow would have the same shape as
  // the inner container

  borderRadius: 20,
  
  // shadow definition
  shadowColor: '#000',
  shadowOffset: {
    width: 0,
    height: 2,
  },
  shadowOpacity: 0.25,
  shadowRadius: 3.84,
  elevation: 5,  
} }>

  <View style={ {
    // we need the overflow hidden to round the images in the content
    overflow: 'hidden',
    borderRadius: 20,
  } }>
  { children }

  </View>
</View>

So if you end up having missing shadows on iOS, make sure to check for overflow: 'hidden' on the element : )

Updating an image file in a React Native/Expo app does not update the image everywhere in the app

I’ve had an interesting problem when saving and updating images in a React Native application built with Expo.

I’m building an app that has contacts and images (that are either taken from the phone contact entry or picked from the gallery).

The issue was that editing the image in one place and saving it would not update the contact image in the contacts list. When updating the image, I was overwriting the image file in the filesystem.

After saving it and going back to the previous screen, the old image was still there. Only after refreshing the application was it replaced.

Since I was reusing the file name, the prop in the contact card was not modified (the file path was the same), so the component didn’t know it had to re-render.

To solve that, I decided to update my helper function to add a timestamp to the filename. This way the file path would change, forcing all the components with the image to re-render.

export async function persistCachedFile ( cachedFile: string, permanentFolder: string, fileId: string ) {
    const permanentDirectoryPath = `${ documentDirectory }${ permanentFolder }/`;
    const uniqueFilePath = `${ permanentDirectoryPath }${ fileId }-${ Date.now() }`;

    await ensureDirExists( permanentDirectoryPath );

    await copyAsync( {
        from: cachedFile,
        to: uniqueFilePath
    } );

    return uniqueFilePath;
}

The downside here is that the old files will stay in the app directory forever. To avoid that, we need to add a cleanup function. I came up with the following function that runs each time we copy the file.

export async function cleanupOldFilesAsync ( folder: string, fileId: string ) {
    // Find all files that have the fileId in their file name (and delete them):
    const directoryFiles = await readDirectoryAsync( folder );
    const previousImages = directoryFiles.filter( file => file.includes( fileId ) );

    // Delete previous images.
    previousImages.forEach( previousImage => {
        // We don't await, because removing the files is not critical
        deleteAsync( `${ folder }${ previousImage }` );
    } );
}

Now call cleanupOldFilesAsync from persistCachedFile (before we store the updated file) and voilà : )

The end result is:

import {
	deleteAsync,
	getInfoAsync,
	makeDirectoryAsync,
	readDirectoryAsync,
	copyAsync,
	documentDirectory
} from 'expo-file-system';

export async function ensureDirExists ( directory: string ) {
	const dirInfo = await getInfoAsync( directory );
	if ( !dirInfo.exists ) {
		await makeDirectoryAsync( directory, { intermediates: true } );
	}
}


export async function cleanupOldFilesAsync ( folder: string, fileId: string ) {
	// Find all files that have the fileId in their file name (and delete them):
	const directoryFiles = await readDirectoryAsync( folder );
	const previousImages = directoryFiles.filter( file => file.includes( fileId ) );

	// Delete previous images.
	previousImages.forEach( previousImage => {
		// We don't await, because removing the files is not critical
		deleteAsync( `${ folder }${ previousImage }` );
	} );
}

export async function persistCachedFile ( cachedFile: string, permanentFolder: string, fileId: string ) {
	const permanentDirectoryPath = `${ documentDirectory }${ permanentFolder }/`;
	const uniqueFilePath = `${ permanentDirectoryPath }${ fileId }-${ Date.now() }`;

	// Make sure the directory exists before we try to read or write in it.
	await ensureDirExists( permanentDirectoryPath );
	// We don't await the cleanup, because removing the old files is not critical.
	cleanupOldFilesAsync( permanentDirectoryPath, fileId );

	await copyAsync( {
		from: cachedFile,
		to: uniqueFilePath
	} );

	return uniqueFilePath;
}
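To illustrate why this works, here is a minimal, framework-free sketch of the naming scheme (the makeUniqueFilePath helper and the paths are made up for the example): each save produces a different path for the same logical file, so any component that receives the path as a prop sees a changed value and re-renders.

```javascript
// Build a unique file path the same way persistCachedFile does.
// The timestamp is injectable so the behavior is easy to demonstrate.
function makeUniqueFilePath( directoryPath, fileId, timestamp = Date.now() ) {
    return `${ directoryPath }${ fileId }-${ timestamp }`;
}

const dir = 'file:///data/app/avatars/';

// Two saves of the same contact image yield two different paths:
const first = makeUniqueFilePath( dir, 'contact-42', 1000 );
const second = makeUniqueFilePath( dir, 'contact-42', 2000 );

console.log( first );  // file:///data/app/avatars/contact-42-1000
console.log( second ); // file:///data/app/avatars/contact-42-2000

// Both paths still contain the fileId, which is exactly what
// cleanupOldFilesAsync relies on to find and delete the stale copies.
console.log( [ first, second ].every( p => p.includes( 'contact-42' ) ) ); // true
```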

Tips for Meaningful Interviews with Developers

Lately I’ve been recruiting people for our Front-End team at Up2 Technology. I am quite satisfied with the process so I decided to share it with you.

This is a non-exhaustive list of the guidelines I try to follow in order to have an interviewing process that is satisfying for both me and the people applying.

Empathy

A rule of thumb I follow is to never organize the process in a way I wouldn't feel good about if I had to go through it as a candidate myself.

Why bother?

Job security is less of a thing than it used to be, so it would not be a surprise if in a few years (or months) you're the person on the other side of the table.

People learn mostly from their own experiences, so we have to do our best in such situations. These people will inevitably become senior developers or team leads at some point and will start recruiting themselves. Make sure you show them the best of you!

Skip the whiteboard

I’m happy to see more and more people standing up against whiteboard coding sessions. Even though university prepared me for such sessions, where coding on a piece of paper is not uncommon, I’d still feel very stressed in such situations. In addition, a whiteboard interview hardly represents the real capabilities of the person being interviewed. I have been writing JavaScript for more than 10 years now and I still google how some of the Array and Date methods work.

Something I’m very proud of is that I’ve never conducted a whiteboard coding session.

What to do instead?

A superior way to see someone’s skills is to give them a realistic task – something that you often receive as a task yourself, or that you would commonly assign to your team.

I usually provide an API that returns JSON, and the task is for the person to visualize the returned data however they decide. I do provide some context on what the end user cares about, but only as high-level details. This is actually what most of our real assignments look like.

Additional ideas on how to approach the take-home task

  • Give people extra time to work on your task (at least 7 days, to include a weekend) – you’re probably not the only company these people are applying to. They might also still be working full-time. Be respectful and give them the time they need so they can work on your task calmly.
  • Limit the time for actual development, which will force an incomplete solution – this will show you what the candidate thinks is most important and what they feel most comfortable with.
  • Use the task to test their git skills too – ask them to use git as they would in their day-to-day job. (Not using git? You get the idea – ask them to use your system to see if they’d need extra training.)
  • Hide a common error to see how they handle it – they might figure it out, or they might ask. Whatever they do, I think it is okay – the most important thing is that they notice the error. I intentionally introduce a CORS error, since that is the most common one I’ve encountered.

Don’t waste their time

Make sure you don’t mislead them, and avoid doing harm as much as possible – like asking people to quit their current job before you’re sure that you will be able to hire them (I’ve heard of such super-lame cases).

Once you’ve decided that a person is not your candidate – let them know immediately. Give them the reason you’ve not picked them and move on. They will feel bad, but they’d feel even worse if you’d wasted more of their time.

Avoid Relying on the Trial Period

Most positions have some kind of trial period (at least in Bulgaria) during which you’re legally allowed to cancel a person’s contract immediately. (They can do the same.)

Please, try to avoid that and use it only in an extreme edge case. It causes greater harm to the employee (since most of the time they cannot go back to their old job) than to the company – after another round of interviews, you will find the right people for your team.

Maybe Unexpected: Apply for a job at a company you admire (and whose recruitment process people like)

Go wild. Apply to the company you’ve always wanted to work for. Check the feedback on Glassdoor to see what their interviewing process is like.

In the end, they may hire you or not. Either way, you’ll witness their recruitment process from within and learn from both the good parts and the bad ones.

Cheers, and go find the people you’d enjoy working with.

AWS S3 and its informative errors – 404, “NoSuchUpload”

I’m continuing with my exploration in the AWS world 🙂

For the last couple of days, I have been occasionally receiving the weird error “NoSuchUpload” when I try to either upload a part to an S3 multipart upload or complete the upload with a given UploadId.

S3 Multipart Upload is the way you upload really big files into S3.

As of today (2019 Jan 18), the docs indicate that you can upload a file of up to 5GB with one call to their API, but for bigger files, you’d need to split the file into parts and upload each one of them using the S3 Multipart Upload API.

Here’s how the multipart upload API works:

  • Call the s3.createMultipartUpload method to indicate that you will upload a file split into parts. Each part should be between 5MB and 5GB; only the last part can be smaller than 5MB (which is useful if you don’t know exactly how big the file you’re exporting is going to be). The method returns an UploadId that one must use in order to add parts and complete the multipart upload.
  • (N times) Upload a part using s3.uploadPart, providing the body of the file part, the UploadId, the PartNumber, and the parameters you pass everywhere – the Bucket and the Key. Mind that PartNumber starts from 1 (for whatever reason). This method returns an ETag for your part that you must store.
  • Call the s3.completeMultipartUpload method to indicate that you’re done with the upload. One has to provide the UploadId, all the parts in the format { ETag, PartNumber }, and the regular Bucket and Key.

After completing these steps, a file with the given name (Key) should appear in the S3 bucket.
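The three steps above can be sketched like this – a minimal sketch against the aws-sdk v2 promise API, where the bucket name, the key, and the splitIntoParts helper are made up for the example. It assumes an already-configured S3 client (const AWS = require( 'aws-sdk' ); const s3 = new AWS.S3();) is passed in:

```javascript
// Split a Buffer into parts of `partSize` bytes.
// Only the last part is allowed to be smaller than the 5MB minimum.
function splitIntoParts( buffer, partSize ) {
    const parts = [];
    for ( let offset = 0; offset < buffer.length; offset += partSize ) {
        parts.push( buffer.slice( offset, offset + partSize ) );
    }
    return parts;
}

async function multipartUpload( s3, buffer, partSize ) {
    const Bucket = 'my-export-bucket';     // hypothetical bucket
    const Key = 'exports/big-dump.csv';    // hypothetical key

    // 1. Start the multipart upload and remember the UploadId.
    const { UploadId } = await s3.createMultipartUpload( { Bucket, Key } ).promise();

    // 2. Upload each part; PartNumber starts from 1. Store the returned ETags.
    const uploadedParts = [];
    const parts = splitIntoParts( buffer, partSize );
    for ( let i = 0; i < parts.length; i++ ) {
        const { ETag } = await s3.uploadPart( {
            Bucket, Key, UploadId,
            PartNumber: i + 1,
            Body: parts[ i ]
        } ).promise();
        uploadedParts.push( { ETag, PartNumber: i + 1 } );
    }

    // 3. Complete the upload, providing all the { ETag, PartNumber } pairs.
    return s3.completeMultipartUpload( {
        Bucket, Key, UploadId,
        MultipartUpload: { Parts: uploadedParts }
    } ).promise();
}
```

In a real run partSize would be at least 5MB – anything smaller (except the last part) makes completeMultipartUpload fail.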

So my mistake here was that my DB exporter didn’t wait for all the async work to be done before calling the stream’s ‘end’. This caused writes after the ‘end’ of the stream, and a number of items were skipped.

The funny part was that instead of an error pointing at the actual problem, the AWS API returned a 404 “NoSuchUpload”, although I could see the UploadId when listing the active uploads afterward.

Moral of the story:

Add unit tests to your code and verify that you can write something to a file before trying to upload it to the cloud. Also – try to provide useful error messages when designing an API.

AWS Closes S3 Read Stream Unexpectedly

I’m continuing with my notes on transferring big files from and to AWS S3 with node.js

If you are reading a file from an S3 bucket using a stream that you occasionally pause, mind that the read stream will be closed after 60 minutes.

If you cannot handle the file in that period of time, you’ll receive the ‘data’ and ‘end’ events even though you haven’t finished processing the file.

One possible solution here is to download the file before starting the import, process it and delete it once we don’t need it any more.

// So instead of:
const s3Stream = s3.getObject( params ).createReadStream();
const csvStream = fastCsv.fromStream( s3Stream, csvParams );
/* Do your processing of the csvStream */


// ...store the file to the file system first:
const s3Stream = s3.getObject( params ).createReadStream();
const localFileWriteStream = fs.createWriteStream( path.resolve( 'tmp', 'big.csv' ) );
s3Stream.pipe( localFileWriteStream );

localFileWriteStream.on( 'close', () => {
    const localReadStream = fs.createReadStream( path.resolve( 'tmp', 'big.csv' ) );
    const csvStream = fastCsv.fromStream( localReadStream, csvParams );

    csvStream.on( 'data', ( data ) => {
        /* Do your processing of the csvStream */
    } );

    csvStream.on( 'end', () => {
        // Delete the tmp file (fs.unlink requires a callback)
        fs.unlink( path.resolve( 'tmp', 'big.csv' ), () => {} );
    } );
} );

Testing with Jest in a node and ReactJS monorepo (and getting rid of environment.teardown error)

A big number of the applications we develop have at least one ReactJS UI, held in one repo, and an API, held in another. If we need to reuse some part of the code, we do so by moving it to another repository and adding it as a git submodule.

For our latest project we decided to give the monorepo approach a try (we haven’t come to a conclusion yet on whether it better fits our needs). The project is a node.js API with a ReactJS app based on create-react-app.

The first issue we faced was with testing the node app – tests ran just fine in the react application (/app/), but if you tried to run them for the server, you’d get the following error:

● Test suite failed to run
TypeError: environment.teardown is not a function

  at ../node_modules/jest-runner/build/run_test.js:230:25

In our package.json we had the trivial test definition – just running jest:

"scripts": {
    ...
    "test": "jest"
    ...
}

We hadn’t had issues with this approach in a node API with no CRA app in it, so, as it turned out, we had to indicate that the test environment is node.

To do so, we added a testconfig.json and referenced it in the test script in package.json:

testconfig.json

{
	"testEnvironment": "node"
}

package.json

{
	"scripts": {
		"test": "jest --config=testconfig.json"
	}
}

If you want jest to watch your files, change the “test” script to “jest --watchAll --config=testconfig.json”.

Adding a new currency to WooCommerce

It seemed to me that adding a new currency to WooCommerce would be extremely easy and widespread, which turns out not to be the case. I couldn’t find anything about it in the official hooks and filters list.

Anyway, I managed to find out that there are two filters for this: ‘woocommerce_currencies’ and ‘woocommerce_currency_symbol’. Adding the Bulgarian currency, for instance, is as easy as:

add_filter( 'woocommerce_currencies', 'add_bgn_currency' );
function add_bgn_currency( $currencies ) {
    $currencies['BGN'] = __( 'Bulgarian lev', 'woocommerce' );
    return $currencies;
}

add_filter( 'woocommerce_currency_symbol', 'add_bgn_currency_symbol', 10, 2 );
function add_bgn_currency_symbol( $currency_symbol, $currency ) {
    switch ( $currency ) {
        case 'BGN':
            $currency_symbol = 'лв.';
            break;
    }
    return $currency_symbol;
}

Note: All this code can be placed somewhere in your theme’s functions.php

The result is that now the currency is shown in the Currency list:

WooCommerce Currency Settings

Anyway, one problem still persists – the currency symbol position is totally unmanageable from a hook. It can only be set site-wide through the WooCommerce settings section. It is also placed in a not-so-intuitive spot:

WooCommerce Pricing Position

This is how far I’ve got with the multilingual shop with WPML and WooCommerce. I’m still wondering how to make the symbol position currency-specific (without being afraid to update the plugin afterwards). Maybe I’ll have to do it in JS in the end.

I’ll keep you posted!