React Native Shadow is Missing on iOS but is Okay on Android

In one of our projects, some of the items that had a shadow rendered just fine on Android, but the shadow was missing on iOS.

After investigating, it turned out to be related to the items that had overflow: 'hidden', which on iOS resulted in the shadow being trimmed.

It turns out that on iOS the shadow is rendered as part of the component you define it on, so setting overflow to hidden clips the shadow away. On Android, the shadow is drawn outside of the component, so it is just fine to have overflow: 'hidden' and still get the shadow.

The solution was to wrap the component in another <View /> with the shadow defined on it, while keeping the overflow: 'hidden' on the inner component.

Example code:

// Before:
// ...
<View style={ { 
  // we need the overflow hidden to round the images in the content
  overflow: 'hidden',
  borderRadius: 20,
  
  // shadow definition
  shadowColor: '#000',
  shadowOffset: {
    width: 0,
    height: 2,
  },
  shadowOpacity: 0.25,
  shadowRadius: 3.84,
  elevation: 5,  
} }>
  { children }
</View>

// After:
<View style={ { 
  // we still need the same radius, so the shadow would have the same shape as
  // the inner container

  borderRadius: 20,
  
  // shadow definition
  shadowColor: '#000',
  shadowOffset: {
    width: 0,
    height: 2,
  },
  shadowOpacity: 0.25,
  shadowRadius: 3.84,
  elevation: 5,  
} }>

  <View style={ {
    // we need the overflow hidden to round the images in the content
    overflow: 'hidden',
    borderRadius: 20,
  } }>
    { children }
  </View>
</View>

So if you end up with missing shadows on iOS, make sure to check for overflow: 'hidden' on the element : )

SentryError: Native Client is not available, can’t start on native when updating expo-cli to 4.x.x (from version 3.22.3)

TL;DR: Update your metro.config.js to use @expo/metro-config, following the latest guidelines (SDK 40+).

It’s funny when you encounter an error in a project and, after spending a lot of effort researching it, find out that the cause is the same as that of a totally different error in a very different project.

In my case, this was caused by an outdated metro.config.js file – specifically, the SVG loading code that uses react-native-svg-transformer.

To fix it, I replaced the metro config with the following:

const { getDefaultConfig } = require("@expo/metro-config");

module.exports = (async () => {
  const {
    resolver: { sourceExts, assetExts }
  } = await getDefaultConfig(__dirname);
  return {
    transformer: {
      babelTransformerPath: require.resolve("react-native-svg-transformer")
    },
    resolver: {
      assetExts: assetExts.filter(ext => ext !== "svg"),
      sourceExts: [...sourceExts, "svg"]
    }
  };
})();

And also install the '@expo/metro-config' module:

yarn add @expo/metro-config
// or
npm install @expo/metro-config

More info on the other error – SVG Icons Not Loaded After Updating ReactNative Expo to Version 40

Cheers!

Icons Not Loaded After Updating ReactNative Expo to Version 40

TL;DR: If you’re loading SVGs, check your metro.config.js and see if you’re using getDefaultConfig from '@expo/metro-config'. If you’re requiring it from 'metro-config', you should update your code based on the example below. More info in the readme of react-native-svg-transformer.

Recently I had an issue when updating a project I was working on – after updating the Expo SDK to version 40, the icons stopped working.

The project was using react-native-elements, so this was my first guess for the cause of the issue. Digging a bit deeper, it turned out that any icon from @expo/vector-icons was shown as an X in a square.

Digging through the project (and GitHub issues), I decided to create an empty Expo project and gradually include the files. Doing so, I found out that there was a custom metro.config.js that took care of loading the SVGs. Looking into the readme of react-native-svg-transformer – eureka 🙂 – for SDK version 40 or newer, the code in metro.config.js should be different:

const { getDefaultConfig } = require("@expo/metro-config");

module.exports = (async () => {
  const {
    resolver: { sourceExts, assetExts }
  } = await getDefaultConfig(__dirname);
  return {
    transformer: {
      babelTransformerPath: require.resolve("react-native-svg-transformer")
    },
    resolver: {
      assetExts: assetExts.filter(ext => ext !== "svg"),
      sourceExts: [...sourceExts, "svg"]
    }
  };
})();

And don’t forget to install the '@expo/metro-config' module:

yarn add @expo/metro-config
// or
npm install @expo/metro-config

That’s it and happy hacking 🙂

Messing with Expo Permissions caused ImagePicker to misbehave

Have you seen the following alert?

Sorry, we need media library permissions to make this work!

Well… me too! 🙂 And I saw this error despite the fact that I had both the "CAMERA" and "MEDIA_LIBRARY" permissions added to my android.permissions array.

Googling didn’t help much, because it just led me to either the Expo permissions docs or to the ImagePicker docs, from which it seemed that the needed permissions were already in place.

Luckily, I started digging through the Expo GitHub issues and found an issue filed by folks facing the same problem.

The solution is to add “READ_EXTERNAL_STORAGE” and “WRITE_EXTERNAL_STORAGE” to the permissions array, despite the docs listing them as permissions that are added by default.
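For reference, here’s roughly what the android section of app.json ends up looking like (a minimal sketch – the rest of your config will differ):

```json
{
  "expo": {
    "android": {
      "permissions": [
        "CAMERA",
        "MEDIA_LIBRARY",
        "READ_EXTERNAL_STORAGE",
        "WRITE_EXTERNAL_STORAGE"
      ]
    }
  }
}
```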

Updating a React Native/Expo image file does not update the visualization of this image everywhere in the app

I’ve had an interesting problem when saving and updating images in a React Native application built with Expo.

I’m building an app that has contacts and images (that are either taken from the phone contact entry or picked from the gallery).

The issue was that editing the image in one place and saving it would not update the contact image in the contacts list. When updating the image, I was updating the image file and overwriting it in the filesystem.

After saving and going back to the previous screen, the old image was still there. Only after refreshing the application was it replaced.

Since I was reusing the file name, the prop of the contact card was not modified (the file path stayed the same), so the component didn’t know it had to re-render.

To solve that, I decided to update my helper function to add a timestamp to the filename. This way the file path would change, forcing all the components with the image to re-render.
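To see why the timestamp helps, here is a tiny sketch of the shallow prop comparison a memoized component effectively does (illustrative only – not React’s actual implementation):

```javascript
// Simulates React's shallow prop comparison for a memoized component:
// re-render only when at least one prop value changed.
function shouldRerender(prevProps, nextProps) {
  return Object.keys(nextProps).some(key => !Object.is(prevProps[key], nextProps[key]));
}

// Same file path reused after overwriting the file: no re-render.
shouldRerender({ uri: 'contacts/42.jpg' }, { uri: 'contacts/42.jpg' }); // false

// Timestamped file name: the prop changes, so the component re-renders.
shouldRerender(
  { uri: 'contacts/42-1610000000000.jpg' },
  { uri: 'contacts/42-1610000001234.jpg' }
); // true
```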

import * as FileSystem from 'expo-file-system';

export async function persistCachedFile ( cachedFile: string, permanentFolder: string, fileId: string ) {
    const permanentDirectoryPath = `${ FileSystem.documentDirectory }${ permanentFolder }/`;
    const uniqueFilePath = `${ permanentDirectoryPath }${ fileId }-${ Date.now() }`;

    await ensureDirExists( permanentDirectoryPath );

    await FileSystem.copyAsync( {
        from: cachedFile,
        to: uniqueFilePath
    } );

    return uniqueFilePath;
}

The downside here is that the old files will stay in the app directory forever. To avoid that, we need to add a cleanup function. I came up with the following function, which runs each time we copy the file.

export async function cleanupOldFilesAsync ( folder: string, fileId: string ) {
    // Find all files that have the fileId in their file name (and delete them):
    const directoryFiles = await FileSystem.readDirectoryAsync( folder );
    const previousImages = directoryFiles.filter( file => file.includes( fileId ) );

    previousImages.forEach( previousImage => {
        // We don't await, because removing the files is not critical
        FileSystem.deleteAsync( `${ folder }${ previousImage }` );
    } );
}

Now call cleanupOldFilesAsync from persistCachedFile (before we store the updated file) and voilà : )

The end result is:

import {
	deleteAsync,
	getInfoAsync,
	makeDirectoryAsync,
	readDirectoryAsync,
	copyAsync,
	documentDirectory
} from 'expo-file-system';

export async function ensureDirExists ( directory: string ) {
	const dirInfo = await getInfoAsync( directory );
	if ( !dirInfo.exists ) {
		await makeDirectoryAsync( directory, { intermediates: true } );
	}
}


export async function cleanupOldFilesAsync ( folder: string, fileId: string ) {
	// Find all files that have the fileId in their file name (and delete them):
	const directoryFiles = await readDirectoryAsync( folder );
	const previousImages = directoryFiles.filter( file => file.includes( fileId ) );

	previousImages.forEach( previousImage => {
		// We don't await, because removing the files is not critical
		deleteAsync( `${ folder }${ previousImage }` );
	} );
}

export async function persistCachedFile ( cachedFile: string, permanentFolder: string, fileId: string ) {
	const permanentDirectoryPath = `${ documentDirectory }${ permanentFolder }/`;
	const uniqueFilePath = `${ permanentDirectoryPath }${ fileId }-${ Date.now() }`;

	await ensureDirExists( permanentDirectoryPath );

	// Remove the older copies before storing the updated file (not awaited).
	cleanupOldFilesAsync( permanentDirectoryPath, fileId );

	await copyAsync( {
		from: cachedFile,
		to: uniqueFilePath
	} );

	return uniqueFilePath;
}

Android Emulator Losing Internet Connectivity

Spoiler: When having Internet connectivity issues, make sure that Android Studio is running. (because it acts as a proxy for the emulator)

So I’ve had a recurring issue where my Android Emulator device would lose connectivity from time to time, and to fix it I would delete it and create a new device from the Android Virtual Device Manager (AVD). Unfortunately, this was only a temporary fix and the issue would appear again at some point.

When it happened again, I decided to debug it instead of starting from scratch. It turned out that, in order to have an internet connection, Android Studio had to be running. The connectivity was lost because I was sometimes closing Android Studio to decrease the load on my PC (I’m working with React Native and Expo, and don’t use Android Studio much).

So before trying something fancy from the SO answers to the “Android emulator not able to access the internet” question, make sure that Android Studio is running 🙂

Tips for Meaningful Interviews with Developers

Lately I’ve been recruiting people for our Front-End team at Up2 Technology. I am quite satisfied with the process so I decided to share it with you.

This is a non-extensive list with the guidelines I’m trying to follow in order to have an interviewing process satisfying for both me and the people applying.

Empathy

A rule of thumb I follow is to never organize the process in a way that I’d not feel good if I had to go through it as a recruit myself.

Why bother?

Job security is less of a thing than it used to be, so it would not be a surprise if in a few years (or months) you’re the person on the other side.

People learn mostly from their own experiences so we have to do our best in such situations. These people will inevitably become senior developers or team leads at a point and will start recruiting themselves. Make sure you show them the best of you!

Skip the whiteboard

I’m happy to see more and more people standing up against whiteboard debugging sessions. Even though I was prepared for such sessions by university, where coding on a piece of paper is not uncommon, I’d still feel very stressed in such situations. In addition, the whiteboard interview hardly represents the real capabilities of the person being interviewed. I have been writing JavaScript for more than 10 years now and still google how some of the Array and Date methods work.

Something I’m very proud of is that I’ve never conducted a whiteboard coding session.

What to do instead?

A superior way to see someone’s skills is to give them a realistic task – something that you often receive as a task yourself or that is common for you to assign to your team.

I usually provide an API that returns a JSON and the task is for the person to visualize the returned data however they decide. I do provide some context on what the end-user cares for, but only as high-level details. This is actually what most of our assignments look like.

Additional ideas on how to approach the task for home

  • Give extra time for the people to work on your task (at least 7 days, to include a weekend) – You’re probably not the only company these people are applying to. They might also still be working full-time. Be respectful and give them the time they need so they can be calm while doing your task.
  • Limit the time for actual development, which will force an incomplete solution – this will show you what the candidate thinks is most important and what they feel most comfortable with.
  • Use the task to test their git skills too – ask them to use git as they would in their day-to-day job. (Not using git? You get the idea – ask them to use your system to see if they’d need extra training.)
  • Hide a common error to see how they handle it – they might figure it out, or they might ask. Whatever they do, I think it is okay – the most important thing is to notice the error. I intentionally introduce a CORS error, since this is the most common one I’ve encountered.

Don’t waste their time

Make sure you don’t fool them and avoid doing harm as much as possible – like asking people to quit before you’re sure that you will be able to hire them (I’ve heard of such super-lame cases).

Once you’ve decided that a person is not your candidate, let them know immediately. Give them the reason you haven’t picked them and move on. They will feel bad, but they’d feel even worse if you wasted more of their time.

Avoid Relying on the Trial Period

Most positions have some kind of trial period (at least in Bulgaria) during which you’re legally allowed to cancel a person’s contract immediately. (They can do that too.)

Please try to avoid that and use it only in a true edge case. It causes greater harm to the employee (since most of the time they cannot go back to their old job) than to the company – after another set of interviews, you will find the right people for your team.

Maybe Unexpected: Apply for a job in a company you admire (and people like their recruitment process)

Go wild. Apply to the company you’ve always wanted to work for. Check the feedback on Glassdoor to see what their interviewing process is like.

In the end, they could hire you or not. Either way, you’d witness their recruitment process from within and learn from both the good parts and the bad ones.

Cheers, and go find the people you’d enjoy working with.

Wondering how big is your React project?

Recently I wanted to explore the impact of a refactoring I did, so I ended up counting the lines of the files I had in git (hoping that this number would decrease).

The magic command lists the files in git, greps the ones ending with js, jsx, and css, and counts their lines.

git ls-files | grep -P "\.(js|jsx|css)$" | xargs wc -l

This works both on Linux and Windows with Git Bash

Side note: Feel free to remove/add/change extensions in the (js|jsx|css) part to check your source files.

Thanks to this Stack Overflow question 🙂

AWS S3 and its informative errors – 404, “NoSuchUpload”

I’m continuing with my exploration in the AWS world 🙂

For the last couple of days, I have been occasionally receiving the weird error “NoSuchUpload” when I try to either upload a part to an S3 multipart upload or try to complete the upload with a given UploadId.

S3 Multipart Upload is the way you upload really big files into S3.

As of today (2019 Jan 18), the docs indicate that you can upload a 5GB file with one call to their API, but for bigger files you’d need to split the file into parts and upload each one of them using the S3 Multipart Upload API.

Here’s how the multipart upload API works:

  • Call the s3.createMultipartUpload method to indicate that you will upload a file split into parts. Each part should be between 5MB and 5GB. Only the last part can be smaller than 5MB (which is useful if you don’t know exactly how big the file you’re exporting will be). The method returns an UploadId that you must use in order to add parts and complete the multipart upload.
  • (N times) Upload a part using s3.uploadPart, providing the body of the file part, the UploadId, the PartNumber, and the items you pass everywhere – the Bucket and the Key. Mind that PartNumber starts from 1 (for whatever reason). This method returns an ETag for your part that you must store.
  • Call the s3.completeMultipartUpload method to indicate that you’re done with the upload. One has to provide the UploadId, all the parts in the format { ETag, PartNumber }, and the regular Bucket and Key.

After completing these steps, a file with the given name (Key) should appear in the S3 bucket.
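The part-size rules above can be sketched with a small helper that plans the byte ranges for each part (a minimal sketch – the function name and shape are mine, not part of the AWS SDK):

```javascript
// Every part must be at least 5 MB, except the last one; PartNumber is 1-based.
const MIN_PART_SIZE = 5 * 1024 * 1024;

// Plan multipart-upload parts for a payload of `totalSize` bytes.
function planUploadParts(totalSize, partSize = MIN_PART_SIZE) {
  if (partSize < MIN_PART_SIZE) {
    throw new Error('Parts other than the last must be at least 5 MB');
  }
  const parts = [];
  for (let start = 0, partNumber = 1; start < totalSize; start += partSize, partNumber++) {
    parts.push({
      PartNumber: partNumber,                    // 1-based, as the API expects
      start,                                     // inclusive byte offset
      end: Math.min(start + partSize, totalSize) // exclusive byte offset
    });
  }
  return parts;
}
```

Each planned part would then be passed to s3.uploadPart (collecting the returned ETags), and the { ETag, PartNumber } pairs handed to s3.completeMultipartUpload.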

So my mistake here was that my DB exporter didn’t wait for all the async work to be done before calling the stream’s ‘end’. This caused writing after the ‘end’ of the stream and skipping a number of items.

The funny part was that instead of receiving an error about this common problem, the AWS API returned a 404 “NoSuchUpload”, although I could see the UploadId when listing the active uploads afterward.

Moral of the story:

Add unit tests to your code and verify that you can actually write something to a file before trying to upload it to the cloud. Also, try to provide useful error messages when designing an API.