Store attachments differently and more effectively.
You can use the enforce permission option.
I have that box checked...anything else that I need to do?
Wait I think it is working! Thank you!
Trying to make sure I set this up correctly:
I'm using CloudFlare, and I've successfully configured a domain, attach.mydomain.com, to resolve via CNAME to my S3 bucket.
In your instructions for configuring CloudFront here: https://xfrocks.com/other/articles/howto-xenforo-attachments-via-amazon-s3-and-cloudfront.60/
One of the blank fields is Domain Name, and you entered the CloudFront domain name in the example.
My Attachment Options do not have an option for entering a domain name for CloudFront. See attached. Does everything look OK? How can I tell that the CDN is working?
This is driving me crazy. I can't get uploaded images to show up in the Media Gallery. The files themselves are being uploaded into the bucket, but they aren't showing up in the Media Gallery itself. It seems it's not communicating with the server properly. I've tried turning off CloudFront, but no joy.
I disabled the addon and I'm able to add an image to a gallery. The moment I enable the addon:
IAM keys loaded
bucket name entered
local copy disabled
domain for bucket name entered (using CloudFlare)
The image will upload successfully into the S3 bucket and I can see it in bucket.mydomain.com/2017/01
But I cannot see it in my album. It won't display.
I'm getting no server log errors.
What is wrong?!
What's really frustrating is that I've read all your documentation. It appears the plugin has been updated so that some of the fields in the instructions no longer exist. I've even noticed that some of the steps other users took include options that have since been removed. You mention a config.php entry in your latest update, but there are ZERO instructions on how to configure it.
The only way I can get images into my media gallery is to turn off your plugin.
In your screenshot, I see you already entered the domain name? You even blacked it out...
Anyway, if you are using CloudFlare, do not follow the part about CloudFront; they are different services, FYI.
After uploading, do you receive any error message?
As I noted, none of the screenshots look like what I'm seeing in the options.
1. Are you saying that, if I'm using CloudFlare, I should leave the Domain Name field (the second field) blank?
2. If I only use CloudFlare for DNS and set my entry to pass through (so it's not orange), can I then pass through permissions and use CloudFront?
No. The file gets uploaded into S3 but it does not appear in my Xenforo forum. It displays a broken image link.
Incidentally, it's another example of where you have made changes to the addon but have not updated the instructions. Other respondents wrote about an option to preserve file names; that option is not in my screenshot above.
Also, in your release notes you wrote about updates to config.php, but you provided no instructions.
Attached are screenshots showing what it looks like after a file has been uploaded: it shows in S3 but not on the site.
Attached are the options I have. I re-enabled AWS S3 uploading and left the domain field blank. I also bypassed CloudFlare, so the cloud icon is grayed out for my attach.mydomain.com subdomain and CloudFlare is functioning only as DNS.
2. If you use CloudFront, you will need to pick another domain; you cannot use the same one as your bucket name, I believe.
The Maintain File Name option is not available for S3, only for FTP/external_data (because S3 can be set up to use the exact filename from the uploader).
A broken link without an error seems to indicate a config issue. Please send me the options screenshot without censoring, plus the broken link, via conversation so I can see what is wrong.
OK, in the hopes of helping others who may experience similar problems:
I have SSL configured for my main domain, https://www.mydomain.com.
I set up a bucket name that was a subdomain attach.mydomain.com and pointed a CNAME record to it in CloudFlare.
The issue is that, in most cases, you have to use Full SSL in CloudFlare for SSL to work properly. Since attach.mydomain.com did not have an SSL certificate loaded, the addon was uploading files to the S3 bucket, but the files were not reachable at https://attach.mydomain.com/2017/01/filename.jpg.
I resolved the issue by simply creating a bucket whose name is not a subdomain. This still allows me to use HTTPS, but it resolves to an AWS domain, and everything is fine.
If I wanted to use a subdomain, I would need to attach my own certificate to it, and that's more work than I wanted to do.
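The certificate problem above comes down to how TLS wildcards work: Amazon's default S3 endpoint serves a certificate for *.s3.amazonaws.com, and a wildcard covers exactly one DNS label, so any bucket name containing dots (like a subdomain) falls outside it. A minimal sketch of that matching rule, with illustrative hostnames:

```python
# Sketch: why a bucket name containing dots breaks HTTPS on the default
# S3 endpoint. The wildcard in a certificate like *.s3.amazonaws.com
# matches a single DNS label only, so extra dots in the bucket name
# push the hostname outside the certificate.

def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Simplified single-label wildcard match, as TLS clients perform it."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False  # '*' covers exactly one label, never several
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

cert_pattern = "*.s3.amazonaws.com"

# Bucket name without dots: hostname stays within the wildcard.
print(wildcard_matches(cert_pattern, "myattachbucket.s3.amazonaws.com"))       # True

# Bucket named after a subdomain: extra labels, certificate mismatch.
print(wildcard_matches(cert_pattern, "attach.mydomain.com.s3.amazonaws.com"))  # False
```

This is why a dot-free bucket name "just works" over HTTPS on the AWS domain, while a subdomain-style bucket needs its own certificate (or CloudFlare Full SSL in front of it).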
Thank you for reporting back
In the interest of clarity, I want to make sure users of this addon understand how to migrate their attachments to Amazon S3. The author gave the instructions below, but they were confusing to me, so I want to clarify them now that I understand what he meant.
In one place he wrote:
It's the first and second steps that were confusing to me.
Since I had already enabled S3 and CloudFront, I didn't understand what he meant by "change to use external data".
1. Go to Options > Attachments. If you've already selected Store in Amazon S3, you're going to have to choose Store File in External Data instead. Note: make sure you also select Maintain File Name. After you save, the system is set up to save attachments to the /data folder of your web root.
2. Go to Tools > Rebuild Caches and click Rebuild Now under Move Attachment Data. This moves files from internal_data to the data directory in the correct file format.
3. Copy all the folders from /data to your S3 bucket preserving all your path information. You can use a program like CloudBerry if you don't want to use AWS CLI.
After that, you can re-enable all your Amazon S3 options, then go back to Rebuild Caches and run the Storage Option tool (using the same S3 information) to make sure that all your attachments point to S3.
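As a third alternative for step 3 (besides AWS CLI and CloudBerry), here is a minimal boto3 sketch that walks the local /data directory and uploads each file with its relative path as the S3 key. The web root, bucket name, and whether keys should carry the "data/" prefix are assumptions; adjust them for your setup:

```python
# Sketch of step 3: copy everything under /data to the S3 bucket while
# preserving path information. WEB_ROOT and BUCKET below are hypothetical.
import os

WEB_ROOT = "/var/www/xenforo"   # hypothetical web root
BUCKET = "attach.mydomain.com"  # hypothetical bucket name

def local_to_s3_key(local_path: str, base_dir: str) -> str:
    """Map a local file under base_dir to an S3 key with the same relative path."""
    rel = os.path.relpath(local_path, base_dir)
    return rel.replace(os.sep, "/")  # S3 keys always use forward slashes

def sync_data_dir() -> None:
    import boto3  # requires the boto3 package and configured AWS credentials
    s3 = boto3.client("s3")
    data_dir = os.path.join(WEB_ROOT, "data")
    for root, _dirs, files in os.walk(data_dir):
        for name in files:
            path = os.path.join(root, name)
            # e.g. /var/www/xenforo/data/attachments/0/123.jpg -> data/attachments/0/123.jpg
            s3.upload_file(path, BUCKET, local_to_s3_key(path, WEB_ROOT))
```

Whichever tool you use, the important part is the same: the key inside the bucket must match the path XenForo expects, or the rebuilt URLs will 404.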
Hope this helps others.
I store my attachments on external FTP. How can I change attachment URLs from http to https?
Not dumb at all! You can update the attachment options to use https for new attachments.
Then use Admin > Rebuild Caches > Update Attachment Data Storage Options to change the URL for existing ones.
Looking at my xf_attachment_data table, I see that this addon is storing my S3 bucket name, access key id, and secret access key in bdattachmentstore_options for every single row. This is really wasteful for sites that store all their attachments in the same place, which I'd guess most do. It's also going to be bad news when we want to rotate our credentials, since every row in the table will need updating and we have a lot of attachments.
Is this how it's intended to work, or have I configured something wrong? Is there a way to just store the backend options just once, rather than once per attachment?
That is how it's intended to work. We know it's inefficient and may pose a security risk, but it's safer than relying on an option which might later be changed to something else and cause failures while deleting unused attachments, etc. We may support a new way to configure storage options in config.php soon (it will still store the S3 key in the db, but not the whole thing).
Wouldn't it be better to store backend options in a separate table like xf_bd_attachment_store, and then reference that table from xf_attachment_data by id?
I've hacked that into my local copy of the addon, and it's working well so far. Let me know if you're interested in the code (though I haven't yet figured out how to update xf_bd_attachment_store when settings are saved from XF admin; right now I'm just manipulating that table manually).
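For readers who want to see the shape of that normalization, here is a minimal in-memory sketch (not the addon's actual code; the table name xf_bd_attachment_store and field names are the hypothetical ones from the post above). Each distinct options set is stored once and referenced by id, so rotating credentials touches one record instead of every attachment row:

```python
# Sketch of the proposed normalization: deduplicate backend options into
# a store "table" and have attachments reference them by id. All names
# here are illustrative, not the addon's real schema.
import json

class AttachmentStoreRegistry:
    def __init__(self):
        self._by_key = {}  # canonical options JSON -> store id
        self._by_id = {}   # store id -> options dict

    def store_id_for(self, options: dict) -> int:
        """Return the id for this options set, inserting a new row if unseen."""
        key = json.dumps(options, sort_keys=True)
        if key not in self._by_key:
            new_id = len(self._by_id) + 1
            self._by_key[key] = new_id
            self._by_id[new_id] = dict(options)
        return self._by_key[key]

    def rotate_credentials(self, store_id: int, access_key: str, secret: str) -> None:
        """One update covers every attachment that references this store."""
        row = self._by_id[store_id]
        del self._by_key[json.dumps(row, sort_keys=True)]  # drop stale lookup key
        row.update(access_key_id=access_key, secret_access_key=secret)
        self._by_key[json.dumps(row, sort_keys=True)] = store_id

    def options(self, store_id: int) -> dict:
        return self._by_id[store_id]

registry = AttachmentStoreRegistry()
opts = {"bucket": "attach.mydomain.com", "access_key_id": "OLD_KEY", "secret_access_key": "OLD_SECRET"}
sid = registry.store_id_for(opts)
# Every attachment row now stores only `sid`; a key rotation is one call:
registry.rotate_credentials(sid, "NEW_KEY", "NEW_SECRET")
```

The trade-off the developer describes still applies: with a shared record, editing it affects every referencing attachment at once, which is exactly what makes rotation cheap and misconfiguration risky.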